IJIGSP Vol. 14, No. 4, Aug. 2022
The method of symmetric components is used in the analysis of disturbances (such as short circuits) and can be verified by computer simulation and measurement. It simplifies calculations by separating a three-phase asymmetric system into three symmetric systems and three single-phase schemes. This is particularly important for three-phase electrical networks with linear parameters and a single network frequency. The transition of quantities (EMFs, voltages and currents) from the asymmetric domain of a three-phase system to the symmetric domain is performed using transformation matrices. Expressions determined in the system of symmetric components are then superimposed on expressions corresponding to the conditions of the asymmetric system; this superposition is valid only if the electrical quantities are simple periodic (sinusoidal) functions.
The paper presents a new method, based on analysis with the method of symmetric components and diagnostic algorithms, for assessing the most common disturbances in power grids. An adapted part of the MATLAB package, psb.abc,part.mdl, was used for method verification, and the obtained results, in the form of diagrams and values of diagnostic functions arranged in tables, confirm the applicability of the proposed diagnostic algorithm for the analysis and assessment of steady states and disturbances in electrical networks. The proposed diagnostic algorithm enables the realization of the maximum number of diagnostic functions, on the basis of which a disturbance-diagnosis scheme can be built with classical diode elements or, more modernly, with microprocessor components.
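The abc-to-symmetric-components transition described above can be sketched with the standard Fortescue transformation; the matrix below is the textbook form of that transformation, not the paper's specific implementation:

```python
import numpy as np

# Fortescue transformation: a three-phase phasor set (Va, Vb, Vc) is
# decomposed into zero-, positive- and negative-sequence components.
a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

# A maps symmetric components to phase quantities; its inverse does the reverse.
A = np.array([[1, 1,    1],
              [1, a**2, a],
              [1, a,    a**2]])
A_inv = np.linalg.inv(A)

def to_symmetric_components(v_abc):
    """Return (zero, positive, negative) sequence phasors."""
    return A_inv @ np.asarray(v_abc, dtype=complex)

# A balanced three-phase set has only a positive-sequence component:
v_balanced = [1.0, a**2, a]   # Va = 1 at 0 deg, Vb at -120 deg, Vc at +120 deg
v0, v1, v2 = to_symmetric_components(v_balanced)
```

For a balanced set the zero- and negative-sequence components vanish, which is exactly why asymmetric disturbances show up as non-zero values in those components.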
To prevent medical data leakage to third parties, algorithm developers have enhanced and modified existing models and tightened cloud security through complex processes. This research uses the Playfair cipher and the K-Means clustering algorithm as a two-level encryption/decryption technique, together with Arnold's Cat Map, to secure medical images in the cloud. K-Means is used to segment images into pixel clusters, auto-encoders remove noise (de-noising), and a Random Forest regressor, a tree-based ensemble model, is used for classification. The study obtained CT-scan images as datasets from Kaggle and classifies the images into 'Non-Covid' and 'Covid' categories. The software used is Jupyter Notebook with Python, and the PSNR and MSE evaluation metrics are computed in Python. Across the training and testing datasets, a low MSE score (0) and a high PSNR score (60%) were obtained, indicating that the developed encryption/decryption model is a good fit that enhances cloud security and preserves digital medical images.
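The Arnold's Cat Map scrambling stage can be illustrated with a minimal sketch; the function name and iteration scheme here are illustrative, not the paper's code:

```python
import numpy as np

def arnold_cat_map(image, iterations=1):
    """Scramble a square image with Arnold's Cat Map:
    (x, y) -> (x + y, x + 2y) mod N.  The map is a bijection on the
    pixel grid, so it only rearranges pixels and is fully reversible."""
    n = image.shape[0]
    assert image.shape == (n, n), "the map is defined on square images"
    result = image.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(result)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = result[x, y]
        result = scrambled
    return result

img = np.arange(16).reshape(4, 4)
scrambled = arnold_cat_map(img, iterations=2)
```

Because the map is periodic, repeated application eventually restores the original image, which is what makes it usable as a reversible scrambling layer on top of the cipher.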
Recommender systems aid users in finding relevant items, products, or services, usually in an online setting. Collaborative filtering is the most popular approach for building recommender systems due to its superior performance. Several collaborative filtering methods have been developed; however, all of them suffer from the inherent problem of data sparsity. Covering Reduction Collaborative Filtering (CRCF) is a collaborative filtering method developed to solve this problem. A key feature of CRCF is its popular items extraction algorithm, which produces a list of the most-rated items; however, the algorithm fails on denser datasets because it allows any item to enter the list, and it does not consider the rating values of items when selecting popular items. These limitations make it produce less accurate recommendations. This research extends CRCF by developing a new popular items extraction algorithm that removes items with low modal ratings and also utilizes the rating values when selecting popular items. The newly developed algorithm is incorporated into CRCF, and the resulting method is called Improved Popular Items Extraction for Covering Reduction Collaborative Filtering (ICRCF). Experiments were conducted on the MovieLens-1M and MovieLens-10M datasets using precision, recall and F1-score as performance metrics. The results show that the new method, ICRCF, provides better recommendations than the base method CRCF on all performance metrics. Furthermore, the new method performs well at both higher and lower levels of sparsity.
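The modal-rating filter at the heart of the improved extraction step can be sketched as follows; the function name, thresholds and data layout are hypothetical, chosen only to illustrate the idea of dropping items whose most frequent rating is low:

```python
from statistics import mode

def extract_popular_items(ratings, min_count, min_modal_rating):
    """Hypothetical sketch: keep items that have at least `min_count`
    ratings AND whose most frequent (modal) rating is high enough,
    so that merely much-rated but poorly-rated items are excluded."""
    by_item = {}
    for user, item, rating in ratings:
        by_item.setdefault(item, []).append(rating)
    popular = []
    for item, rs in by_item.items():
        if len(rs) >= min_count and mode(rs) >= min_modal_rating:
            popular.append(item)
    return popular

ratings = [("u1", "A", 5), ("u2", "A", 5), ("u3", "A", 4),
           ("u1", "B", 2), ("u2", "B", 2), ("u3", "B", 5),
           ("u1", "C", 5)]
# "A" is kept (3 ratings, modal rating 5); "B" is rated often but its
# modal rating is 2; "C" has too few ratings.
popular = extract_popular_items(ratings, min_count=3, min_modal_rating=4)
```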
Failure modeling is an essential component of reliability engineering. Enhanced failure-rate modeling techniques are vital to the effective development of predictive and analytical methodologies, demonstration of the engineering procedure, and the allocation, design, and control of procedures. However, failure-rate modeling has not been given adequate treatment in the literature, and the need to investigate it with cutting-edge techniques cannot be overemphasized. This paper proposes and applies a joint support vector regression (SVR) and wavelet transform (WT) approach, termed WT-SVR, to training and learning the call failure rate in wireless network systems. The wavelet transform is accomplished using the wavelet compression sensing technique: the standardized call-failure-rate data first pass through a wavelet filtering transformation matrix; the transformed, filtered components are then separated and output in the compression phase; finally, these components are trained and evaluated using SVR based on statistical learning theory. The results reveal that the proposed WT-SVR learning method is far better than using the SVR method alone for call-rate prognostic analysis. As a case in point, WT-SVR attained STD values of 0.12, 0.21, 2.32, 0.22, 0.90, 0.81 and 0.34 on call-failure data estimation, compared with the basic SVR, which attained higher STD values of 0.45, 0.98, 0.99, 0.46, 1.44, 2.32 and 3.22, respectively.
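The wavelet filtering and compression stage of such a pipeline can be illustrated with a minimal Haar-wavelet sketch in NumPy; the paper's actual wavelet basis and compression-sensing details are not specified here, and the subsequent SVR stage would be fitted on the compressed coefficients (e.g. with scikit-learn's SVR):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform: split an even-length
    series into approximation (low-pass) and detail (high-pass) parts."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def compress(signal, threshold):
    """Compression step: zero out small detail coefficients, keeping
    only the dominant structure of the failure-rate series."""
    approx, detail = haar_dwt(signal)
    detail[np.abs(detail) < threshold] = 0.0
    return approx, detail

# A locally constant series has (near-)zero detail coefficients:
approx, detail = haar_dwt([1.0, 1.0, 2.0, 2.0])
a2, d2 = compress([1.0, 1.0, 2.0, 2.1], threshold=0.5)
```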
This paper presents natural-language text description of video content activities. The system analyzes the content of a video to identify the number of objects it contains, tracks the ongoing actions and activities, matches them against an action model, and on that basis generates a grammatically correct English text description. It uses two phases, training and testing. In training, a database of subject-verb-object assignments is maintained to extract image features; in testing, text descriptions are generated automatically from video content. The implemented system translates complex video content into text descriptions for videos of up to one minute in duration with three different objects considered. For evaluation, a standard YouTube database of 250 samples from 50 different domains is used. The overall system achieves an accuracy of 93%.
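The final subject-verb-object sentence-generation step can be sketched minimally; the template and helper names below are hypothetical, standing in for the paper's grammar stage:

```python
def article_for(noun):
    """Choose 'a'/'an' by the noun's first letter (simplified rule)."""
    return "an" if noun[0].lower() in "aeiou" else "a"

def describe(subject, verb, obj):
    """Turn a matched (subject, verb, object) triple into a simple
    present-continuous English sentence."""
    return (f"{article_for(subject).capitalize()} {subject} "
            f"is {verb} {article_for(obj)} {obj}.")

sentence = describe("dog", "chasing", "ball")
```

A real system would select the triple from the tracked action model rather than receive it directly, and would need a richer grammar than this one template.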
We propose a novel algorithm for tracking human faces in video sequences with different backgrounds. First, Eigen features and corner points are extracted from the detected faces. HOG (Histogram of Oriented Gradients) features are then computed at the corner points, and the Eigen and HOG features are combined. Using these combined features, a point tracker follows the faces across the frames of the video sequence. The proposed algorithm is tested on challenging video-sequence datasets with technical difficulties such as partial occlusion (e.g. moustache, beard, spectacles, helmet, headscarf), changes in expression, and variations in illumination and pose, and its performance is measured using standard metrics such as accuracy, precision, recall and specificity. Experimental results clearly indicate the robustness of the proposed algorithm on all the challenging video sequences.
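The HOG computation at the core of the method can be sketched for a single image cell; this is a simplified, unnormalized version of the standard descriptor, not the authors' implementation:

```python
import numpy as np

def hog_cell_histogram(patch, bins=9):
    """Minimal sketch of the HOG idea: a histogram of gradient
    orientations, weighted by gradient magnitude, over one cell."""
    patch = np.asarray(patch, dtype=float)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]    # central differences
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180),
                           weights=magnitude)
    return hist

# A vertical edge produces purely horizontal gradients (orientation 0):
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
hist = hog_cell_histogram(patch)
```

The full descriptor concatenates and block-normalizes such cell histograms, which is what gives HOG its robustness to illumination changes mentioned above.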