IJIGSP Vol. 9, No. 8, Aug. 2017
Medical data are characterized by complexity, inaccuracy, heterogeneity, and the presence of hidden dependencies; their distributions are often unknown. Correlations between factors of disorders, including clinical data, time-series parameters, and patients' subjective assessments, are too complex to be fully comprehended by humans. This problem is especially important for the early detection of disorders, a task for which machine learning methods are well suited. Breathing disorders are a particular area of interest. In this paper, the author demonstrates the potential of computational intelligence tools for processing rhinologic data. Implementing supervised learning techniques improves the accuracy of disorder detection and reduces medical insurance expenses. The proposed intelligence-based approach makes it possible to process a variety of heterogeneous data in the medical domain. A combination of conventional and fractal features for rhinomanometric time series, together with hydrodynamic characteristics of the nasal breathing process, provides the best accuracy. The approach may be adapted to the detection of other breathing disorders.
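The abstract does not specify which fractal features are used; purely as an illustrative sketch, one widely used fractal feature for physiological time series is the Higuchi fractal dimension, which could complement conventional statistics of a rhinomanometric signal:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D time series (illustrative fractal
    feature; the paper's actual feature set is not stated in the abstract).

    Estimated as the slope of log L(k) versus log(1/k), where L(k) is the
    mean normalized curve length at subsampling scale k.
    """
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = x[m::k]                      # subsampled curve starting at m
            if len(pts) < 2:
                continue
            raw = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            # Higuchi normalization of the subsampled curve length
            lengths.append(raw * (n - 1) / ((len(pts) - 1) * k * k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
        log_inv_k.append(math.log(1.0 / k))
    # least-squares slope of log L(k) against log(1/k)
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len))
    den = sum((a - mx) ** 2 for a in log_inv_k)
    return num / den
```

A smooth, regular signal yields a dimension near 1, while a noisier, more irregular signal yields a value closer to 2; the gap between the two is what makes the measure useful as a discriminative feature.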
In this paper, a new Normalized Least Mean Square (NLMS) algorithm is proposed by modifying the Error-Data Normalized Step-Size (EDNSS) algorithm. The performance of the proposed algorithm is tested on nonstationary signals such as speech and the electroencephalogram (EEG). The simulations are carried out by adding stationary and nonstationary Gaussian noise to original speech taken from a standard IEEE sentence (SP23) of the NOIZEUS database and to EEG data from the EEG database at sccn.ucsd.edu. The outputs of the proposed and EDNSS algorithms are measured by excess mean square error (EMSE) in both stationary and nonstationary environments. The results show that the proposed algorithm improves on the EDNSS algorithm while maintaining the same convergence speed as other NLMS algorithms.
In the modern world, digital images play a vital role in a number of applications such as the medical field, aerospace and satellite imaging, and underwater imaging. These applications use and produce large numbers of digital images that must be stored and transmitted for various purposes, and compression is used to overcome this storage and transmission problem. The paper focuses on a compression technique known as Block Truncation Coding (BTC), which reduces the size of an image so that it occupies less memory and is easier to transmit; here, BTC is used to compress grayscale images. After compression, the Discrete Wavelet Transform (DWT) with spline interpolation is applied to reconstruct the images. The process is suggested in order to view the changed pixels of the two images after compression. The wavelets and interpolations provide enhanced compressed images through the proposed encoding and decoding steps. The performance of the proposed method is measured by PSNR; compared with the most common existing techniques, the proposed method outperforms them and provides 49% better results.
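As a minimal sketch of the core BTC step only (the DWT/spline reconstruction stage is not shown, and the block size and pixel values below are illustrative), each image block is quantized to a one-bit-per-pixel bitmap plus two levels chosen to preserve the block's mean and standard deviation:

```python
def btc_encode(block):
    """Encode one flattened grayscale block as (bitmap, low, high).

    The two output levels are chosen so that the decoded block keeps the
    original block's first two sample moments (mean and variance).
    """
    n = len(block)
    mean = sum(block) / n
    std = (sum((p - mean) ** 2 for p in block) / n) ** 0.5
    bitmap = [1 if p >= mean else 0 for p in block]
    q = sum(bitmap)                      # pixels at or above the mean
    if q in (0, n):                      # flat block: one level suffices
        return bitmap, mean, mean
    low = mean - std * (q / (n - q)) ** 0.5
    high = mean + std * ((n - q) / q) ** 0.5
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Reconstruct the block from the bitmap and the two levels."""
    return [high if bit else low for bit in bitmap]
```

Because the two levels preserve the first two moments exactly, the decoded block has the same mean and variance as the original, which is what keeps BTC's visual quality acceptable at roughly 2 bits per pixel for small blocks.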
A sampled signal can be properly reconstructed if the sampling rate satisfies the Nyquist criterion. If the Nyquist criterion is imposed on image and video processing applications, a large number of samples is produced, and the storage, processing, and transmission of such huge amounts of data become impractical. As an alternative, the Compressed Sensing (CS) concept was applied to reduce the sampling rate. Compressed sensing exploits signal sparsity, so signal acquisition in a transform domain can be carried out below the Nyquist rate. According to CS theory, a signal can be represented by non-adaptive linear projections that preserve the signal structure, and the signal can then be reconstructed by an optimization process. Signals can therefore be recovered from severely undersampled measurements by exploiting their inherent low-dimensional structure. Since Compressed Sensing requires a lower sampling rate for reconstruction, the data captured within a specified time are less than with the traditional method.
In this paper, three Compressed Sensing reconstruction algorithms, namely Orthogonal Matching Pursuit (OMP), Compressive Sampling Matching Pursuit (CoSaMP), and Normalized Iterative Hard Thresholding (NIHT), are reviewed and their performance is evaluated at different sparsity levels for image reconstruction.
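Of the three, OMP is the simplest to sketch. A minimal version (the measurement matrix, sizes, and sparsity below are illustrative, not the paper's test setup) greedily selects the dictionary column most correlated with the current residual and then re-fits the selected coefficients by least squares:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse vector x from y = A @ x by Orthogonal Matching
    Pursuit: greedily grow the support, then re-fit by least squares."""
    r = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat
```

With a random Gaussian measurement matrix and enough measurements relative to the sparsity level, the support is identified exactly and the least-squares step makes the recovery exact, which is the behavior the sparsity-level comparison in the paper probes.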
Automatic signature extraction from document images and signature-based retrieval have many applications in business offices, organizations, institutes, and digital libraries, and have therefore attracted many researchers in the field of document image analysis and processing. This paper proposes a method for automatic signature extraction and signature-based document image retrieval using multi-level discrete wavelet transform (DWT) features. Since distance measures play a vital role in pattern analysis, classification, and clustering, we also compare retrieval results using seven distance metrics: Euclidean, Canberra, city-block, Chebyshev, cosine, Hamming, and Jaccard. The results show that the city-block distance with multi-level DWT features outperforms the other six distance metrics.
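A one-dimensional sketch of this feature pipeline (the paper works on 2-D signature images, and the Haar wavelet, level count, and example values here are illustrative assumptions) computes multi-level DWT coefficients and compares feature vectors with the city-block distance:

```python
import math

def haar_dwt(signal, levels=1):
    """Multi-level 1-D Haar DWT (illustrative wavelet choice).

    Returns (approximation, [detail_1, detail_2, ...]); the coefficients
    at each level serve as compact features of the input.
    """
    approx = list(signal)
    details = []
    s = math.sqrt(2.0)
    for _ in range(levels):
        new_a, d = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            new_a.append((a + b) / s)   # low-pass (approximation) coefficient
            d.append((a - b) / s)       # high-pass (detail) coefficient
        details.append(d)
        approx = new_a
    return approx, details

def cityblock(u, v):
    """City-block (L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(u, v))
```

Retrieval then amounts to ranking database documents by the city-block distance between their stored DWT feature vectors and the query signature's features; swapping in the other six metrics gives the comparison reported in the paper.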
In this paper, a robust voice activity detection algorithm based on a long-term measure of dominant frequency and spectral flatness is proposed. The proposed algorithm uses the discriminating power of both features to derive the decision rule, which reduces the average number of speech detection errors. We evaluate its performance using 15 additive noise types at different SNRs (-10 dB to 10 dB) and compare it with some of the most recent standard algorithms. Experiments show that the proposed algorithm achieves the best average accuracy rate over all SNRs and noise types.
Object tracking has always been a hotspot in the field of computer vision and has myriad applications in the real world. A major problem in this field is the successful tracking of a moving object undergoing occlusion along its path. This paper presents a centroid-based scheme for tracking a moving object without any a priori information about its shape or motion. Once the boundary of the object of interest is obtained, the centroid is calculated from its first-order moments. The centroid is then used to detect partial occlusion of the test object by another still or moving object in the image frame. When occlusion is detected, the new centroid location of the moving object is predicted for subsequent video frames. The proposed algorithm successfully tracks moving objects undergoing partial or total occlusion. Experimental results are compared with the popular Mean Shift tracking algorithm.
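The two building blocks named above, the moment-based centroid and the prediction of its location under occlusion, can be sketched as follows; a constant-velocity prediction model is assumed here, since the abstract does not state the paper's prediction rule:

```python
def centroid(mask):
    """Centroid (cx, cy) of a binary object mask from its zeroth- and
    first-order moments m00, m10, m01."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1    # zeroth-order moment: object area
                m10 += x    # first-order moment in x
                m01 += y    # first-order moment in y
    return m10 / m00, m01 / m00

def predict_centroid(prev, curr):
    """Constant-velocity estimate of the next centroid (assumed model):
    extrapolate the displacement between the last two observed centroids."""
    return 2 * curr[0] - prev[0], 2 * curr[1] - prev[1]
```

During occlusion, the predicted centroid substitutes for the measured one until the object's boundary can again be segmented, at which point measured centroids resume.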