IJIGSP Vol. 7, No. 9, Aug. 2015
Texture is one of the most significant characteristics for retrieving visually similar patterns in remote sensing images. Traditional approaches to texture analysis are based on symbolic descriptions and statistical methods. This study proposes a new method to extract and classify texture patterns from multispectral Landsat TM satellite images using optimized clustering and probabilistic inference. After the images are preprocessed with Principal Component Analysis and decomposed into regions of interest, Gabor wavelets are computed for each region in the first component image to obtain texture feature vectors. An adapted k-means clustering algorithm with an optimized number of clusters and initial starting centers generates training and testing data for Bayes Point Machine classifiers. The classifiers may run in online mode for binary classification and in batch mode for multi-class classification. The experimental results show the effectiveness of the proposed classification method and its potential in other image texture pattern recognition applications.
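A minimal sketch (not the authors' code) of the feature step this abstract describes: a small Gabor filter bank turns each region into a vector of response statistics, and a plain k-means groups the vectors. The filter parameters and the deterministic farthest-first seeding are illustrative stand-ins for the paper's optimized cluster count and initial centers.

```python
import numpy as np

def gabor_kernel(size=11, sigma=3.0, theta=0.0, lam=8.0):
    """Real part of a Gabor filter at orientation theta (assumed parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d_valid(img, k):
    """Valid-mode 2-D convolution via sliding windows."""
    win = np.lib.stride_tricks.sliding_window_view(img, k.shape)
    return np.einsum('ijkl,kl->ij', win, k[::-1, ::-1])

def texture_features(region, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean and std of each filter response, concatenated into one vector."""
    feats = []
    for th in thetas:
        resp = convolve2d_valid(region, gabor_kernel(theta=th))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic farthest-first seeding."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c)**2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

Applied to two orthogonal stripe textures, the orientation-selective responses separate cleanly into two clusters.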
In this paper, we present a computer vision based technique to detect surface defects of citrus fruits. The method begins with background removal using the k-means clustering technique. Mean shift segmentation is used for fruit region segmentation, and candidate defects are detected using threshold-based segmentation. At this stage, it is very difficult to differentiate the stem-end from actual defects because of their similar appearance. We therefore propose a novel technique to differentiate the stem-end from actual defects based on shape features. We conducted experiments on our citrus data set captured in a controlled environment, and the results demonstrate that our technique outperforms existing techniques.
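One plausible shape feature for the stem-end versus defect decision described above is circularity (stem-end scars tend to be round, defects irregular). The rule and the 0.8 threshold below are assumptions for illustration, not the paper's actual feature set:

```python
import numpy as np

def circularity(mask):
    """4*pi*area / perimeter^2: near 1 for a round blob, lower for irregular ones."""
    mask = mask.astype(bool)
    area = mask.sum()
    padded = np.pad(mask, 1)
    # interior pixels have all four 4-neighbours set; the rest of the
    # foreground is the boundary, whose pixel count estimates the perimeter
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return 4 * np.pi * area / perimeter**2 if perimeter else 0.0

def is_stem_end(mask, thresh=0.8):
    """Assumed rule: near-circular candidate regions are stem-ends."""
    return circularity(mask) >= thresh
```

A digital disk scores well above the threshold while an elongated blob falls well below it.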
Previous work on identifying writers from handwritten letters has focused on Western languages and, to a lesser extent, Chinese and Hindi; Persian handwriting has received little study. Accordingly, this paper proposes a method to distinguish scanned Persian handwritten texts using image processing techniques. The proposed method assumes that samples of the writer's handwriting are available as separate letters. The system is trained on features extracted from these separate letters, and the trained system is then used to identify an individual's handwriting among otherwise indistinguishable handwritten texts. The characteristics of the proposed method include high training speed on large numbers of handwriting samples, independence from textual content, and reliance on visual features. Experiments on samples from 100 writers confirm that the proposed method performs very well on Persian handwritten text identification.
This paper introduces an algorithm for the problem of finding the longest common substring between two documents, known as the longest common substring (LCS) problem. The proposed algorithm is based on a convolution between the two sequences: the major sequence (X), represented as an array, and the minor sequence (Y), represented as a circular linked list. An array of linked lists is established, and a new node is created for each match between two substrings. If two or more matches at different locations in string Y share the same location in string X, the corresponding nodes form a single linked list. Accordingly, by the end of processing we obtain a group of linked lists whose nodes, arranged in a particular order, represent all possible matches between sequences X and Y. The algorithm was implemented and tested in C# on the Windows platform. The results show very good speedups and indicate that impressive improvements have been achieved.
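The match-chaining idea behind the paper's linked lists can be sketched compactly: each match between a position in X and the current position in Y extends the chain ending at the previous positions, and the longest chain is the answer. The paper's implementation is in C# with explicit node objects; this Python sketch uses dictionaries as a stand-in for the per-position chains:

```python
def longest_common_substring(X, Y):
    """Return the longest substring common to X and Y.

    prev[i] holds the length of the match chain ending at X[i] for the
    previous character of Y, mirroring the node-per-match chaining idea.
    """
    prev = {}
    best = (0, 0)  # (chain length, end index in X)
    for c in Y:
        cur = {}
        for i, d in enumerate(X):
            if d == c:
                cur[i] = prev.get(i - 1, 0) + 1  # extend the diagonal chain
                if cur[i] > best[0]:
                    best = (cur[i], i)
        prev = cur
    length, end = best
    return X[end - length + 1:end + 1]
```

For example, `longest_common_substring("banana", "ananas")` yields `"anana"`.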
Clinically, the ECG is a valuable tool for evaluating the electrical activity of the heart, assessing its function, and addressing associated problems. Among such applications, the relationship (correlation) between respiration and the electrocardiographic signal has attracted attention over the past decades. In this research, a Welch spectrum estimation approach is used for normalized cross-spectrum analysis between these two signals. The approach can be useful for diagnosing diseases such as pulmonary embolism, chronic lung disease, deep vein thrombosis, and other heart-related conditions from the coherence between the signals. It is applied here to human subjects whose annotated ECG and respiratory signals were sampled at 100 samples/second. The respiratory signals were taken from the chest (CRSP), abdominal (ARSP), and oronasal (NRSP) regions. The annotated signals for all four subjects discussed in this paper were obtained through a non-invasive test known medically as impedance phlebography or impedance plethysmography. 6000 samples of each signal were analyzed; the data were acquired from the PhysioNet recording database. The magnitude squared coherence (MSC) was chosen as the measure for this examination. The results show, first, that the mean of the MSCs decreases continuously for chest respiration, and second, that in three subjects the maximum coherence between the ECG and the corresponding respiratory signal occurs in the abdominal (ARSP) region (with a maximum value greater than 0.5).
Lastly, the same analysis was applied to the fourth subject's data, where, exceptionally, the coherence for all respiratory patterns showed a poor functional association (Coh^2 below 0.5) in the abdominal region (see Fig. 5); chronic lung disease is suggested as the reason, while the other two regions showed higher values (0.5 < coherence < 1). A coherence peak indicates that one physiological signal is synchronized with another of the same nature at a particular frequency, here within the 0-35 Hz band; a combined boxplot analysis of the three regions shows the maximum upper-quartile coherence in the abdominal region for the three healthy subjects, with the maximum peak value in the same region. This paper also offers a platform for addressing problems related to deep vein thrombosis, hypoxemia (oxygen saturation below 90%), and related diseases by estimating the coupling between saturated oxygen content (SO2) and respiratory patterns, so as to detect dysfunction clinically and monitor efficient heart function. The research thus represents a successful attempt to investigate the interaction between the power spectra of the ECG and respiratory signals. The work can be extended by adopting different methods, for example by defining a vector of maximal coherence values, which could be beneficial for detecting diseases such as sleep apnea on the basis of the minimum or maximum occurrence of peaks.
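The Welch-averaged magnitude squared coherence used in this study can be sketched from first principles: cross- and auto-spectra are averaged over overlapping windowed segments, and MSC = |Pxy|^2 / (Pxx * Pyy). The sampling rate (100 Hz) and record length (6000 samples) match the paper; the segment length and overlap below are assumptions:

```python
import numpy as np

def welch_cross(x, y, fs, nperseg=200, noverlap=100):
    """Welch estimate of the cross-spectrum of x and y (Hanning window)."""
    win = np.hanning(nperseg)
    step = nperseg - noverlap
    starts = range(0, len(x) - nperseg + 1, step)
    acc = np.zeros(nperseg // 2 + 1, dtype=complex)
    for s in starts:
        xs = x[s:s + nperseg] - x[s:s + nperseg].mean()
        ys = y[s:s + nperseg] - y[s:s + nperseg].mean()
        acc += np.fft.rfft(win * xs) * np.conj(np.fft.rfft(win * ys))
    return np.fft.rfftfreq(nperseg, 1.0 / fs), acc / len(starts)

def magnitude_squared_coherence(x, y, fs=100):
    """MSC = |Pxy|^2 / (Pxx * Pyy), bounded between 0 and 1."""
    f, pxy = welch_cross(x, y, fs)
    _, pxx = welch_cross(x, x, fs)
    _, pyy = welch_cross(y, y, fs)
    return f, np.abs(pxy)**2 / (pxx.real * pyy.real)
```

Two signals sharing a common oscillation buried in independent noise show coherence near 1 at the shared frequency and near 0 elsewhere, which is the effect the boxplot analysis summarizes per region.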
In the past few decades, face recognition has been a widely researched topic, since it is a robust means of authentication. Extracting features from face images during recognition is a very challenging task, so proper selection of appropriate feature extraction algorithms is vital. Many robust feature extraction techniques exist, but their proper selection and combination also plays a crucial role. In this study, 2D face recognition was achieved using a combination of local binary patterns (LBP), principal component analysis (PCA), and Support Vector Machines (SVM). PCA reduces multidimensional data to lower dimensions while retaining most of the information. LBP is mainly used to tackle problems arising from facial expressions: as the expression changes, its effect spreads across the rest of the face, and the intensity of the corresponding image pixels changes with it. This study therefore applies PCA and LBP to face images to increase the recognition rate, with SVM performing the classification. The hybrid approach of using LBP and PCA in conjunction increased the recognition rate (RR) and decreased the false match rate, making the method more suitable for real-time applications.
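The two feature stages named above can be sketched as follows: a basic 3x3 LBP turns each face image into a 256-bin code histogram, and PCA projects such histograms onto their principal axes. This is a generic sketch, not the paper's pipeline; the SVM that would consume these features is omitted:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: one 8-bit code per interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit  # neighbour >= centre sets the bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: the face feature vector."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

def pca_fit(X, n_components):
    """Principal axes via eigendecomposition of the sample covariance."""
    mu = X.mean(axis=0)
    cov = (X - mu).T @ (X - mu) / (len(X) - 1)
    w, V = np.linalg.eigh(cov)              # eigenvalues in ascending order
    return mu, V[:, ::-1][:, :n_components]  # keep the top components

def pca_transform(X, mu, V):
    return (X - mu) @ V
```

On a flat image every neighbour ties with the centre, so all codes are 255; on data lying along a line, one principal component preserves all the variance.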
In this paper, we propose a model for recognizing the sign language used by communication-impaired people. A novel method of extracting features from a video sequence of signs is proposed. Key frames are selected from given video shots of signs to reduce the computational complexity while retaining the information significant for recognition. A set of features is extracted from each key frame to capture the trajectory of hand movements made by the signer. The same sign made by different signers, or by the same signer at different instances, may vary. The concept of symbolic data, particularly interval-type data, is used to capture such variations and to represent signs efficiently in the knowledge base. A suitable similarity measure is explored for matching and recognizing signs. A database of signs made by communication-impaired people of the Mysore region was created, and extensive experiments on this database demonstrate the performance of the proposed approach.
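The interval-type symbolic representation can be illustrated simply: each feature of a sign is summarized as the [min, max] interval observed across training instances, absorbing signer-to-signer variation. The distance-decay similarity and the sign labels below are placeholders; the abstract does not specify the paper's actual measure:

```python
import numpy as np

def interval_model(samples):
    """Per-feature [min, max] intervals over a sign's training instances.

    samples: array of shape (n_instances, n_features).
    Returns an array of shape (n_features, 2).
    """
    return np.stack([samples.min(axis=0), samples.max(axis=0)], axis=1)

def interval_similarity(probe, intervals):
    """1 for features inside the interval, decaying with distance outside
    (an assumed measure standing in for the paper's)."""
    a, b = intervals[:, 0], intervals[:, 1]
    dist = np.maximum(np.maximum(a - probe, probe - b), 0.0)  # 0 inside
    return (1.0 / (1.0 + dist)).mean()

def recognize(probe, models):
    """Return the sign label whose interval model is most similar."""
    return max(models, key=lambda label: interval_similarity(probe, models[label]))
```

A probe falling inside every interval of one sign's model scores a perfect 1 and beats any competing sign.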
Salt and pepper noise is a type of impulse noise in which a certain number of black and white dots appear in the image. With intensities stored as 8-bit integers, there are 256 possible gray levels in the range 0-255, and salt and pepper noise takes either the minimum or the maximum intensity: a positive impulse appears as a white (salt) point with intensity 255, and a negative impulse appears as a black (pepper) point with intensity 0. Removing salt and pepper noise is not easy, especially when the noise density in the contaminated image is high and restoration of image quality is essential. Filters such as the MF, SMF, AMF, PSMF, DBA, DBUTMF, and MDBUTMF have been found useful for removing low, moderate, and high density salt and pepper noise. The purpose of this paper is to present these filters and then review the state of the art to enhance their performance and usefulness. The comparison shows that some of these filters are very effective at particular noise density levels, and applications are accordingly recommended for those situations based on the results of the investigations.
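The common idea behind the decision-based filters surveyed here (e.g. DBA, MDBUTMF) can be sketched in a few lines: only pixels at the extreme intensities 0 or 255 are treated as noise candidates and replaced by the median of the non-extreme pixels in their 3x3 neighbourhood, leaving uncorrupted pixels untouched. This is a generic illustration, not a reimplementation of any one surveyed filter:

```python
import numpy as np

def remove_salt_pepper(img):
    """Decision-based 3x3 median filter for salt and pepper noise."""
    out = img.astype(float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            if img[i, j] == 0 or img[i, j] == 255:  # noise candidate only
                win = img[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
                good = win[(win != 0) & (win != 255)]  # non-extreme neighbours
                out[i, j] = np.median(good) if good.size else np.median(win)
    return out
```

On an image with sparse impulses the filter restores the original values exactly; at high noise densities the `good` set can be empty, which is where the more elaborate unsymmetric-trimmed variants differ.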