IJIGSP Vol. 9, No. 2, Feb. 2017
Cover page and Table of Contents: PDF (size: 186KB)
Image processing techniques for object tracking, identification, and classification have become common today as camera quality improves and prices keep falling. Cameras also make human analysis of video streams or images possible where it is difficult for robots, algorithms, or machines to deal with the images effectively. However, the use of cameras for basic tracking and analysis brings challenges such as sudden changes in illumination, shadows, occlusion, noise, and the high computational time and space complexities of algorithms. A typical image processing task may involve several subtasks, such as capturing and pre-processing, that demand high computational resources to complete. One of the main pre-processing tasks in image processing is image segmentation, which divides images into sections of interest so that analysis can be performed on them. Background subtraction is commonly used to segment images into background and foreground for further processing. Algorithms producing highly accurate results for this segmentation task normally demand high computation time or memory space, while algorithms that use less memory and complete the task in shorter time may suffer from limitations that lead to undesired results at some point. Poor outputs from algorithms will eventually lead to system failure, which must be avoided as much as possible. This paper proposes a median-based background updating algorithm which determines the median of a buffer containing highly correlated values. The algorithm achieves this by deleting an extreme value from the buffer whenever data is to be added to it. Experiments show that the method produces good results with less computational time, making it possible to implement on devices that do not have much computation resources. [...]
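The median-based buffer update described in the abstract can be sketched as follows. This is a minimal illustration only: the buffer length, the rule for choosing the "extreme" value (here, the sample farthest from the current median), and the foreground threshold are assumptions, not the paper's exact parameters.

```python
from statistics import median

def update_buffer(buf, new_val, max_len=9):
    # Hypothetical sketch: when the buffer is full, discard the extreme
    # value (the sample farthest from the current median) before
    # appending, so the stored samples stay highly correlated.
    if len(buf) >= max_len:
        m = median(buf)
        buf.remove(max(buf, key=lambda v: abs(v - m)))
    buf.append(new_val)
    # The running median serves as the background estimate for this pixel.
    return median(buf)

def is_foreground(pixel, background, threshold=25):
    # Classify the pixel as foreground if it deviates from the median
    # background estimate by more than the (assumed) threshold.
    return abs(pixel - background) > threshold
```

Because an outlier (e.g. a passing object's brief intensity spike) is the first candidate for removal, it never dominates the median, which is the intuition behind keeping the buffer correlated at low memory cost.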
Image segmentation plays a crucial role in image processing. Classical segmentation techniques based on thresholding have been used extensively, but they fail drastically for noisy or non-uniformly illuminated images. Several alternatives presented over time have filled this void, but at increased complexity. In this paper we present an algorithm that addresses the above issues with minimal complexity. We propose the normalized self-correlation function (NSCF), which forms the basis of the algorithm. We also introduce the relative error function (REF), which is used for qualitative assessment of the algorithm and its comparison with other algorithms. We further propose a second algorithm, named piecewise image segmentation (PIS), a generalized edge-based method able to generate any desired edge map. The results show that the proposed algorithms perform well in different scenarios and, at the same time, better than traditional algorithms. [...]
This paper presents a method for fetal heart rate estimation from an abdominal electrocardiogram (ECG) signal based on adaptive filter analysis using the least mean square (LMS) adaptive filtering algorithm, in order to determine the health status of a baby in its mother's womb. The fetal ECG signal is extracted from the abdominal ECG, which contains other sources of interference, using the maternal ECG signal obtained from the mother's chest cavity as the reference signal. The interference/noise models used in this work include power-line noise, white noise, and the unwanted propagating maternal ECG signal. Thereafter, the heart rate is estimated using an automated peak voltage measurement algorithm at a 75 percent threshold voltage. It is found that, irrespective of the estimated heart rate of the baby, 100 percent estimation is achieved at a signal-to-noise ratio (SNR) greater than or equal to -31 dB. [...]
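The two stages the abstract names, LMS cancellation of the maternal component followed by 75-percent-threshold peak counting, can be sketched as below. The filter order, step size, sampling rate, and peak-detection details are illustrative assumptions, not the paper's actual settings.

```python
def lms_filter(reference, primary, order=8, mu=0.01):
    """Hypothetical LMS sketch: adapt FIR weights so the filtered
    reference (maternal ECG) matches its component in the primary
    (abdominal) signal; the residual error is the fetal estimate."""
    w = [0.0] * order
    error = []
    for n in range(len(primary)):
        # Tap vector of the most recent `order` reference samples.
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wk * xk for wk, xk in zip(w, x))            # filter output
        e = primary[n] - y                                   # residual
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]   # LMS update
        error.append(e)
    return error

def estimate_heart_rate(signal, fs, frac=0.75):
    # Count local maxima above 75% of the peak voltage, then convert
    # the mean inter-peak interval to beats per minute.
    thr = frac * max(signal)
    peaks = [n for n in range(1, len(signal) - 1)
             if signal[n] >= thr
             and signal[n] > signal[n - 1]
             and signal[n] >= signal[n + 1]]
    if len(peaks) < 2:
        return 0.0
    mean_rr = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
    return 60.0 / mean_rr
```

On a synthetic abdominal signal, the residual of `lms_filter` shrinks as the weights converge on the maternal path, leaving the uncorrelated fetal component for the peak detector.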
Estimating the visual quality of a picture is a real challenge for various picture and video frame applications. The aim is to evaluate picture quality automatically, both subjectively (via the human visual framework) and objectively. Picture quality is evaluated by comparing the precision and closeness of a picture against a reference, error-free picture. Quality estimation can achieve consistency in the desired picture quality with the help of models of remarkable physiological and psycho-visual components and picture fidelity measures. In this article, picture quality is evaluated by analyzing the loss of picture information through the distortion system under differing noise models, and the relationship between picture data, visual quality, and error metrics is examined. Picture and video frame quality assessment matters because every human can judge the visual quality of a natural picture. Subjective picture quality is assessed using the structural similarity metric; objective picture quality is computed by root mean squared error, mean squared error, and peak signal-to-noise ratio; and the data content in the picture is weighted through entropy. [...]
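The metrics the abstract lists (MSE, PSNR, and structural similarity) can be illustrated on flattened pixel lists as below. Note one simplification: standard SSIM averages a locally windowed statistic, whereas this sketch computes a single global SSIM over the whole image; C1 and C2 are the usual stabilizing constants.

```python
import math

def mse(ref, dist):
    # Mean squared error between reference and distorted pixel lists.
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

def psnr(ref, dist, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    m = mse(ref, dist)
    return float('inf') if m == 0 else 10 * math.log10(peak ** 2 / m)

def ssim_global(ref, dist, peak=255.0):
    # Simplified global SSIM (the standard metric averages a local
    # windowed version); compares luminance, contrast, and structure.
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(dist) / n
    var_x = sum((r - mu_x) ** 2 for r in ref) / n
    var_y = sum((d - mu_y) ** 2 for d in dist) / n
    cov = sum((r - mu_x) * (d - mu_y) for r, d in zip(ref, dist)) / n
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

An identical pair yields MSE 0, infinite PSNR, and SSIM 1.0; as distortion grows, PSNR falls and SSIM drops toward 0, which is the contrast between pure error metrics and structure-aware ones that the article examines.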
Hand sign or gesture recognition is a way of communication for hearing- and speech-impaired people. Gestures are formed from motion or posture of the body but commonly originate from the hand and face. Speech and gestures are expressions; they are the communication medium between human beings. A hand gesture is a movement or motion of the human hand. Gesture recognition is the mathematical interpretation of the human hand by computing devices. Different sign languages are used all over the world, each with its own grammar structure. Even within India, every state has different languages, and sign languages differ slightly from region to region. Hand sign recognition is used for robot control applications and sign language interpretation. [...]
The local binary pattern (LBP) and local ternary pattern (LTP) are basically gray-scale invariant; they encode the binary/ternary relationship between the neighboring pixels and the central pixel based on their gray-level differences and derive a unique code. These traditional local patterns ignore directional information. The proposed method encodes the relationship between the central pixel and two of its neighboring pixels located at different angles (α, β) in different directions. To estimate the directional patterns, the present paper derives the variation in local direction patterns between two first-order derivatives and obtains a unique first-order local direction variation pattern (FO-LDVP) code. FO-LDVP evaluates the possible direction variation pattern for the central pixel by measuring the first-order derivative relationship among the horizontal and vertical neighbors (0° vs. 90°; 90° vs. 180°; 180° vs. 270°; 270° vs. 0°) and derives a unique code. The performance of the proposed method is compared with LBP, LTP, LBPv, TS, and CDTM using the benchmark texture databases Brodatz and MIT VisTex. The performance analysis shows the efficiency of the proposed method over the existing methods. [...]
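The baseline LBP operator that the abstract builds on can be sketched as follows; this is the classic 3x3 formulation (threshold the eight neighbors against the center and pack the bits), not the paper's proposed FO-LDVP, whose exact derivative pairing is specific to that work. The clockwise bit ordering chosen here is one common convention.

```python
def lbp_code(patch):
    """Hypothetical sketch of the classic 3x3 LBP operator: threshold
    the 8 neighbors against the central pixel and pack the results
    into one byte, giving a gray-scale-invariant texture code."""
    center = patch[1][1]
    # Neighbor order: clockwise starting from the top-left pixel.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:   # neighbor >= center contributes a 1
            code |= 1 << bit
    return code
```

Because only the sign of each neighbor-center difference is kept, adding a constant to every pixel leaves the code unchanged, which is the gray-scale invariance the abstract mentions; what the code discards, and FO-LDVP targets, is the direction in which those differences occur.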
This paper presents the design and implementation of a dedicated hardware (VLSI) architecture for real-time object tracking. In order to realize the complete system, the designed VLSI architecture has been integrated with different input/output video interfaces. These video interfaces, along with the designed object tracking VLSI architecture, have been coded in VHDL, simulated using ModelSim, and synthesized using the Xilinx ISE tool chain. A working prototype of the complete object tracking system has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA board. The implemented system is capable of tracking a moving target object in real time in a PAL (720×576) resolution live video stream coming directly from the camera. Additionally, the implemented system provides the real-time camera movement required to follow the tracked object over a larger area. [...]