Work place: Department of Computer Science and Engineering, National Institute of Technology Calicut, Kerala-673601, India
Research Interests: Data Compression, Medical Image Computing, Image Compression, Image Manipulation, Distributed Computing, Image Processing
Dr. V. K. Govindan is currently Professor in the Department of Computer Science and Engineering and Dean (Academic) at the National Institute of Technology Calicut, India. He received his Bachelor's and Master's degrees in Electrical Engineering from the National Institute of Technology (the erstwhile Regional Engineering College), Calicut, in 1975 and 1978, respectively, and was awarded a PhD for his work on character recognition by the Indian Institute of Science, Bangalore, in 1989. He has over 32 years of teaching experience as Lecturer (1979-87), Assistant Professor (1987-98), and Professor (1998 onwards), and was Head of the Department of Computer Science and Engineering from January 2000 to August 2005. His research interests include medical imaging, agent technology, biometrics-based authentication, data compression, and distributed computing. He has over 85 research publications in international journals and conferences, and has authored several books on operating systems and computer fundamentals. He has reviewed papers for many conferences and journals, including IEEE Transactions, and has evaluated several PhD theses.
DOI: https://doi.org/10.5815/ijigsp.2017.05.06, Pub. Date: 8 May 2017
Accurate recognition and tracking of human faces are indispensable in applications such as face recognition and forensics. The need to enhance low-resolution faces for such applications has gained increasing attention in recent years. To recognize faces from surveillance video footage, the images must be of a sufficiently recognizable size. Image super-resolution (SR) algorithms enlarge, or super-resolve, a captured low-resolution image into a high-resolution frame, thereby improving the visual quality of the image for recognition. This paper discusses some recent methodologies in face super-resolution (FSR), along with an analysis of their performance on benchmark databases. Learning-based methods are by far the most widely used; sparse representation, neighbor embedding, and Bayesian learning are all learning-based approaches. The review demonstrates that learning-based techniques generally provide better accuracy and performance, even though their computational requirements are high, and that neighbor embedding performs best among them. Future research focused on learning-based techniques, such as combining neighbor embedding with sparse representation, may lead to approaches with reduced complexity and better performance.
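To make the neighbor-embedding idea concrete, the following is a minimal illustrative sketch, not the method of any particular paper surveyed here: each low-resolution patch is approximated by its k nearest low-resolution patches in a training dictionary, and the corresponding high-resolution patches are combined with the same weights. Inverse-distance weights stand in for the least-squares embedding weights used in actual NE methods, and the function name and toy dictionary are hypothetical.

```python
import math

def neighbor_embed_sr(lr_patch, dictionary, k=2):
    """Estimate an HR patch from an LR patch via neighbor embedding.

    dictionary: list of (lr_patch, hr_patch) pairs (flat tuples of floats).
    Weights here are inverse-distance, a simplification of the
    least-squares embedding weights used in the literature.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # k nearest LR patches in the dictionary
    neighbors = sorted(dictionary, key=lambda p: dist(lr_patch, p[0]))[:k]

    # inverse-distance weights, normalized to sum to 1
    eps = 1e-9
    w = [1.0 / (dist(lr_patch, lp) + eps) for lp, _ in neighbors]
    total = sum(w)
    w = [x / total for x in w]

    # combine the corresponding HR patches with the same weights
    hr_len = len(neighbors[0][1])
    return tuple(sum(wi * hp[i] for wi, (_, hp) in zip(w, neighbors))
                 for i in range(hr_len))

# toy dictionary: 2-pixel LR patches paired with 4-pixel HR patches
train = [
    ((0.0, 0.0), (0.0, 0.0, 0.0, 0.0)),
    ((1.0, 1.0), (1.0, 1.0, 1.0, 1.0)),
    ((0.0, 1.0), (0.0, 0.25, 0.75, 1.0)),
]
hr = neighbor_embed_sr((0.1, 0.9), train, k=2)
```

The output patch has twice as many pixels per dimension as the input, which is the sense in which the image is "super-resolved"; real FSR methods operate on overlapping image patches and average the reconstructions.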
DOI: https://doi.org/10.5815/ijigsp.2016.12.05, Pub. Date: 8 Dec. 2016
Shadows are physical phenomena that appear on a surface when direct light from a source is unable to reach the surface because an object lies between the source and the surface. The formation of shadows and their various features has evolved into a topic of discussion among researchers. Though the presence of shadows can aid in understanding the scene model, it may impair the performance of applications such as object detection. Hence, removing shadows from videos and images is required for the faultless working of certain image processing tasks. This paper presents a survey of notable single-image shadow removal techniques available in the literature. For the purpose of the survey, the various shadow removal algorithms are classified into five categories: reintegration methods, relighting methods, patch-based methods, color transfer methods, and interactive methods. A comparative study of the qualitative and quantitative performance of these works is also included, and the pros and cons of the various approaches are highlighted. The survey concludes with the following observations: (i) shadow removal should be performed in real time, since it is usually a preprocessing task; (ii) the texture and color information of the regions underlying the shadow must be recovered; and (iii) there should be no hard transition between shadow and non-shadow regions after the shadows are removed.
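The relighting category can be sketched in a few lines: given a known binary shadow mask, scale the shadowed pixels by the ratio of the mean lit intensity to the mean shadow intensity. This is an illustrative simplification under strong assumptions (grayscale input, mask supplied rather than estimated); the function name and toy data are hypothetical.

```python
def remove_shadow(image, mask):
    """Relight a grayscale image given a binary shadow mask.

    image: 2-D list of intensities in [0, 255].
    mask:  2-D list, True where the pixel is in shadow.
    The shadow region is scaled by the lit/shadow mean-intensity
    ratio -- the core of simple relighting-based removal.
    """
    lit, shad = [], []
    for row, mrow in zip(image, mask):
        for v, m in zip(row, mrow):
            (shad if m else lit).append(v)
    if not shad or not lit:
        return [row[:] for row in image]
    ratio = (sum(lit) / len(lit)) / (sum(shad) / len(shad))
    return [[min(255.0, v * ratio) if m else float(v)
             for v, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

# toy image: lit region at 200, shadowed region at 100
img = [[200, 200, 100, 100],
       [200, 200, 100, 100]]
msk = [[False, False, True, True],
       [False, False, True, True]]
out = remove_shadow(img, msk)
```

Note that this naive per-region scaling leaves exactly the hard shadow/non-shadow transition that observation (iii) warns against; practical methods additionally feather or inpaint the boundary.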
DOI: https://doi.org/10.5815/ijigsp.2015.11.08, Pub. Date: 8 Oct. 2015
Ophthalmology is the study of the structures, functions, treatment, and disorders of the eye. Computer-aided analysis of retina images is still an open research area, and numerous efforts have been made to automate it. This paper reviews existing research on the detection of anatomical structures and lesions in the retina for the diagnosis of diabetic retinopathy (DR). The research on detecting anatomical structures is further divided into subcategories, namely vessel segmentation and vessel centerline extraction, optic disc segmentation and localization, and fovea/macula detection and extraction. Works in each category are reviewed, highlighting the techniques employed and comparing the performance figures obtained, and the shortcomings of the various approaches are brought out. The major observations are as follows. Most vessel detection algorithms fail to extract small, thin vessels with low contrast. It is difficult to detect vessels where close vessels merge, where small vessels are missing, at optic disc regions, and at regions of pathology. Machine learning based approaches for blood vessel tracing require long processing times. It is difficult to detect the optic disc radius or boundary with simple blood vessel tracing. Automatic detection of the fovea and extraction of the macular region are complicated by non-uniform illumination during imaging and by diseases of the eye. Techniques requiring prior knowledge add complexity. Most lesion detection algorithms underperform due to wide variations in the color of fundus images arising from variations in the degree of pigmentation and the presence of the choroid.
DOI: https://doi.org/10.5815/ijigsp.2015.07.06, Pub. Date: 8 Jun. 2015
Image reconstruction is the process of generating an image of an object from the signals captured by the scanning machine. Medical imaging is an interdisciplinary field combining physics, biology, mathematics, and computational sciences. This paper provides a complete overview of the image reconstruction process in MRI (Magnetic Resonance Imaging), reviewing its computational aspects. MRI is one of the most commonly used medical imaging techniques. The data collected by the MRI scanner for image reconstruction is called k-space data. Various algorithms reconstruct an image from k-space data, such as the Homodyne algorithm, the zero filling method, dictionary learning, and the projections onto convex sets method. The characteristics of k-space data and the MRI data collection technique are reviewed in detail, and the reconstruction algorithms are discussed along with their pros and cons. Modern magnetic resonance imaging techniques such as functional MRI and diffusion MRI are also introduced. The concepts of classical techniques such as expectation maximization, sensitivity encoding, and the level set method, and of recent techniques such as alternating minimization, signal modeling, and the sphere-shaped support vector machine, are also reviewed. It is observed that most of these techniques enhance the gradient encoding and reduce the scanning time. Classical algorithms produce an undesirable blurring effect when the degree of phase variation in partial k-space is high, whereas modern reconstruction algorithms such as dictionary learning, being iterative procedures, work well even with high phase variation.
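As a concrete illustration of the zero filling method, the 1-D sketch below fills unacquired k-space samples with zeros before applying an inverse transform. This is a simplification under stated assumptions: real MRI uses 2-D/3-D FFTs and particular sampling orders, and the function names and toy data here are illustrative only.

```python
import cmath

def dft(signal):
    """Naive forward DFT (stand-in for the scanner's acquisition)."""
    n = len(signal)
    return [sum(signal[x] * cmath.exp(-2j * cmath.pi * k * x / n)
                for x in range(n)) for k in range(n)]

def zero_fill(partial_kspace, full_size):
    """Pad partially sampled k-space with zeros up to full_size."""
    return list(partial_kspace) + [0j] * (full_size - len(partial_kspace))

def inverse_dft(kspace):
    """Naive 1-D inverse DFT (real systems use FFTs)."""
    n = len(kspace)
    return [sum(kspace[k] * cmath.exp(2j * cmath.pi * k * x / n)
                for k in range(n)) / n for x in range(n)]

# toy 1-D "image" with a sharp edge
edge = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
kspace = dft(edge)            # fully sampled k-space
partial = kspace[:5]          # suppose only 5 of 8 samples were acquired
recon = inverse_dft(zero_fill(partial, 8))
```

Because the discarded samples carry the edge's high-frequency content, the reconstruction is a smoothed version of the original signal, which is precisely the blurring behavior classical partial k-space methods exhibit when phase variation is high.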
DOI: https://doi.org/10.5815/ijigsp.2012.05.01, Pub. Date: 8 Jun. 2012
Image segmentation plays a crucial role in the effective understanding of digital images. The past few decades have seen hundreds of research contributions in this field; however, the search for a general-purpose segmentation algorithm that suits a variety of applications remains very much active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity primarily due to its ability to reflect global image properties. This paper critically reviews important existing graph-based segmentation methods, classifying the various algorithms within the framework of graph-based approaches. The four major categories employed for the review are: graph cut based methods, interactive methods, minimum spanning tree based methods, and pyramid based methods. The review not only reveals the pros of each method and category but also explores their limitations. In addition, it highlights the need for a database for benchmarking intensity-based algorithms and the need for further research on graph-based segmentation for automated real-time applications.
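To illustrate the minimum spanning tree category, the sketch below merges 4-connected pixels across low-weight edges in Kruskal order using a union-find structure. A fixed merge threshold replaces the adaptive criterion of actual MST-based methods (e.g., Felzenszwalb-Huttenlocher) as a deliberate simplification; the function name and toy image are hypothetical.

```python
def mst_segment(image, threshold):
    """Segment a 2-D intensity image by merging pixels across
    low-weight edges, in the spirit of MST-based methods.

    Edges connect 4-neighbors; weight = absolute intensity
    difference. A fixed threshold replaces the adaptive merging
    criterion used in the literature.
    """
    h, w = len(image), len(image[0])
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # build 4-neighbor edges, sorted by weight (Kruskal order)
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]),
                              y * w + x, y * w + x + 1))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]),
                              y * w + x, (y + 1) * w + x))
    edges.sort()

    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= threshold:
            parent[ra] = rb  # merge the two components

    # label map: one component id per pixel
    return [[find(y * w + x) for x in range(w)] for y in range(h)]

# toy image: dark left half, bright right half
img = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
labels = mst_segment(img, threshold=50)
```

On the toy image the dark and bright halves end up with two distinct labels, since the boundary edges (weight 190) exceed the threshold while all within-region edges (weight 0) are merged.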