Chandra Mohan Bhuma

Workplace: Bapatla Engineering College, Department of ECE, Bapatla, Andhra Pradesh, India

E-mail: chandrabhuma@gmail.com

Website: https://orcid.org/0000-0002-7566-4739

Research Interests: Computational Learning Theory

Biography

Chandra Mohan Bhuma received his B.Tech in Electronics and Communication Engineering (ECE), M.Tech in Microwave & Radar, and Doctoral Degree in Image Watermarking from JNTU, Hyderabad. He is currently working as a Professor at Bapatla Engineering College. His research interests include applications of machine learning and advanced deep learning.

Author Articles
A Novel Technique for Image Retrieval based on Concatenated Features Extracted from Big Dataset Pre-Trained CNNs

By Chandra Mohan Bhuma, Ramanjaneyulu Kongara

DOI: https://doi.org/10.5815/ijigsp.2023.02.01, Pub. Date: 8 Apr. 2023

Accessing semantically relevant data from a database is essential not only in commercial applications but also in medical imaging diagnosis. Representing the query image, and subsequently the dataset images, by their features is the key step in Content Based Image Retrieval (CBIR). Texture, shape and color are the features commonly used for this purpose. Features extracted from pre-trained Convolutional Neural Networks (CNNs) are used to improve the performance of CBIR methods. In this work, we explore recent state-of-the-art CNNs pre-trained on big datasets, known as Big Transfer networks. Features extracted from a Big Transfer network have higher discriminative power than the features of many other pre-trained CNNs. The idea behind the proposed work is to demonstrate the effectiveness of Big Transfer network features for image retrieval. Further, features extracted from Big Transfer networks are concatenated to improve the performance of the proposed method. Feature diversity supplemented with network diversity should ensure good discriminative power for image retrieval. This idea is supported by simulations on four datasets of varying size in terms of the number of images and classes. As the feature size increases with concatenation, we apply a dimensionality reduction algorithm, namely Principal Component Analysis. Several distance metrics are explored in this work. By properly choosing the pre-trained CNNs and the distance metric, it is possible to achieve higher mean average precisions. An ImageNet-21K pre-trained CNN and an Instagram pre-trained CNN are chosen in this work. Further, a network pre-trained on the ImageNet-21K dataset is superior to networks trained on the ImageNet-1K dataset, as it has seen more classes and a wider variety of images. This is demonstrated by applying our algorithm to four datasets, i.e., COREL-100, CALTECH-101, FLOWER-17 and COIL-100. Simulations are presented for various precisions (scopes) and distance metrics. Results are compared with existing algorithms, and the superiority of the proposed method in terms of mean Average Precision is shown.
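The following is a minimal sketch of the retrieval pipeline described in the abstract (feature concatenation, PCA, distance-based ranking, and mean average precision at a given scope), not the authors' exact implementation. It assumes the descriptors from the two backbones (e.g., an ImageNet-21K pre-trained Big Transfer model and an Instagram pre-trained CNN) have already been extracted; the function names, the number of PCA components, the cosine metric and the scope of 20 are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize
from scipy.spatial.distance import cdist


def concatenate_features(feats_a, feats_b):
    # L2-normalize each backbone's descriptors before concatenation so that
    # neither network dominates the combined feature vector.
    return np.hstack([normalize(feats_a), normalize(feats_b)])


def reduce_dimension(features, n_components=256):
    # PCA counters the growth in descriptor size caused by concatenation.
    pca = PCA(n_components=n_components)
    return pca.fit_transform(features), pca


def retrieve(query, gallery, metric="cosine", scope=20):
    # Rank gallery images by distance to the query descriptor and
    # return the indices of the top `scope` matches.
    dists = cdist(query[None, :], gallery, metric=metric)[0]
    return np.argsort(dists)[:scope]


def mean_average_precision(queries, query_labels, gallery, gallery_labels,
                           metric="cosine", scope=20):
    # Average precision at the given scope, averaged over all queries.
    aps = []
    for q, q_label in zip(queries, query_labels):
        ranked = retrieve(q, gallery, metric=metric, scope=scope)
        hits = (gallery_labels[ranked] == q_label).astype(float)
        if hits.sum() == 0:
            aps.append(0.0)
            continue
        precision_at_k = np.cumsum(hits) / (np.arange(scope) + 1)
        aps.append((precision_at_k * hits).sum() / hits.sum())
    return float(np.mean(aps))
```

Varying the `metric` argument (e.g., "cosine", "euclidean", "cityblock") corresponds to the distance-metric comparison reported in the paper, and the scope parameter plays the role of the precision scopes used in the simulations.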

Other Articles