International Journal of Image, Graphics and Signal Processing (IJIGSP)

IJIGSP Vol. 13, No. 1, Feb. 2021


Table Of Contents

REGULAR PAPERS

Central Moment and Multinomial Based Sub Image Clipped Histogram Equalization for Image Enhancement

By Kuldip Acharya, Dibyendu Ghoshal

DOI: https://doi.org/10.5815/ijigsp.2021.01.01, Pub. Date: 8 Feb. 2021

The visual appearance of a digital image can be improved by an enhancement algorithm that reduces noise and improves color, brightness, and contrast for further analysis. This paper introduces such an algorithm. The image histogram is first processed with a multinomial curve-fitting function, which reduces the pixel count at each intensity value by minimizing the sum of squared residuals; the fitted data are then resampled to smooth them. A histogram clipping threshold, computed from the central moment of the resampled values, restricts the rate of over-enhancement. The histogram is then divided into two equal sub-histograms, each sub-histogram is equalized by a transfer function, and the resulting sub-images are merged into one output image. This image is further improved by reducing the environmental haze effect with Matlab's imreducehaze method, which yields the final output image. Matlab simulation results demonstrate that the proposed method outperforms the compared methods in both quantitative and qualitative evaluation on the colorfulness-based PCQI (C-PCQI) and blind image quality measure of enhanced images (BIQME) metrics.
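The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact method: the polynomial degree, the choice of mean plus one standard deviation (a second-central-moment quantity) as the clipping threshold, and the median split point are all assumptions.

```python
import numpy as np

def clipped_bi_histogram_equalization(img, poly_degree=4):
    """Sketch of sub-image clipped histogram equalization on a
    uint8 grayscale image. Parameter choices are illustrative."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))

    # Fit a polynomial (multinomial) curve to the histogram by least
    # squares, then evaluate it to obtain smoothed, resampled counts.
    x = np.arange(256)
    coeffs = np.polyfit(x, hist, poly_degree)
    smoothed = np.clip(np.polyval(coeffs, x), 0, None)

    # Clipping threshold from a central moment of the smoothed counts
    # (assumed here: mean + one standard deviation).
    mu = smoothed.mean()
    sigma = np.sqrt(((smoothed - mu) ** 2).mean())
    clipped = np.minimum(smoothed, mu + sigma)

    # Split into two sub-histograms and equalize each independently,
    # mapping each half back into its own intensity range.
    m = int(np.median(img))
    out = img.copy()
    for lo, hi in ((0, m), (m + 1, 255)):
        if lo > hi:
            continue
        seg = clipped[lo:hi + 1]
        cdf = np.cumsum(seg)
        if cdf[-1] == 0:
            continue
        lut = lo + (hi - lo) * cdf / cdf[-1]
        mask = (img >= lo) & (img <= hi)
        out[mask] = lut[img[mask] - lo].astype(img.dtype)
    return out
```

Equalizing the two halves separately (rather than the full histogram at once) preserves the image's mean brightness better, which is the usual motivation for bi-histogram methods.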

A New Framework for Video-based Frequent Iris Movement Analysis towards Anomaly Observer Detection

By Md. Minhaz Ur Rahman, Mahmudul Hasan Robin, Abu Mohammad Taief

DOI: https://doi.org/10.5815/ijigsp.2021.01.02, Pub. Date: 8 Feb. 2021

This paper proposes a new framework for detecting abnormal behavior based on frequent iris movement, contributing to a decision on whether an individual in a video is suspicious or not. One key component of questionable-observer detection is recognizing specific suspicious activities. Movements of various parts of the body and other human behaviors may indicate suspicion; in this research we consider the movement of the human eyes. This field is also a significant aspect of machine vision and artificial intelligence, and a large part of understanding human behavior. The framework comprises three parts. First, a Multi-task Cascaded Convolutional Networks (MTCNN) classifier detects the eyes. Second, irises are located in the eye images using the Circular Hough Transform (CHT). Finally, the average distance of iris movement is computed from the eye images with a new morphological method called TRM, using properties of the iris movement. When the particular phenomenon of frequent iris movement is observed, the person is flagged as behaving abnormally and referred to as a suspicious observer. To validate the work, we created a dataset of 100 videos in which 30 individuals volunteered; each video comprises 200 frames with a duration of 6-10 seconds. We reached an accuracy of 94% in detecting frequent iris movement. The goal is to reduce investigators' burden so they can focus in more depth on a small range of cases. This research's sole purpose is to indicate a person's anomalous behavior on the basis of frequent iris movement, and it outperforms much of the current literature on abnormal iris movement and suspicious-observer identification.
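The final stage above reduces to measuring how far the iris centre travels between consecutive frames. A minimal sketch of that step, assuming the MTCNN eye detection and CHT iris localization have already produced a per-frame list of iris centres (the decision threshold is a hypothetical value, not the paper's):

```python
import numpy as np

def average_iris_movement(centers):
    """Average per-frame displacement of the iris centre, given a
    sequence of (x, y) positions, one per video frame."""
    pts = np.asarray(centers, dtype=float)
    if len(pts) < 2:
        return 0.0
    # Euclidean distance between consecutive iris positions.
    deltas = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(deltas.mean())

def is_frequent_mover(centers, threshold=3.0):
    """Flag 'frequent iris movement' when the average displacement
    exceeds a pixel threshold (the threshold is an assumption)."""
    return average_iris_movement(centers) > threshold
```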

Fluid Temperature Detection Based on its Sound with a Deep Learning Approach

By Arshia Foroozan Yazdani, Ali Bozorgi Mehr, Iman Showkatyan, Amin Hashemi, Mohsen Kakavand

DOI: https://doi.org/10.5815/ijigsp.2021.01.03, Pub. Date: 8 Feb. 2021

The present study, whose main idea was based on one of the questions of the I.P.T. 2018 competition, aimed to develop a high-precision relationship between a fluid's temperature and the sound it produces when colliding with different surfaces, by creating a data-collection tool. The paper approaches this traditionally phenomenological project with well-known deep neural networks in order to achieve acceptable accuracy. The data were analyzed in two ways:
I. Using spectrogram images of the data and the well-known V.G.G.16 network.
II. Applying the raw audio signal to a convolutional neural network (C.N.N.).
Both methods obtained an acceptable precision above 85%.

Pedestrian Detection in Thermal Images Using Deep Saliency Map and Instance Segmentation

By A. K. M. Fahim Rahman, Mostofa Rakib Raihan, S.M. Mohidul Islam

DOI: https://doi.org/10.5815/ijigsp.2021.01.04, Pub. Date: 8 Feb. 2021

Pedestrian detection is an established computer vision task. Detection from color images achieves robust performance, but at night or in poor lighting its accuracy drops. Thermal images are therefore used for detecting people at night, in foggy weather, or in other bad lighting situations where color images offer lower visibility. In the daytime, however, when the surroundings are as warm as or warmer than pedestrians, thermal images in turn have lower accuracy. A thermal-color image pair can be a solution, but capturing such pairs is expensive, and misaligned imagery causes low detection accuracy. We propose a network that achieves better accuracy by extending prior work, which introduced saliency maps into pedestrian detection from thermal images, to instance-level segmentation. We work on a subdivision of the KAIST Multispectral Pedestrian Detection Dataset [8] that has pixel-level annotations. We trained Mask R-CNN for the pedestrian detection task and report the added effect of saliency maps generated using PiCA-Net. We achieved an accuracy of 88.14% on day images and 91.84% on night images, reducing the miss rate by 24.1% and 23% respectively over the existing state-of-the-art method.
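One common way to hand a saliency cue to an RGB-pretrained detector such as Mask R-CNN is to pack the thermal frame and its saliency map into a 3-channel input. The sketch below illustrates that idea; the exact fusion the paper uses with PiCA-Net saliency may differ, and the product channel is an assumption.

```python
import numpy as np

def fuse_thermal_saliency(thermal, saliency):
    """Stack a thermal frame and its saliency map into a 3-channel
    array suitable as input to a 3-channel detector backbone."""
    t = thermal.astype(np.float32)
    t = (t - t.min()) / max(np.ptp(t), 1e-8)   # normalize to [0, 1]
    s = saliency.astype(np.float32)
    s = (s - s.min()) / max(np.ptp(s), 1e-8)
    # Channels: thermal, saliency, and their element-wise product,
    # which emphasizes warm regions that are also salient.
    return np.stack([t, s, t * s], axis=-1)
```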

Emotion Recognition from Faces Using Effective Features Extraction Method

By Htwe Pa Pa Win, Phyo Thu Thu Khine, Zon Nyein Nway

DOI: https://doi.org/10.5815/ijigsp.2021.01.05, Pub. Date: 8 Feb. 2021

With the rapid development of applications built on Artificial Intelligence (AI) technologies, research on human-computer interaction remains active, and the emotional state of the user is essential in most such environments. Facial Emotion Recognition (FER) is one of the important visual information providers for AI systems. This paper proposes a FER system using an effective feature extraction methodology and classification technology. Local features of the face are the more effective features for recognition, and the Scale Invariant Feature Transform (SIFT) gives a good representation of the face. The bag of visual words (BOVW) is a good encoding method, and its refinement, the Vector of Locally Aggregated Descriptors (VLAD), provides a better encoder for SIFT features; these benefits are exploited in the feature-extraction stage. The strength of SVMs on unknown-class recognition problems is used for classification. The system uses the standard JAFFE dataset to measure the success of the proposed methods and to compare with other systems. The proposed system achieves better results than several previous systems because of its combination of effective feature extraction and encoding.
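The VLAD encoding step described above aggregates the residuals between local descriptors and their nearest visual words. A minimal sketch, assuming SIFT descriptors and a k-means vocabulary have already been computed (vocabulary size and the power/L2 normalization are standard choices, not necessarily the paper's):

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """VLAD encoding of local (e.g. SIFT) descriptors against a
    visual vocabulary, producing one fixed-length image vector."""
    k, d = centroids.shape
    # Assign each descriptor to its nearest centroid.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None], axis=2)
    assign = dists.argmin(axis=1)
    # Accumulate residuals (descriptor - centroid) per cluster.
    vlad = np.zeros((k, d))
    for i, c in enumerate(assign):
        vlad[c] += descriptors[i] - centroids[c]
    # Power normalization followed by global L2 normalization.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return (vlad / norm).ravel() if norm > 0 else vlad.ravel()
```

The resulting k*d-dimensional vectors are what an SVM would then be trained on, one vector per face image.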
