International Journal of Intelligent Systems and Applications (IJISA)

IJISA Vol. 11, No. 5, May 2019

Cover page and Table of Contents: PDF (size: 188KB)

Table of Contents

REGULAR PAPERS

Optimization of Secondary Surveillance Radar Data Processing

By Oleksii O. Strelnytskyi, Iryna V. Svyd, Ivan I. Obod, Oleksandr S. Maltsev, Ganna E. Zavolodko

DOI: https://doi.org/10.5815/ijisa.2019.05.01, Pub. Date: 8 May 2019

Secondary surveillance radar (SSR) performs one of the main information-service functions for consumers of the airspace control system. To improve its quality, SSR information is processed using modern information technology. The use of a sequential procedure for processing surveillance-system data, built from functionally complete processing stages, made it possible to formalize the data-processing procedure. However, it also significantly limited, and in some cases excluded, opportunities for inter-stage optimization of data processing. This paper considers the synthesis and analysis of an SSR data-processing structure that makes it possible to jointly optimize signal processing and primary data processing, and thereby to improve the quality of data processing.

Optimizing Parameters of Automatic Speech Segmentation into Syllable Units

By Riksa Meidy Karim, Suyanto

DOI: https://doi.org/10.5815/ijisa.2019.05.02, Pub. Date: 8 May 2019

Automatic speech segmentation into syllables is an important task in modern syllable-based speech recognition. It is generally developed using a time-domain, energy-based feature and a static threshold to detect syllable boundaries. The main problem is that the fixed threshold must be defined exhaustively to achieve high generalized accuracy. In this paper, an optimization method is proposed to adaptively find the best threshold. It optimizes the parameters of syllable speech segmentation and exploits two post-processing methods: iterative-splitting and iterative-assimilation. The optimization is carried out using three independent genetic algorithms (GAs) for three processes: boundary detection, iterative-splitting, and iterative-assimilation. Testing on an Indonesian speech dataset of 110 utterances shows that the proposed iterative-splitting with optimum parameters reduces deletion errors more than the commonly used non-iterative-splitting. The optimized iterative-assimilation is capable of removing more insertions, without over-merging, than the common non-iterative-assimilation. The sequential combination of optimized iterative-splitting and optimized iterative-assimilation gives the highest accuracy with the lowest deletion and insertion errors.
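As a minimal sketch of the idea described above (not the authors' implementation): a syllable boundary is marked wherever a short-time energy contour dips below a threshold, and a tiny real-coded GA searches for the threshold that minimizes insertion plus deletion errors against a reference segmentation. All function names, GA settings, and the error-count fitness are illustrative assumptions.

```python
import random

def detect_boundaries(energy, threshold):
    """Mark a syllable boundary wherever the energy contour drops below threshold."""
    return [i for i in range(1, len(energy) - 1)
            if energy[i] < threshold and energy[i - 1] >= threshold]

def fitness(threshold, energy, true_boundaries):
    """Negative count of insertion + deletion errors versus a reference segmentation."""
    detected = set(detect_boundaries(energy, threshold))
    ref = set(true_boundaries)
    insertions = len(detected - ref)
    deletions = len(ref - detected)
    return -(insertions + deletions)

def ga_optimize_threshold(energy, true_boundaries, pop_size=20, generations=50, seed=0):
    """Tiny real-coded GA: elitist selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = min(energy), max(energy)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda t: fitness(t, energy, true_boundaries),
                        reverse=True)
        elite = scored[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # blend crossover plus small Gaussian mutation, clamped to range
            child = 0.5 * (a + b) + rng.gauss(0, 0.05 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=lambda t: fitness(t, energy, true_boundaries))
```

In the paper's setting, one such GA would run per process (boundary detection, iterative-splitting, iterative-assimilation), each with its own parameter vector rather than the single scalar threshold shown here.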

Accelerating Training of Deep Neural Networks on GPU using CUDA

By D.T.V. Dharmajee Rao, K.V. Ramana

DOI: https://doi.org/10.5815/ijisa.2019.05.03, Pub. Date: 8 May 2019

The development of fast and efficient training algorithms for Deep Neural Networks has been a subject of interest over the past few years, because the biggest drawback of Deep Neural Networks is their enormous computational cost and the large amount of time consumed in training their parameters. This has motivated several researchers to focus on recent advances in hardware architectures and parallel programming models for accelerating the training of Deep Neural Networks. We revisited the concepts and mechanisms of typical Deep Neural Network training algorithms such as the Backpropagation Algorithm and the Boltzmann Machine Algorithm and observed that matrix multiplication constitutes the major portion of the workload in the training process, because it is carried out a huge number of times during training. With the advent of many-core GPU technologies, matrix multiplication can be performed very efficiently in parallel, which greatly reduces training time compared with a few years ago. CUDA is one of the high-performance parallel programming models for exploiting the capabilities of modern many-core GPU systems. In this paper, we propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm with CUDA parallel matrix multiplication and test them on a many-core GPU system. We find that the proposed strategies achieve much faster training of Deep Neural Networks than the classic strategies.
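To see why matrix multiplication dominates the workload, consider one backpropagation step for a two-layer network: both the forward and the backward pass reduce to matrix products, and these are exactly the operations a CUDA kernel (or cuBLAS call) would offload to the GPU. The NumPy sketch below is illustrative only; the network shape, activation, and learning rate are assumptions, not the paper's configuration.

```python
import numpy as np

def backprop_step(X, y, W1, W2, lr=0.01):
    """One gradient-descent step for a 2-layer network.

    Every `@` below is a matrix multiplication -- the part of the
    workload that CUDA parallelizes on a many-core GPU.
    """
    # forward pass: two matrix multiplications
    h = np.tanh(X @ W1)
    out = h @ W2
    # backward pass: the gradients are again matrix multiplications
    err = out - y                    # gradient of 0.5 * sum((out - y)**2)
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h ** 2)  # chain rule through tanh
    dW1 = X.T @ dh
    return W1 - lr * dW1, W2 - lr * dW2
```

On a GPU, each of these products would be dispatched as a parallel kernel, which is why replacing the serial matmul with a CUDA one accelerates the whole training loop.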

Adaptive Finite-Time Convergence Fuzzy ARX-Laguerre System Estimation

By Farzin Piltan, Shahnaz TayebiHaghighi, Amirzubir Sahamijoo, Hossein Rashidi Bod, Somayeh Jowkar, Jong-Myon Kim

DOI: https://doi.org/10.5815/ijisa.2019.05.04, Pub. Date: 8 May 2019

Convergence speed for system identification and estimation is a popular topic in the kinematic and dynamic identification/estimation of robot-manipulator parameters. In this paper, adaptive fuzzy inverse dynamic system estimation is used to improve robust modeling, especially for a serial-link robot manipulator. The Lyapunov technique is used to analyze the convergence rate of the tracking error and to increase the accuracy of the parameter estimation. The performance of the robot estimation is evaluated, and the results show fast convergence of the proposed finite-time technique for a 6-DOF robot manipulator.

A Novel Text Representation Model to Categorize Text Documents using Convolution Neural Network

By M. B. Revanasiddappa, B. S. Harish

DOI: https://doi.org/10.5815/ijisa.2019.05.05, Pub. Date: 8 May 2019

This paper presents a novel text representation model called the Convolution Term Model (CTM) for effective text categorization. Representation plays a primary role in the process of text categorization. The proposed CTM is based on a Convolution Neural Network (CNN). The main advantage of the proposed text representation model is that it preserves semantic relationships and minimizes the feature-extraction burden. In the proposed model, a convolution filter is first applied to the word-embedding matrix. Since the resultant CTM matrix is of high dimension, feature selection methods are applied to reduce the CTM feature space. The selected CTM features are then fed into a classifier to categorize text documents. To assess the effectiveness of the proposed model, extensive experiments are carried out on four standard benchmark datasets, viz. 20-NewsGroups, Reuters-21578, Vehicle Wikipedia and the 4 University dataset, using five different classifiers. Accuracy is used to measure the performance of the classifiers. The proposed model shows impressive results with all classifiers.
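The pipeline described above — convolve filters over the word-embedding matrix, then select a subset of the resulting features — can be sketched as follows. This is an illustrative reading of the abstract, not the authors' exact CTM architecture; the max-pooling step and variance-based feature selection are assumptions.

```python
import numpy as np

def convolution_term_features(embeddings, filters, window=3):
    """Slide each filter over consecutive word-embedding rows and max-pool.

    embeddings: (n_words, dim) matrix of word vectors for one document.
    filters: list of (window, dim) convolution filters.
    Returns one feature per filter, analogous to a CTM feature vector.
    """
    n_words, _ = embeddings.shape
    feats = []
    for f in filters:
        responses = [np.sum(embeddings[i:i + window] * f)
                     for i in range(n_words - window + 1)]
        feats.append(max(responses))   # max-pooling over word positions
    return np.array(feats)

def select_top_variance(feature_matrix, k):
    """Simple stand-in for feature selection: keep the k highest-variance columns."""
    idx = np.argsort(feature_matrix.var(axis=0))[::-1][:k]
    return feature_matrix[:, np.sort(idx)]
```

Stacking the per-document feature vectors gives the high-dimensional CTM matrix; the selection step then reduces it before the vectors are handed to any of the five classifiers.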

An Application-oriented Review of Deep Learning in Recommender Systems

By Jyoti Shokeen, Chhavi Rana

DOI: https://doi.org/10.5815/ijisa.2019.05.06, Pub. Date: 8 May 2019

The development of technology has gifted us a huge set of alternatives. In the modern era, it is difficult to select relevant items and information from the large amount of available data. Recommender systems have proved helpful in choosing relevant items. Several algorithms for recommender systems have been proposed in previous years, but recommender systems implementing these algorithms suffer from various challenges. Deep learning has proved successful in speech recognition, image processing and object detection. In recent years, deep learning has also proved effective in handling information overload and in recommending items. This paper gives a brief overview of various deep learning techniques and their implementation in recommender systems for various applications. The increasing research in recommender systems using deep learning demonstrates the success of deep learning techniques over traditional recommender-system methods.
