International Journal of Intelligent Systems and Applications (IJISA)

IJISA Vol. 7, No. 11, Oct. 2015

Cover page and Table of Contents: PDF (size: 190KB)

Table of Contents

REGULAR PAPERS

Scalar Diagnostics of the Inertial Measurement Unit

By Vadim V. Avrutov

DOI: https://doi.org/10.5815/ijisa.2015.11.01, Pub. Date: 8 Oct. 2015

The scalar method for fault diagnosis of the inertial measurement unit (IMU), the core component of every inertial navigation system, is described. The scalar calibration method forms the basis of this scalar approach to quality monitoring and diagnostics, and the fault diagnosis algorithms are developed in accordance with it. The algorithms are verified through quality monitoring of the IMU: a failed element is identified during verification of the diagnostic algorithm, after which the cause of the failure is established. The verification process compares the calculated estimates of sensor biases, scale factor errors, and misalignment angles against the data sheet values stored in the internal memory of the navigation computer. From this comparison, a conclusion about the working capacity of each IMU sensor can be drawn and the failed sensor can be determined.
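
A minimal sketch of the scalar principle behind such checks (illustrative only; the gravity reference and tolerance below are assumed values, not figures from the paper): in a static test, the norm of the accelerometer triad's output should equal the local gravity magnitude regardless of IMU orientation, so an out-of-tolerance residual flags a faulty unit.

```python
import numpy as np

G = 9.80665        # assumed local gravity magnitude, m/s^2
TOLERANCE = 0.02   # illustrative relative threshold on the norm residual

def scalar_check(accel_xyz):
    """Scalar quality check: whatever the IMU orientation, the norm of a
    static specific-force measurement should equal gravity; a large
    residual indicates a faulty accelerometer triad."""
    residual = abs(np.linalg.norm(accel_xyz) - G) / G
    return residual <= TOLERANCE, residual

print(scalar_check(np.array([0.12, -0.05, 9.805])))  # healthy unit: passes
print(scalar_check(np.array([0.12, -0.05, 10.90])))  # biased Z axis: fails
```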

The Method of Measuring the Integration Degree of Countries on the Basis of International Relations

By Rasim M. Alguliyev, Ramiz M. Aliguliyev, Gulnara Ch. Nabibayova

DOI: https://doi.org/10.5815/ijisa.2015.11.02, Pub. Date: 8 Oct. 2015

The paper studies the concept of integration, the integration of countries, the basic characteristics of country integration, and integration indicators of countries. The number of contacts between countries and the number of contracts signed between them are offered as indicators for determining the degree of integration of countries. An approach to designing a data warehouse for a decision support system in the field of foreign policy using OLAP technology is offered. A polycubic OLAP model, in which each cube is built on a separate data mart, is designed, and the differences between a data warehouse and a data mart are given. It is shown that one of the cubes of this model provides full information about the chosen indicators, including their aggregation over various parameters. A method for measuring the degree of integration of countries, based on the calculation of weight coefficients, is proposed. In this regard, the information model of the relevant subsystem is described using graph theory, the practical application of the method is demonstrated, and the software used is presented.
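
As a rough illustration of scoring integration from the two proposed indicators (the weight coefficients and example counts below are assumptions for demonstration, not the paper's computed values):

```python
def integration_degree(contacts, contracts, w_contacts=0.4, w_contracts=0.6):
    """Integration degree of a country pair as a weighted sum of the two
    indicators from the abstract: contact count and signed-contract count."""
    return w_contacts * contacts + w_contracts * contracts

# Pairwise degrees of one country against two partners (toy counts).
partners = {"A": (120, 15), "B": (45, 30)}
print({p: integration_degree(c, k) for p, (c, k) in partners.items()})
```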

Assessing Different Crossover Operators for Travelling Salesman Problem

By Imtiaz Hussain Khan

DOI: https://doi.org/10.5815/ijisa.2015.11.03, Pub. Date: 8 Oct. 2015

Many crossover operators have been proposed in the literature on evolutionary algorithms; however, it is still unclear which crossover operator works best for a given optimization problem. In this study, eight crossover operators designed for the travelling salesman problem, namely Two-Point Crossover, Partially Mapped Crossover, Cycle Crossover, Shuffle Crossover, Edge Recombination Crossover, Uniform Order-based Crossover, Sub-tour Exchange Crossover, and Sequential Constructive Crossover, are evaluated empirically. The selected crossover operators were implemented in an experimental setup upon which simulations were run. Four benchmark instances of the travelling salesman problem, two symmetric (ST70 and TSP225) and two asymmetric (FTV100 and FTV170), were used to thoroughly assess the operators, and their performance was analyzed in terms of solution quality and computational cost. Sequential Constructive Crossover was found to outperform the other operators in attaining 'good' quality solutions, whereas Two-Point Crossover outperformed the others in terms of computational cost. It was also observed that all operators perform much better, in both solution quality and computational cost, for a relatively small number of cities; for a relatively large number of cities their performance degrades considerably.
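
For reference, a minimal sketch of one of the compared operators, Partially Mapped Crossover, on permutation-encoded tours (a textbook formulation, not the paper's implementation):

```python
import random

def pmx(parent1, parent2):
    """Partially Mapped Crossover: copy a segment from parent1, then place
    parent2's conflicting genes via the segment's position mapping."""
    size = len(parent1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = parent1[a:b + 1]            # inherited segment
    for i in range(a, b + 1):
        gene = parent2[i]
        if gene not in child:                    # resolve the mapping chain
            pos = i
            while a <= pos <= b:
                pos = parent2.index(parent1[pos])
            child[pos] = gene
    return [parent2[i] if g is None else g for i, g in enumerate(child)]

random.seed(1)
print(pmx([1, 2, 3, 4, 5, 6, 7], [3, 7, 5, 1, 6, 2, 4]))  # a valid tour
```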

A Solution for Android Device Misplacement through Bluetooth-enabled Technology

By Kaven Raj SO Manoharan, Siew-Chin Chong, Kuok-Kwee Wee

DOI: https://doi.org/10.5815/ijisa.2015.11.04, Pub. Date: 8 Oct. 2015

The number of smartphone users and mobile application offerings is growing rapidly. The mobile device is currently considered the most powerful and most needed device of the modern century, and new mobile applications are developed every day, each targeting the compatibility requirements of a particular smartphone model and its specifications. The goal of this project is to develop a user-friendly, self-help Android application named “Dont Forget Me” that addresses the problem of misplaced or lost smartphones. This missing-phone prevention alert application pairs with another device over a Bluetooth connection; the connection is maintained so that the user neither forgets to bring the device along nor has it stolen unnoticed. If the Bluetooth connection between the paired devices is broken, an alarm and a message notification are triggered to warn the user that the device was left behind. A web application is also developed to help the user track, locate, and lock the missing device.
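
The core alert logic reduces to watching the link state and raising an alarm on disconnect; a platform-neutral sketch of that loop (the real application would query Android's Bluetooth APIs, which are mocked out here):

```python
import time

def trigger_alarm():
    # In the application this would fire the alarm sound and notification.
    print("ALARM: paired device is out of Bluetooth range!")

def monitor_pairing(link_alive, poll_seconds=0.1):
    """Poll the paired Bluetooth link and alarm as soon as it drops."""
    while True:
        if not link_alive():
            trigger_alarm()
            return
        time.sleep(poll_seconds)

# Mock link state: connected for three polls, then out of range.
states = iter([True, True, True, False])
monitor_pairing(lambda: next(states))
```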

A Framework for Mining Coherent Patterns Using Particle Swarm Optimization based Biclustering

By Suvendu Kanungo, Somya Jaiswal

DOI: https://doi.org/10.5815/ijisa.2015.11.05, Pub. Date: 8 Oct. 2015

High-throughput microarray technologies have enabled the development of robust biclustering algorithms capable of discovering relevant local patterns in gene expression datasets, wherein a subset of genes shows coherent expression patterns under a subset of experimental conditions. In this work, we propose an algorithm that combines a biclustering technique with the Particle Swarm Optimization (PSO) structure in order to extract significant, biologically relevant patterns from such datasets. The algorithm comprises two phases: a seed finding phase and a seed growing phase. In the seed finding phase, gene clustering and condition clustering are performed separately on the gene expression data matrix, and the results of both clusterings are combined to form small, tightly bound submatrices; those submatrices whose Mean Squared Residue (MSR) is below a defined threshold are used as seeds. In the seed growing phase, genes and conditions are added to these seeds to enlarge them using the PSO structure. On the yeast Saccharomyces cerevisiae cell cycle expression dataset, our technique obtains significant biclusters of larger volume and lower MSR than other biclustering algorithms.
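
The MSR criterion used by both phases is the standard one: each entry's residue subtracts its row mean and column mean and adds back the overall mean of the submatrix. A short sketch with a toy matrix (illustrative, not the paper's code):

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """MSR of the bicluster (rows, cols) of expression matrix A; lower
    values mean more coherent expression across the submatrix."""
    sub = A[np.ix_(rows, cols)]
    residue = (sub - sub.mean(axis=1, keepdims=True)
                   - sub.mean(axis=0, keepdims=True) + sub.mean())
    return float((residue ** 2).mean())

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],
              [5.0, 9.0, 1.0]])
print(mean_squared_residue(A, [0, 1], [0, 1, 2]))  # 0.0: perfectly coherent
```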

An Efficient Algorithm for Mining Weighted Frequent Itemsets Using Adaptive Weights

By Hung Long Nguyen

DOI: https://doi.org/10.5815/ijisa.2015.11.06, Pub. Date: 8 Oct. 2015

Weighted frequent itemset mining is more practical than traditional frequent itemset mining because it can account for the differing semantic significance (weight) of items, and many models and algorithms for it have been proposed. These models assume that each item has a fixed weight, but in real-world scenarios the weight (price or significance) of an item may vary with time. Reflecting these changes in item weight is therefore necessary in several mining applications, such as retail market data analysis and web click stream analysis. Recently, Chowdhury F. A. et al. introduced the novel concept of an adaptive weight for each item and proposed the AWFPM algorithm (Adaptive Weighted Frequent Pattern Mining), which can handle situations where the weight of an item varies with time. In this paper, we present an improved algorithm named AWFIMiner. Experimental results show that AWFIMiner is more efficient and scalable for mining weighted frequent itemsets with adaptive weights. Moreover, because it requires only a single database scan, AWFIMiner is applicable to mining these itemsets over data streams.
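
To make the adaptive-weight notion concrete, a small sketch of weighted support where item weights differ per time window (an illustrative scoring scheme, not the AWFIMiner algorithm itself):

```python
# Adaptive item weights per time window (illustrative values).
weights = {0: {"a": 0.9, "b": 0.4, "c": 0.7},
           1: {"a": 0.5, "b": 0.8, "c": 0.7}}
# Transactions grouped by the window in which they arrived.
batches = {0: [{"a", "b"}, {"a", "c"}],
           1: [{"a", "b", "c"}, {"b", "c"}]}

def weighted_support(itemset):
    """Each supporting transaction contributes the mean weight of the
    itemset's items in that transaction's window, so support tracks
    the weights as they change over time."""
    total = 0.0
    for t, txns in batches.items():
        w = sum(weights[t][i] for i in itemset) / len(itemset)
        total += w * sum(1 for txn in txns if itemset <= txn)
    return total / sum(len(txns) for txns in batches.values())

print(weighted_support({"a", "b"}))  # 0.325 on this toy data
```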

Differential Evolution Algorithm with Space Partitioning for Large-Scale Optimization Problems

By Ahmed Fouad Ali, Nashwa Nageh Ahmed

DOI: https://doi.org/10.5815/ijisa.2015.11.07, Pub. Date: 8 Oct. 2015

The differential evolution algorithm (DE) is one of the most widely applied meta-heuristics for solving global optimization problems. However, contributions applying DE to large-scale global optimization problems are still limited compared with those for low- and middle-dimensional problems: DE suffers from slow convergence and stagnation, specifically when applied to global optimization problems of high dimension. In this paper, we propose a new differential evolution algorithm, called differential evolution with space partitioning (DESP), to solve large-scale optimization problems. In DESP, the search variables are divided into small groups of partitions; each partition contains a certain number of variables and is manipulated as a subspace in the search process. Selecting different subspaces in consecutive iterations maintains search diversity, and searching a limited number of variables in each partition prevents DESP from wandering in the search space, especially in large-scale spaces. The proposed algorithm is tested on 15 large-scale benchmark functions and the obtained results are compared against those of three DE variants. The results show that the proposed algorithm is promising and can obtain optimal or near-optimal solutions in a reasonable time.
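
A sketch of the partition idea: one DE/rand/1/bin generation whose mutation and crossover touch only the variables of the active partition, leaving all other coordinates unchanged (an illustrative reconstruction, not the paper's exact scheme or parameter settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_step_on_partition(pop, fitness, part, F=0.5, CR=0.9):
    """One DE generation restricted to the subspace `part`."""
    n, _ = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        trial = pop[i].copy()
        for j in part:                            # only the active subspace
            if rng.random() < CR:
                trial[j] = pop[r1, j] + F * (pop[r2, j] - pop[r3, j])
        if fitness(trial) <= fitness(pop[i]):     # greedy selection
            new_pop[i] = trial
    return new_pop

sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, size=(10, 100))          # a 100-dimensional problem
for part in np.array_split(np.arange(100), 10):   # rotate ten 10-variable groups
    pop = de_step_on_partition(pop, sphere, part)
print(min(sphere(x) for x in pop))
```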

Finding Representative Test Case for Test Case Reduction in Regression Testing

By Sudhir Kumar Mohapatra, Srinivas Prasad

DOI: https://doi.org/10.5815/ijisa.2015.11.08, Pub. Date: 8 Oct. 2015

Software testing is one of the important stages of software development, and developers depend on it to reveal bugs. In the maintenance stage, the test suite grows as new techniques are integrated, since each addition forces the creation of new test cases. In regression testing, new test cases may be added to the suite throughout the testing process, and these additions create the possibility of redundant test cases. Given limited time and resources, reduction techniques should be used to identify and remove them. Research shows that a subset of the test cases in a suite, called a representative set, may still satisfy all the test objectives. Redundant test cases increase the execution cost of the test suite, and although the underlying problem is NP-complete, a few good reduction techniques are available. In this paper, a new approach to test case reduction is proposed: a genetic algorithm is applied iteratively, with varying chromosome length, to reduce the test cases in a suite by finding a representative set that fulfills the testing criteria.
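
A compact sketch of the representative-set idea with a genetic algorithm (a fixed-length bitmask encoding over a toy coverage matrix, rather than the paper's varying-chromosome-length scheme):

```python
import random

random.seed(7)

# Toy coverage matrix: test case index -> requirements it exercises.
coverage = {0: {1, 2}, 1: {2, 3}, 2: {1, 4}, 3: {3, 4}, 4: {1, 2, 3}}
all_reqs = set().union(*coverage.values())
N = len(coverage)

def fitness(chrom):
    """Lower is better: uncovered requirements dominate, then suite size,
    so the optimum is a small subset still meeting every objective."""
    covered = set().union(*(coverage[i] for i, bit in enumerate(chrom) if bit))
    return 100 * len(all_reqs - covered) + sum(chrom)

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)            # one-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]    # bit-flip mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])     # a representative subset
```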
