International Journal of Information Technology and Computer Science (IJITCS)

IJITCS Vol. 7, No. 4, Mar. 2015

Cover page and Table of Contents: PDF (size: 241KB)

Table of Contents

REGULAR PAPERS

AQUAZONE: A Spatial Decision Support System for Aquatic Zone Management

By Sekhri A. Arezki Hamdadou B. Djamila Beldjilali C. Bouziane

DOI: https://doi.org/10.5815/ijitcs.2015.04.01, Pub. Date: 8 Mar. 2015

During recent years, the Sebkha Lake of Oran (Algeria) has been the subject of many studies aimed at its protection and recovery. Many environmental and wetland experts place their hopes on integrating this ecologically rich and fragile space, as a pilot project, into "management of water tides". Safeguarding the great Sebkha Lake of Oran is a major concern for governments seeking to make it a protected and viable natural area. The challenge is to put in place a management policy that responds to the requirements of economic, agricultural and urban development while preserving this natural site through the management of its water and the preservation of its quality.
The objective of this study is to design and develop a Spatial Decision Support System, namely AQUAZONE, able to assist decision makers in various natural resource management projects. The proposed system integrates remote sensing image processing methods, from display operations to the analysis of results, in order to extract knowledge useful for better natural resource management, and in particular to delineate the extension of the Sebkha Lake of Oran (Algeria).
Two methods were applied to classify LANDSAT 5 TM images of Oran (Algeria): Fuzzy C-Means (FCM), applied to multispectral images, and a manually assisted method, the Ordered Queue-based Watershed (OQW). FCM serves as an initialization phase to automatically discover the different classes (urban, forest, water, etc.) in LANDSAT 5 TM images, to reduce grayscale ambiguity, and to establish a land cover map of the region.
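
As an illustration of the FCM initialization step (the class count, fuzzifier and stopping rule below are assumptions, not the paper's settings), a minimal Fuzzy C-Means over multispectral pixels might look like this:

```python
# Minimal Fuzzy C-Means sketch for multispectral pixel classification.
# Class count, fuzzifier m and iteration budget are illustrative only.
import numpy as np

def fcm(pixels, n_classes=4, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """pixels: (N, B) float array -> (memberships (N, C), centers (C, B))."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.shape[0], n_classes))
    u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1.0))   # standard FCM update
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers

# Usage sketch: `image` is an (H, W, bands) LANDSAT-like array loaded elsewhere.
# u, centers = fcm(image.reshape(-1, image.shape[2]).astype(float))
# labels = u.argmax(axis=1).reshape(image.shape[:2])   # hard land-cover map
```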

Implementation of Computer Vision Based Industrial Fire Safety Automation by Using Neuro-Fuzzy Algorithms

By Manjunatha K.C. Mohana H.S P.A Vijaya

DOI: https://doi.org/10.5815/ijitcs.2015.04.02, Pub. Date: 8 Mar. 2015

A computer vision-based automated fire detection and suppression system for manufacturing industries is presented in this paper. An automated fire suppression system plays a very significant role in an Onsite Emergency System (OES), as it can prevent accidents and losses to the industry. A rule-based generic collective model for fire pixel classification is proposed for a single camera with multiple fire-suppression chemical control valves. A Neuro-Fuzzy algorithm is used to identify the exact location of fire pixels in the image frame. Fuzzy logic is then used to identify the valve to be controlled, based on the area of the fire and the intensity values of the fire pixels. The fuzzy output is fed to a supervisory control and data acquisition (SCADA) system to generate suitable analog values for control valve operation based on the fire characteristics. Results for both the fire identification and suppression systems are presented. The proposed method achieves up to 99% accuracy in fire detection and automated suppression.
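
The paper's Neuro-Fuzzy classifier is not reproduced here; as a hedged illustration, a common rule-based baseline flags fire pixels from RGB heuristics, and its area/intensity outputs are the kind of inputs the fuzzy valve-selection stage would reason over. The thresholds are assumptions.

```python
# Illustrative rule-based fire-pixel test: fire regions in RGB frames tend
# to satisfy R > G > B with a high red channel. Thresholds are assumed.
import numpy as np

def fire_pixel_mask(frame_rgb, r_thresh=180):
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

def fire_features(frame_rgb):
    """Fire area and mean red intensity: the sort of features a fuzzy
    valve-selection stage could map to control-valve analog values."""
    mask = fire_pixel_mask(frame_rgb)
    area = int(mask.sum())
    intensity = float(frame_rgb[..., 0][mask].mean()) if area else 0.0
    return area, intensity
```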

A Framework for Effective Object-Oriented Software Change Impact Analysis

By Bassey Isong Obeten Ekabua

DOI: https://doi.org/10.5815/ijitcs.2015.04.03, Pub. Date: 8 Mar. 2015

Object-oriented (OO) software has complex dependencies and different change types, which frequently complicate its maintenance in terms of ripple-effect identification and may introduce faults that are hard to detect. As change is both important and risky, change impact analysis (CIA) is a technique used to preserve the quality of the software system. Several CIA techniques exist, but they provide little or no clear information on OO software representation for effective change impact prediction. Additionally, OO classes are not fault- or failure-free, and their fault-proneness is not considered during CIA. There is no known CIA approach that incorporates both change impact and fault prediction. Consequently, making changes to software components while neglecting their dependencies and fault-proneness may have unexpected effects on their quality or may increase their failure risks. Therefore, this paper proposes a novel framework for OO software CIA that allows for both impact and fault prediction. Moreover, an intermediate OO program representation is proposed that explicitly represents the software and allows its structural complexity to be quantified using complex networks. The objective is to enhance static CIA and facilitate program comprehension. To assess its effectiveness, a controlled experiment was conducted on student projects with respect to maintenance duration and correctness. The results obtained were promising, indicating the framework's usefulness for impact analysis.
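
As an assumed stand-in for the paper's intermediate representation, the sketch below models classes as a dependency graph and computes a change's impact set as everything reachable along reverse dependency edges; the class names and edges are illustrative.

```python
# Static impact prediction over a class-dependency graph: the impact set
# of a change is the reverse-dependency closure of the changed class.
from collections import deque

# deps[X] = classes that X depends on (illustrative edges)
deps = {
    "Order":    {"Customer", "Product"},
    "Invoice":  {"Order"},
    "Report":   {"Invoice", "Customer"},
    "Customer": set(),
    "Product":  set(),
}

# invert the graph: rdeps[X] = classes that depend on X
rdeps = {c: set() for c in deps}
for c, targets in deps.items():
    for t in targets:
        rdeps[t].add(c)

def impact_set(changed):
    """Classes potentially affected (ripple effect) by changing `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        for dependent in rdeps[queue.popleft()] - seen:
            seen.add(dependent)
            queue.append(dependent)
    return seen

print(impact_set("Customer"))   # Order, Invoice and Report are impacted
```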

Identity Management: Lightweight SAML for Less Processing Power

By Mohammed Ali Tarek S. Sobh Salwa El-Gamal

DOI: https://doi.org/10.5815/ijitcs.2015.04.04, Pub. Date: 8 Mar. 2015

Identity management has emerged as an important issue for reducing complexity and improving the user experience when accessing services. In addition, authentication services have recently added SAML to the range of authentication options available to cloud subscribers. This work focuses mainly on the representation of SAML in existing identity management frameworks and its suitability.
We introduce a new representation of SAML that makes it lightweight and easier to parse and process. This representation is demonstrated using JSON.
With our new SAML representation, we improved the performance of marshalling SAML by 28.99%.
In this paper, we address these challenges by introducing a new representation for the identity and access management markup language. Our proposed representation is designed to suit devices with little processing power, enabling faster generation, parsing and communication.
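
As a sketch of the idea (the field names below are assumptions, not the paper's actual schema), a SAML-style assertion can be expressed as a compact JSON object:

```python
# Illustrative only: one way a SAML-style assertion might be rendered in
# JSON; field names are assumed, not taken from the paper's schema.
import json

assertion = {
    "ID": "_a75adf55-01d7-40cc-929f-dbd8372ebdfc",
    "Issuer": "https://idp.example.org",
    "IssueInstant": "2015-03-08T09:22:05Z",
    "Subject": {"NameID": "user@example.org", "Format": "emailAddress"},
    "Conditions": {
        "NotBefore": "2015-03-08T09:17:05Z",
        "NotOnOrAfter": "2015-03-08T09:27:05Z",
    },
    "AttributeStatement": {"Role": "subscriber"},
}

# Compact JSON drops XML's verbose closing tags and namespace machinery,
# which is where a marshalling speed-up on low-power devices would come from.
compact = json.dumps(assertion, separators=(",", ":"))
print(len(compact), compact[:60])
```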

Regression Test Case Selection for Multi-Objective Optimization Using Metaheuristics

By Rahul Chaudhary Arun Prakash Agrawal

DOI: https://doi.org/10.5815/ijitcs.2015.04.05, Pub. Date: 8 Mar. 2015

This paper proposes a new heuristic algorithm for multi-objective optimization using metaheuristics and the travelling salesman problem (TSP). The basic idea behind the algorithm is to minimize the TSP tour by dividing the entire tour into overlapping blocks and then improving each block separately. The approach rests on the (unproven) assumption that a good solution is unlikely to improve when a node is moved to a position far from where it sits in the current solution. By intensively searching each block, further improvement of the TSP tour is possible in a way not supported by various other search methods or by genetic algorithms. The performance of the proposed algorithm was evaluated in computational experiments carried out on existing problem instances, and the results verify that it can solve TSPs efficiently. The algorithm is then applied to selecting optimal test cases: from a repository of several thousand test cases, a thousand or so confirmed to identify bugs are selected. A few test cases from the repository act as milestones (nodes), each with an associated weight; the proposed TSP-based algorithm is run over this selection to find the optimal path, i.e., the optimal set of test cases. These test cases are then used for regression testing, removing most of the faults effectively, that is, taking less time and identifying almost all the bugs with few test cases. Hence the proposed algorithm yields an effective solution for regression test case selection.
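
The block scheme can be illustrated with plain 2-opt restricted to overlapping windows; the window size, overlap and instance below are assumptions, and the paper's own local search may differ.

```python
# Block-wise tour improvement: run 2-opt only inside overlapping windows,
# so nodes are never moved far from their current positions.
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def two_opt_block(tour, pts, lo, hi):
    """2-opt segment reversals restricted to tour[lo:hi]."""
    improved = True
    while improved:
        improved = False
        for i in range(lo, hi - 2):
            for j in range(i + 2, hi - 1):
                a, b = pts[tour[i]], pts[tour[i + 1]]
                c, d = pts[tour[j]], pts[tour[j + 1]]
                if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True

def blockwise_improve(tour, pts, block=20, overlap=5):
    """Sweep overlapping windows over the tour, improving each in turn."""
    for lo in range(0, len(tour) - 3, block - overlap):
        two_opt_block(tour, pts, lo, min(lo + block, len(tour)))
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(100)]
tour = blockwise_improve(list(range(100)), pts)
```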

Hypervisors’ Guest Isolation Capacity Evaluation in the Private Cloud Using SIAGR Framework

By P. Vijaya Vardhan Reddy Lakshmi Rajamani

DOI: https://doi.org/10.5815/ijitcs.2015.04.06, Pub. Date: 8 Mar. 2015

Hypervisor vendors claim that they have negated virtualization overhead compared to a native system. They also state that complete guest isolation is achieved while running multiple guest operating systems (OSs) on their hypervisors. But in a virtualization environment, which is a combination of hardware, a hypervisor and virtual machines (VMs) with guest operating systems, there is bound to be an impact on each guest operating system while the other guest operating systems fully utilize their allotted system resources. It is therefore interesting to study a hypervisor's guest isolation capacity while several guest operating systems run on it. This paper selected three hypervisors, namely ESXi 4.1, XenServer 6.0 and KVM (Ubuntu 12.04 Server), for the experimentation. The three hypervisors were deliberately chosen as they represent three different categories (full virtualization, para-virtualization, and hybrid virtualization). Since the focus is on evaluating the hypervisors' guest isolation capacity, a private cloud is chosen over a public cloud, as it raises fewer security concerns. The private cloud is created using Apache CloudStack. Windows 7 is deployed as a guest VM on each hypervisor, and guest isolation capacity is evaluated for CPU and network performance.

Measurement of Usability of Office Application Using a Fuzzy Multi-Criteria Technique

By Sanjay Kumar Dubey Sumit Pandey

DOI: https://doi.org/10.5815/ijitcs.2015.04.07, Pub. Date: 8 Mar. 2015

Software quality is a very important aspect for any software development company, and its measurement is a major concern for improving software applications in the development process. Quantifying various quality factors and integrating them into software quality models is essential for analyzing the quality of a software system. Usability is one of the important quality factors nowadays, owing to the increasing demand for interactive and user-friendly software systems.
In this paper, an attempt has been made to quantify the usability of the MS-Excel 2007 and MS-Excel 2010 application software using the ISO/IEC 9126 model and to compare the numeric usability values of the two versions. Owing to the imprecise nature of the usability attributes, a fuzzy multi-criteria decision technique has been used to evaluate the usability of the office application. The present method should help analyze and enhance the quality of interactive software systems.
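
As an illustration of the fuzzy multi-criteria step, the sketch below aggregates hypothetical ratings of ISO/IEC 9126 usability sub-attributes as triangular fuzzy numbers and defuzzifies the result by the centroid; the ratings and weights are assumptions, not the paper's data.

```python
# Weighted aggregation of triangular fuzzy numbers (low, mode, high) on a
# 0-10 scale, then centroid defuzzification to a single usability score.
ratings = {                       # hypothetical expert ratings
    "understandability": (6, 7, 8),
    "learnability":      (7, 8, 9),
    "operability":       (5, 6, 8),
    "attractiveness":    (6, 8, 9),
}
weights = {"understandability": 0.3, "learnability": 0.3,
           "operability": 0.2, "attractiveness": 0.2}

# a weighted sum of triangular fuzzy numbers is taken component-wise
agg = [sum(weights[k] * ratings[k][i] for k in ratings) for i in range(3)]
usability = sum(agg) / 3.0        # centroid of a triangular fuzzy number
print(agg, round(usability, 2))   # run once per Excel version, then compare
```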

Delay Scheduling Based Replication Scheme for Hadoop Distributed File System

By S.Suresh N.P. Gopalan

DOI: https://doi.org/10.5815/ijitcs.2015.04.08, Pub. Date: 8 Mar. 2015

The data generated and processed by modern computing systems are burgeoning rapidly. MapReduce is an important programming model for large-scale data-intensive applications, and Hadoop is a popular open source implementation of MapReduce and the Google File System (GFS). The scalability and fault-tolerance features of Hadoop make it a standard for Big Data processing. Hadoop uses the Hadoop Distributed File System (HDFS) for storing data; data reliability and fault tolerance are achieved through replication in HDFS. In this paper, a new technique called the Delay Scheduling Based Replication Algorithm (DSBRA) is proposed to identify and replicate (or dereplicate) the popular (or unpopular) files/blocks in HDFS, based on information collected from the scheduler. Experimental results show that the proposed method achieves 13% and 7% improvements in response time and locality, respectively, over existing algorithms.
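
The abstract does not detail DSBRA itself; the following is a simplified sketch of the popularity-driven decision it describes, with assumed thresholds and bounds: raise the replication factor of blocks the scheduler observes as hot, and lower it for cold ones.

```python
# Popularity-driven replication decision (simplified; not the exact DSBRA).
DEFAULT, MAX_REPL, MIN_REPL = 3, 6, 2   # HDFS-style replication bounds
HOT, COLD = 100, 5                      # assumed access counts per window

def target_replication(access_count, current=DEFAULT):
    """Next replication factor for a block, given scheduler-observed
    access counts in the last window."""
    if access_count >= HOT:
        return min(current + 1, MAX_REPL)   # replicate popular blocks
    if access_count <= COLD:
        return max(current - 1, MIN_REPL)   # dereplicate unpopular blocks
    return current

print(target_replication(150), target_replication(2))  # 4 2
```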

An Approach for Test Case Prioritization Based on Three Factors

By Manika Tyagi Sona Malhotra

DOI: https://doi.org/10.5815/ijitcs.2015.04.09, Pub. Date: 8 Mar. 2015

The main aim of regression testing is to test modified software during the maintenance phase. It is an expensive activity, and it ensures that the modifications made to the software are correct. The simplest strategy for regression testing is to re-run all test cases in a test suite, but owing to limited resources and time this is impractical. It is therefore necessary to find techniques that increase the effectiveness of regression testing by arranging the test cases of a test suite according to some objective criteria. Test case prioritization aims to arrange test cases so that higher-priority test cases execute earlier than lower-priority ones according to some performance criterion. This paper presents an approach to prioritizing regression test cases based on three factors: rate of fault detection [6], percentage of faults detected, and risk detection ability. The proposed approach is compared with other prioritization techniques, namely no prioritization, reverse prioritization and random prioritization, as well as with the previous work of Kavitha et al. [6], using the APFD (average percentage of faults detected) metric. The results show that the proposed approach outperforms all of the above.
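
The APFD metric used for the comparison has a standard closed form, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where TFi is the 1-based position of the first test revealing fault i, n the number of tests, and m the number of faults. The sketch below computes it on illustrative data (it assumes every fault is detected by some test).

```python
def apfd(order, detects):
    """order: list of test ids; detects[t] = set of fault ids test t finds."""
    n = len(order)
    all_faults = set().union(*detects.values())
    m = len(all_faults)
    first_pos = {}                      # first position revealing each fault
    for pos, t in enumerate(order, start=1):
        for f in detects.get(t, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# Example (assumed data): 4 tests, 3 faults; higher APFD is better.
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
print(apfd(["t2", "t1", "t3", "t4"], detects))   # 0.7917
print(apfd(["t3", "t4", "t1", "t2"], detects))   # a worse ordering
```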

PTSLGA: A Provenance Tracking System for Linked Data Generating Application

By Kumar Sharma Ujjal Marjit Utpal Biswas

DOI: https://doi.org/10.5815/ijitcs.2015.04.10, Pub. Date: 8 Mar. 2015

Tracking the provenance of RDF resources is an important task in Linked Data generating applications: it plays a central role in gathering both data and workflow information. Various Linked Data generating applications have evolved for converting legacy data to RDF resources, with data drawn from bibliographic, geographic, government, publication and cross-domain sources. However, most of them do not support tracking data and workflow provenance for individual RDF resources. In such cases, these applications need to track, store and disseminate provenance information describing their source data and the operations involved. In this article, we introduce an approach for tracking the provenance of RDF resources. Provenance information is tracked during the conversion process and stored in the triple store; thereafter, it is disseminated using provenance URIs. The proposed framework has been analyzed using the Harvard Library Bibliographic Datasets, evaluated by converting legacy data into RDF and Linked Data with provenance. The outcome has been quite promising, in the sense that it enables data publishers to generate relevant provenance information with less time and effort.
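
The article's own provenance vocabulary is not given in the abstract; the sketch below shows, with illustrative URIs and the W3C PROV-O vocabulary, how data provenance triples might be attached to a converted RDF resource and stored alongside it in the triple store.

```python
# Attaching PROV-O style provenance to a generated RDF resource (rdflib).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")            # assumed base namespace

g = Graph()
resource = EX["book/123"]                        # converted RDF resource
source = EX["marc/raw-record-123"]               # legacy source record
activity = EX["activity/conversion-run-42"]      # conversion workflow step

g.add((resource, PROV.wasDerivedFrom, source))   # data provenance
g.add((resource, PROV.wasGeneratedBy, activity)) # workflow provenance
g.add((activity, RDF.type, PROV.Activity))
g.add((activity, PROV.endedAtTime,
       Literal("2015-03-08T10:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))   # stored, then served via provenance URIs
```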
