International Journal of Information Technology and Computer Science (IJITCS)

IJITCS Vol. 11, No. 7, Jul. 2019


Table of Contents

REGULAR PAPERS

A Robust Functional Minimization Technique to Protect Image Details from Disturbances

By Md. Robiul Islam, Chen Xu, Yu Han, Sanjida Sultana Putul, Rana Aamir Raza

DOI: https://doi.org/10.5815/ijitcs.2019.07.01, Pub. Date: 8 Jul. 2019

Image capture with faulty systems or under environmental disturbances degrades image quality and distorts the true details of the original imaging signal. A robust method for image enhancement and edge preservation is therefore an essential requirement for smooth imaging operations. Although many techniques have been deployed in this area over the decades, the key challenge remains a better trade-off between enhancement and detail protection. This study inspects the existing limitations and proposes a robust technique, based on a functional minimization scheme in a variational framework, that delivers better performance in image enhancement and detail preservation simultaneously. An efficient way to solve the minimization problem is also developed, demonstrating the advantage of the proposed technique over several traditional techniques.
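The abstract does not reproduce the functional itself, but the classic Rudin-Osher-Fatemi (ROF) total-variation functional is the textbook instance of edge-preserving functional minimization in a variational framework. The sketch below shows that classic scheme, not the authors' method; the fidelity weight lam, step size tau, and smoothing constant eps are illustrative choices.

```python
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.125, n_iter=200):
    """Gradient descent on the ROF functional
    E(u) = sum |grad u| + (lam / 2) * sum (u - f)**2."""
    u = f.astype(float).copy()
    eps = 1e-8                                    # avoids division by zero
    for _ in range(n_iter):
        # forward differences approximate the image gradient
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag               # normalized gradient field
        # backward-difference divergence gives the curvature term
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # curvature smooths flat regions; the fidelity term keeps u near f
        u += tau * (div - lam * (u - f))
    return u
```

The curvature term flattens noise in smooth regions while the normalized gradient shrinks the update across strong edges, which is how this family of minimizations preserves details.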

A Failure Detector for Crash Recovery Systems in Cloud

By Bharati Sinha, Awadhesh Kumar Singh, Poonam Saini

DOI: https://doi.org/10.5815/ijitcs.2019.07.02, Pub. Date: 8 Jul. 2019

Cloud computing has brought remarkable scalability and elasticity to the distributed computing paradigm. It provides implicit fault tolerance through virtual machine (VM) migration. However, VM migration requires heavy replication and incurs storage overhead as well as loss of computation. In early cloud infrastructure these factors were negligible due to light load conditions; nowadays, owing to the exploding task population, they cause considerable performance degradation in the cloud. Fault detection and recovery is therefore gaining attention in the cloud research community. Failure Detectors (FDs) are modules employed at the nodes to perform fault detection. This paper proposes a failure detector that handles crash-recoverable nodes, with system recovery performed from a designated checkpoint in the event of failure. We use the Machine Repairman model to estimate recovery latency. Simulation experiments have been carried out using CloudSim Plus.
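The abstract invokes the Machine Repairman model without giving its parameterization; a minimal sketch of the standard single-repairman (M/M/1//N) steady state, assuming a per-node failure rate lam and a repair rate mu, is:

```python
from math import factorial

def repairman_metrics(N, lam, mu):
    """Steady state of the M/M/1//N Machine Repairman model: N
    crash-recoverable nodes, per-node failure rate lam, repair rate mu."""
    rho = lam / mu
    # unnormalized probability of n failed nodes: N!/(N-n)! * rho^n
    w = [factorial(N) // factorial(N - n) * rho**n for n in range(N + 1)]
    Z = sum(w)
    p = [wn / Z for wn in w]
    L = sum(n * pn for n, pn in enumerate(p))  # mean number of failed nodes
    W = L / (lam * (N - L))                    # mean downtime per failure (Little's law)
    return L, W

print(repairman_metrics(N=20, lam=0.01, mu=1.0))
```

Little's law converts the mean number of crashed nodes into the mean recovery latency a crashed node experiences, which is the metric the paper estimates.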

A Hybrid Technique for Cleaning Missing and Misspelling Arabic Data in Data Warehouse

By Mohammed Abdullah Al-Hagery, Latifah Abdullah Alreshoodi, Maram Abdullah Almutairi, Suha Ibrahim Alsharekh, Emtenan Saad Alkhowaiter

DOI: https://doi.org/10.5815/ijitcs.2019.07.03, Pub. Date: 8 Jul. 2019

Real-world datasets accumulated over a number of years tend to be incomplete, inconsistent, and noisy, which in turn causes inconsistency in data warehouses. Data owners hold hundreds of millions to billions of records written in different languages, continuously increasing the need for comprehensive, efficient techniques to maintain data consistency and raise data quality. Data cleaning is a complex and difficult task, especially for data written in Arabic, a complex language in which many kinds of unclean data can occur: missing values, dummy values, redundant and inconsistent values, misspellings, and noise. The ultimate goal of this paper is to improve data quality by cleaning Arabic datasets of these error types, producing data for better analysis and highly accurate results. This, in turn, leads to discovering correct patterns of knowledge and accurate decision-making. The approach is established on the merging of different algorithms, ensuring that reliable methods are used for data cleansing. It cleans Arabic datasets through multi-level cleaning based on the Arabic Misspelling Detection and Correction Model (AMDCM) and Decision Tree Induction (DTI), and it can solve the problems of Arabic misspellings, cryptic values, dummy values, and the unification of naming styles. A sample of the data before and after cleaning is presented.
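AMDCM is specific to the paper, but the Decision Tree Induction level can be illustrated on its own. A minimal sketch, assuming a pandas DataFrame and a hypothetical set of dummy-value markers (DUMMY), imputes the missing or dummy entries of one column from the rows where that column is clean:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dummy-value markers (the last one is Arabic for "unknown").
DUMMY = {"", "N/A", "???", "غير معروف"}

def impute_column(df, target, features):
    """DTI step of a multi-level cleaner: fill missing/dummy values in
    the target column using a tree trained on the clean rows."""
    col = df[target].where(~df[target].isin(DUMMY))   # dummy markers -> NaN
    known = col.notna()
    X = pd.get_dummies(df[features])                  # one-hot encode predictors
    if (~known).any():
        tree = DecisionTreeClassifier(max_depth=5)
        tree.fit(X[known], col[known])
        df.loc[~known, target] = tree.predict(X[~known])
    return df
```

The same pass can run once per unreliable column, which is the sense in which the cleaning is multi-level.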

Automating Text Simplification Using Pictographs for People with Language Deficits

By Mai Farag Imam, Amal Elsayed Aboutabl, Ensaf H. Mohamed

DOI: https://doi.org/10.5815/ijitcs.2019.07.04, Pub. Date: 8 Jul. 2019

Automating text simplification is a challenging research area due to the compound structures present in natural languages. The social involvement of people with language deficits can be enhanced by giving them means to communicate with the outside world, for instance by using the internet independently. Replacing text with pictographs is one such means. This paper presents a system that performs text simplification by translating text into pictographs. The proposed system consists of a set of phases. First, a simple summarization technique reduces the number of sentences before they are converted to pictures. Then, text preprocessing is performed, including tokenization and lemmatization. The resulting text passes through a spelling checker followed by a word sense disambiguation (WSD) algorithm that selects the word senses best suited to the context; using WSD clearly improves the results, and the system performs best when a support vector machine (SVM) is used for WSD. Finally, the text is translated into a list of images. For testing and evaluation, a test corpus of 37 Basic English sentences was manually constructed. Experiments were conducted by presenting the generated images to ten typically developing children, who were asked to reproduce the input sentences from the pictographs. The reproduced sentences are evaluated using precision, recall, and F-score. Results show that the proposed system enhances pictograph understanding and converts text to pictographs with precision, recall, and F-score above 90% when SVM is used for WSD. Moreover, these techniques have not previously been combined, which raises the accuracy of the system above that of earlier studies.
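The abstract does not list the WSD features or training corpus; the fragment below sketches the general shape of SVM-based word sense disambiguation with scikit-learn, using a hypothetical sense-tagged toy corpus for the ambiguous word "bank":

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical sense-tagged contexts for one ambiguous word.
contexts = ["deposit money in the bank", "the river bank was muddy",
            "the bank approved the loan", "fish near the bank of the stream"]
senses = ["finance", "river", "finance", "river"]

# Bag-of-words context features fed to a linear SVM, one classifier
# per ambiguous lemma; the predicted sense then selects the pictograph.
wsd = make_pipeline(TfidfVectorizer(), LinearSVC())
wsd.fit(contexts, senses)
print(wsd.predict(["she opened an account at the bank"]))  # -> ['finance']
```

Disambiguating before image lookup is what keeps a sentence like the one above from being rendered with a riverbank pictograph.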

Evaluating and Comparing Size, Complexity and Coupling Metrics as Web Applications Vulnerabilities Predictors

By Mohammed Zagane, Mustapha Kamel Abdi

DOI: https://doi.org/10.5815/ijitcs.2019.07.05, Pub. Date: 8 Jul. 2019

Most security and privacy issues in software stem from exploitable code vulnerabilities. Many studies have sought correlations between software characteristics (complexity, coupling, etc.), quantified by the corresponding code metrics, and vulnerabilities, and have proposed automatic prediction models that help developers locate vulnerable components and minimize maintenance costs. The results of these studies cannot be applied directly to web applications, because a web application differs from a non-web application in many ways (development, use, etc.), so the conclusions require substantial re-evaluation. The purpose of this study is to evaluate and compare the vulnerability prediction power of three types of code metrics in web applications. A few similar studies have targeted non-web applications, but to the best of our knowledge none has targeted web applications. The results show that, unlike non-web applications where complexity metrics have better vulnerability prediction power, in web applications the coupling metrics give better predictions, with high recall (>75%) and lower inspection cost (<25%).
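The abstract does not name the prediction model; as a sketch of the evaluation setup only, the fragment below trains an assumed random-forest classifier on synthetic coupling metrics and reports the two figures the study emphasizes: recall (vulnerable components caught) and inspection cost (the share of components flagged for review).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: one row per component, columns are coupling
# metrics (e.g. fan-in, fan-out); y = 1 if a vulnerability was reported.
rng = np.random.default_rng(0)
X = rng.poisson(3, size=(500, 2)).astype(float)
y = (X.sum(axis=1) + rng.normal(0, 2, 500) > 8).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

print(f"recall={recall_score(y_te, pred):.2f}, "
      f"inspection cost={pred.mean():.2%}")  # fraction flagged for review
```

Reading the two numbers together captures the paper's trade-off: a predictor is useful when it catches most vulnerable components while flagging only a small fraction of the codebase for inspection.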

A Stochastic Model for Simple Document Processing

By Pierre Moukeli Mbindzoukou, Arsene Roland Moukoukou, David Naccache, Nino Tskhovrebashvili

DOI: https://doi.org/10.5815/ijitcs.2019.07.06, Pub. Date: 8 Jul. 2019

This work focuses on the stationary behavior of a simple-document processing system. By simple document we mean any document whose processing, at each stage of its progression through its processing graph, is handled by a single person. Our simple-document processing system derives from the general model described by MOUKELI and NEMBE: it adapts that general model to determine its behavior, in terms of metrics and performance, in the particular case of simple-document processing. By way of illustration, data from a station of a central administration of a ministry, observed over six (6) years, are presented. The need to study this specific case arises because simple-document processing relies on a hierarchical organization and on priority queues. As in the general model of MOUKELI and NEMBE, our model has a static component and a dynamic component. The static component is a tree representing the hierarchical organization of the processing stations. The dynamic component consists of a Markov process and a network of priority queues that models the waiting lines at each processing unit. Key performance indicators are defined and studied pointwise and on average, and issues specific to the hierarchical model with priority queues, chiefly infinite loops, are analyzed and solutions proposed.
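The waiting lines of the dynamic component can be pictured with a minimal priority-queue sketch (an illustration, not the authors' model): documents at one station are served by priority, with arrival order breaking ties so equal-priority documents stay FIFO, one simple guard against the starvation and looping issues the model analyzes.

```python
import heapq

def process_station(jobs):
    """Drain one station's queue; jobs are (priority, name) pairs and a
    lower priority value is served first."""
    heap = []
    for order, (priority, name) in enumerate(jobs):
        heapq.heappush(heap, (priority, order, name))  # order breaks ties
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(process_station([(2, "report"), (1, "urgent memo"), (2, "invoice")]))
# -> ['urgent memo', 'report', 'invoice']
```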
