International Journal of Information Technology and Computer Science (IJITCS)

IJITCS Vol. 9, No. 7, Jul. 2017

Table of Contents

REGULAR PAPERS

Meta-Population Modelling and Simulation of the Dynamic of Malaria Transmission with Influence of Climatic Factors

By Justin-Herve NOUBISSI, Jean Claude Kamgang, Eric Ramat, Januarius Asongu, Christophe Cambier

DOI: https://doi.org/10.5815/ijitcs.2017.07.01, Pub. Date: 8 Jul. 2017

We model the dynamics of malaria transmission taking into account climatic factors and migration between Douala and Yaoundé and between Yaoundé and Ngaoundéré, three cities in Cameroon. We show how variations of climatic factors such as temperature and relative humidity affect the spread of malaria. We propose a meta-population model of malaria transmission dynamics that evolves in space and time and that takes into account temperature, relative humidity and the migration between Douala and Yaoundé and between Yaoundé and Ngaoundéré. Moreover, we integrate variations of environmental factors as events, also called mathematical impulses, that can disrupt the model's evolution at any time. Our modelling uses the Discrete Event System Specification (DEVS) formalism, and our implementation runs on the Virtual Laboratory Environment (VLE), which uses the DEVS formalism and abstract simulators to couple models.
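The abstract does not spell out the model's equations, so the following is only a toy sketch of the meta-population idea: SIR-like dynamics are an assumption, the temperature-dependent transmission rate is a placeholder, and the migration matrix coupling the Douala-Yaoundé and Yaoundé-Ngaoundéré corridors is invented.

```python
import numpy as np

# Toy sketch only: the paper's actual compartments and parameters are not
# given in the abstract; everything numeric below is a placeholder.
cities = ["Douala", "Yaounde", "Ngaoundere"]
S = np.array([0.99, 0.99, 0.99])  # susceptible fraction per city
I = np.array([0.01, 0.01, 0.01])  # infected fraction per city
R = np.zeros(3)                   # recovered fraction per city

# M[i, j]: fraction of city i's population moving to city j each step;
# only the Douala<->Yaounde and Yaounde<->Ngaoundere corridors are coupled.
M = np.array([[0.98, 0.02, 0.00],
              [0.02, 0.96, 0.02],
              [0.00, 0.02, 0.98]])

gamma = 0.05  # recovery rate per step (placeholder)

def beta(temp_c):
    """Placeholder transmission rate peaking near a mosquito-friendly 27 C."""
    return max(0.0, 0.3 - 0.01 * (temp_c - 27.0) ** 2)

for day in range(365):
    # Seasonal temperature per city; an external "event" could overwrite
    # these values at any step, mimicking the impulses mentioned above.
    temps = np.array([27.0, 24.0, 22.0]) + 3.0 * np.sin(2 * np.pi * day / 365)
    b = np.array([beta(t) for t in temps])
    new_inf = b * S * I
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    S, I, R = M.T @ S, M.T @ I, M.T @ R  # migration redistributes people

for name, frac in zip(cities, I):
    print(f"{name}: infected fraction {frac:.3f}")
```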

Analysing Open Source Software in Terms of Its Characteristics and Establishing New Paradigms

By Harmaninder J. S. Sidhu, Sawtantar S. Khurmi

DOI: https://doi.org/10.5815/ijitcs.2017.07.02, Pub. Date: 8 Jul. 2017

The world we see and live in is driven by open source platforms. Whether we talk about Linux and Apache or Drupal and Joomla (all open source platforms), open source technology has been influential from the very beginning. If we go online, it is difficult to find a site or online application without an open source connection. This paper examines Free and Open Source Software (FOSS) in an empirical setup in terms of its major characteristics, i.e., deployability and usability. These two characteristics of FOSS are extremely important for comparing it with proprietary software. The different attributes of FOSS were identified in the literature and were made part of the present study to carry out an empirical analysis of FOSS. In addition, a number of attributes agreed upon by the participants during a pilot study were also included. This paper throws light on the experience of different kinds of users associated with FOSS. Statistical analysis using Fisher's exact test was used to determine whether the important characteristics of FOSS depend on each other.
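As an illustration of the test named above, a Fisher's exact test on an invented 2x2 contingency table (the counts are placeholders, not the paper's data) can be run with SciPy:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table relating two FOSS characteristics, e.g. whether
# respondents rated deployability and usability favourably.
#                  usability: high  usability: low
table = [[34, 6],   # deployability: high
         [9, 21]]   # deployability: low

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# A small p-value suggests the two characteristics are not independent.
```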

Comparison of Time Concept Modeling for Querying Temporal Information in OWL and RDF

By Bahareh Bahadorani, Ahmad Zaeri

DOI: https://doi.org/10.5815/ijitcs.2017.07.03, Pub. Date: 8 Jul. 2017

Ontology is an important factor in the integration of heterogeneous semantic information. Description logic, as a formal language for expressing ontologies, does not include the features necessary to create a temporal dimension in the relationships among concepts. It is therefore critical to introduce time concepts to model temporal data and relate them to the other, non-temporal data recorded in an ontology. Current query languages in the semantic web cannot answer temporal questions; thus, another important issue is having appropriate methods for answering them. In this paper, temporal modeling methods in OWL and RDF are assessed, and the temporal query languages for expressing queries in the semantic web are categorized and compared.
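As a toy illustration of one common workaround for RDF's binary-relation limit (an assumed N-ary relation pattern with an invented example.org vocabulary, not a method from the paper), a statement can be reified as a node carrying a time interval and then filtered with ordinary SPARQL via rdflib:

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")  # invented vocabulary
g = Graph()

# Reify the temporally-scoped relationship as its own node.
employment = BNode()
g.add((employment, RDF.type, EX.Employment))
g.add((employment, EX.employer, EX.ACME))
g.add((employment, EX.employee, EX.Alice))
g.add((employment, EX.start, Literal("2015-01-01", datatype=XSD.date)))
g.add((employment, EX.end, Literal("2017-06-30", datatype=XSD.date)))

# Plain SPARQL can then filter on the interval, emulating a temporal query.
results = g.query("""
    PREFIX ex:  <http://example.org/>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    SELECT ?who WHERE {
        ?e a ex:Employment ;
           ex:employee ?who ;
           ex:start ?s ; ex:end ?f .
        FILTER (?s <= "2016-05-01"^^xsd:date && ?f >= "2016-05-01"^^xsd:date)
    }
""")
for row in results:
    print(row.who)
```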

Determining the Degree of Knowledge Processing in Semantics through Probabilistic Measures

By Rashmi S, Hanumanthappa M

DOI: https://doi.org/10.5815/ijitcs.2017.07.04, Pub. Date: 8 Jul. 2017

The World Wide Web is a huge repository of information. Data mining techniques make retrieving data patterns straightforward; identifying knowledge, however, is hard, because knowledge must be meaningful. Semantics, a branch of linguistics, defines the process of supplying knowledge to a computer system. The underlying idea of semantics is to understand the language model and how it corresponds with meaning. Though semantics is a crucial ingredient of language processing, little work has been done in this area. This paper presents an ongoing semantic research problem, investigating the theory and rule representation. A probabilistic approach is demonstrated to address semantic knowledge representation. The inherent requirement of our system is that the input language be syntactically correct. The approach identifies the meaning of a sentence at word level. The accuracy of the proposed architecture is studied in terms of recall and precision measures. The experiments conducted show that the probabilistic model for semantics is able to associate the language model at a preliminary level.
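The model itself is not detailed in the abstract, but the recall and precision measures it reports can be illustrated with a toy word-level example (all words, senses and predictions below are invented):

```python
# Gold word-level meanings vs. a system's predicted meanings (invented).
gold = {"bank": "river_side", "bat": "animal", "spring": "season"}
pred = {"bank": "river_side", "bat": "club", "spring": "season", "bass": "fish"}

true_pos = sum(1 for w, sense in pred.items() if gold.get(w) == sense)
precision = true_pos / len(pred)  # correct / all predicted
recall = true_pos / len(gold)     # correct / all gold senses

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.50, 0.67
```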

A Systematic Literature Review on SMS Spam Detection Techniques

By Lutfun Nahar Lota, B M Mainul Hossain

DOI: https://doi.org/10.5815/ijitcs.2017.07.05, Pub. Date: 8 Jul. 2017

Spam SMSes are unsolicited messages that disturb users and are sometimes harmful. Many survey papers are available on email spam detection techniques, but SMS spam detection is a comparatively new area and systematic literature reviews of it are scarce. In this paper, we perform a systematic literature review of SMS spam detection techniques, considering research published from 2006 to 2016. We chose 17 papers for our study and reviewed the techniques, approaches and algorithms they used, their advantages and disadvantages, their evaluation measures and datasets, and finally compared their results. Although SMS spam detection is more challenging than email spam detection because of regional content and the use of abbreviated words, unfortunately none of the existing research addresses these challenges. There is huge scope for future research in this area, and this survey can act as a reference point for its direction.
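As a point of reference for the kind of technique such surveys cover, here is a minimal bag-of-words Naive Bayes spam classifier; the tiny training set is invented, and this is not a method from any specific reviewed paper:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy corpus; real studies use datasets of thousands of SMSes.
texts = ["WIN a free prize now!!!", "call now to claim cash",
         "are we meeting at 5?", "dinner at mum's tonight"]
labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()                 # bag-of-words features
X = vec.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(["free cash prize, call now"])))  # ['spam']
```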

A Clustering-based Offline Signature Verification System for Managing Lecture Attendance

By Laruba Adama, Hamza O. Salami

DOI: https://doi.org/10.5815/ijitcs.2017.07.06, Pub. Date: 8 Jul. 2017

Attendance management in the classroom is important because in many educational institutions, attending a sufficient number of classes is a requirement for earning a regular grade in a course. Automatic signature verification is an active research area from both scientific and commercial points of view, as signatures are the most legally and socially acceptable means of identifying and authorizing an individual. Different approaches have been developed to verify signatures accurately. This paper proposes a novel automatic lecture attendance verification system based on unsupervised learning. Here, lecture attendance verification is addressed as an offline signature verification problem, since signatures are recorded offline on lecture attendance sheets. The system involves three major phases: preprocessing, feature extraction and verification. In the feature extraction phase, a novel set of features based on the distribution of black pixels along the columns of signature images is also proposed. A mean square error of 0.96 was achieved when the system was used to predict the number of times students attended lectures for a given course.
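A rough sketch of the kind of feature described above, under stated assumptions: binarized images with black encoded as 0, an arbitrary bin count, and k-means standing in for the unspecified clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def column_profile(binary_img, n_bins=16):
    """Fraction of black pixels per column, pooled into fixed-size bins."""
    col_counts = (binary_img == 0).sum(axis=0)  # black = 0 (assumption)
    profile = col_counts / binary_img.shape[0]
    bins = np.array_split(profile, n_bins)      # width-invariant feature
    return np.array([b.mean() for b in bins])

# Invented stand-ins for binarized signature images (1 = white, 0 = black).
rng = np.random.default_rng(0)
images = [(rng.random((60, 200)) > 0.2).astype(int) for _ in range(10)]
features = np.stack([column_profile(img) for img in images])

# Cluster signatures without labels, as in an unsupervised verification step.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)
```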

An Exploratory Analysis between the Feature Selection Algorithms IGMBD and IGChiMerge

By P. Kalpana, K. Mani

DOI: https://doi.org/10.5815/ijitcs.2017.07.07, Pub. Date: 8 Jul. 2017

Most data mining and machine learning algorithms work better with discrete data than with continuous data, but real data are not always discrete, so it is necessary to discretize continuous features. Several discretization methods are available in the literature. This paper compares two of them, Median Based Discretization and ChiMerge discretization. The discretized values obtained with each method are used to compute feature relevance via Information Gain. Using this relevance, both methods rank the original features, and the top-ranked attributes are selected as the more relevant ones. The selected attributes are then fed into a Naive Bayesian Classifier to determine predictive accuracy. The experimental results clearly show that the Naive Bayesian Classifier performs significantly better on the features selected by Information Gain with Median Based Discretization than on those selected by Information Gain with ChiMerge discretization.
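A minimal sketch of the two ingredients named above, under assumed definitions: binarizing a continuous feature at its median, then computing the information gain of the discretized feature with respect to a class label (the toy data are invented):

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain_median(feature, labels):
    """Split a continuous feature at its median and measure information gain."""
    discrete = feature > np.median(feature)  # two bins: low / high
    gain = entropy(labels)
    for value in (False, True):
        subset = labels[discrete == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Invented toy data: one continuous feature, binary class labels.
x = np.array([1.2, 3.4, 0.5, 4.8, 2.9, 5.1, 0.9, 4.2])
y = np.array([0, 1, 0, 1, 1, 1, 0, 1])
print(f"information gain = {info_gain_median(x, y):.3f}")
```

Features would then be ranked by this gain, and the top-ranked ones passed to the classifier.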

An Efficient String Matching Technique for Desktop Search to Detect Duplicate Files

By S. Vijayarani, M. Muthulakshmi

DOI: https://doi.org/10.5815/ijitcs.2017.07.08, Pub. Date: 8 Jul. 2017

Information retrieval identifies the documents in a collection that match a user's query; it also refers to the automatic retrieval of documents from a large document corpus. The most important application of an information retrieval system is a search engine such as Google, which identifies the documents on the World Wide Web that are relevant to user queries. Users often download files that have already been downloaded and stored on their computer, leaving multiple copies in different drives and folders; these copies reduce the performance of the system and occupy a lot of memory space. Analyzing the contents of files and finding their similarity is one of the major problems in text mining and information retrieval. The main objective of this research work is to analyze file contents and delete the duplicate files in the system. To perform this task, this work proposes a new tool named Duplicate File Detector Tool (DFDT). DFDT helps the user search for and delete duplicate files in minimal time, and it can delete duplicates not only within the same file category but also across different file categories. The existing Boyer-Moore-Horspool and Knuth-Morris-Pratt string searching algorithms are used to compare file contents for similarity, and this work also proposes a new string matching algorithm named W2COM (Word to Word COMparison). The experimental results show that the newly proposed W2COM string matching algorithm performs better than the Boyer-Moore-Horspool and Knuth-Morris-Pratt algorithms.
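The abstract gives no details of W2COM, but the Boyer-Moore-Horspool algorithm it benchmarks against is standard; a textbook-style sketch:

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: return index of first match, or -1 if none."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m > n else 0
    # Shift table: distance from each character's last occurrence (excluding
    # the final position) to the end of the pattern; default shift is m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        # Jump by the shift of the text character under the pattern's end.
        pos += shift.get(text[pos + m - 1], m)
    return -1

print(horspool_search("find duplicate files on the desktop", "duplicate"))  # 5
```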
