International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print)

ISSN: 2074-9015 (Online)

DOI: https://doi.org/10.5815/ijitcs

Website: https://www.mecs-press.org/ijitcs

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 129


IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and indispensable reading for people working at the cutting edge of information technology and computer science applications.

 

IJITCS has been abstracted or indexed by several world-class databases: Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.

Latest Issue
Most Viewed
Most Downloaded

IJITCS Vol. 16, No. 1, Feb. 2024

REGULAR PAPERS

SSKHOA: Hybrid Metaheuristic Algorithm for Resource Aware Task Scheduling in Cloud-fog Computing

By M. Santhosh Kumar, K. Ganesh Reddy, Rakesh Kumar Donthi

DOI: https://doi.org/10.5815/ijitcs.2024.01.01, Pub. Date: 8 Feb. 2024

Cloud-fog computing is a new paradigm that combines cloud computing and fog computing to boost resource efficiency and distributed system performance. Task scheduling is crucial in cloud-fog computing because it determines how computing resources are divided among tasks. Our study proposes incorporating the Shark Search Krill Herd Optimization (SSKHOA) method into cloud-fog task scheduling. To enhance both the global and local search capabilities of the optimization process, SSKHOA combines the shark search algorithm and the krill herd algorithm: by modelling the swarm intelligence of krill herds and the predator-prey behavior of sharks, it quickly explores the solution space and finds near-optimal task schedules. To test the efficacy of the SSKHOA algorithm, we created a synthetic cloud-fog environment and ran experiments against traditional task scheduling techniques such as LTRA, DRL, and DAPSO. The experimental results demonstrate that SSKHOA outperformed the baseline algorithms, increasing the task success rate by 34%, reducing execution time by 36%, and reducing makespan by 54%.
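The SSKHOA metaheuristic itself is the paper's contribution and is not reproduced here, but the scheduling objective it optimizes can be sketched. The following is a minimal illustration with hypothetical task lengths and VM speeds, using plain random search as a stand-in for the metaheuristic's exploration of the assignment space:

```python
import random

def makespan(assignment, task_lengths, vm_speeds):
    """Makespan = finish time of the busiest VM.

    assignment[i] is the index of the VM that task i is mapped to."""
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)

def random_search(task_lengths, vm_speeds, iters=2000, seed=42):
    """Stand-in for a metaheuristic: keep the best random assignment."""
    rng = random.Random(seed)
    n, m = len(task_lengths), len(vm_speeds)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = [rng.randrange(m) for _ in range(n)]
        cost = makespan(cand, task_lengths, vm_speeds)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

tasks = [40, 25, 60, 15, 30, 50, 20, 45]  # hypothetical task lengths (e.g. MI)
vms = [1.0, 2.0, 1.5]                     # hypothetical VM speeds (e.g. MIPS)
plan, cost = random_search(tasks, vms)
print(cost)
```

A real metaheuristic such as SSKHOA replaces the random candidate generation with guided exploration, but the fitness function it minimizes has this shape.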

[...] Read more.
Towards Effective Solid Waste Management: A Mobile Application for Coordinated Waste Collection and User-official Interaction

By Paudel A., Pant A., Manandhar A., Gautam B.

DOI: https://doi.org/10.5815/ijitcs.2024.01.02, Pub. Date: 8 Feb. 2024

Solid waste management is an especially important task for human health and the environment. Ineffective collection schedules and poor communication between waste-collecting institutions and local householders compel people to throw waste on the streets; even when a routine exists, people tend to miss the schedule. Our aim is to develop a mobile application involving two parties, the user and the waste management officials, where the officials act as reminders: they send a notification when they reach a checkpoint near the user, so the user can dispose of waste properly rather than leaving it on the street. An incremental model was used throughout the project; basic requirements were fulfilled first and then iterated upon to create the final product. The proposed application includes separate portals for users and waste management personnel. It improves coordination between clients and collectors and records whether the waste in an area has been collected. The survey conducted in this study involved consulting the Environment and Agricultural Department of Kathmandu Metropolitan City, which highlighted the significance of a notifying application. By providing users with information about collection schedules, the application addresses uncoordinated waste disposal, leading to better waste management practices and less unsystematic garbage disposal.

[...] Read more.
Comparative Study: Performance of MVC Frameworks on RDBMS

By M. H. Rahman, M. Naderuzzaman, M. A. Kashem, B. M. Salahuddin, Z. Mahmud

DOI: https://doi.org/10.5815/ijitcs.2024.01.03, Pub. Date: 8 Feb. 2024

The regular use of web-based applications is crucial in our everyday life. The Model View Controller (MVC) architecture is a structured design pattern that developers use to build user interfaces, and it is commonly applied to construct web-based applications. An MVC framework for the PHP scripting language is often essential for application software development, and there is significant debate over which PHP MVC framework, such as CodeIgniter, Laravel, or Phalcon, is most suitable, since no single framework caters to everyone's needs. Not all MVC frameworks are created equal, different frameworks can be combined for specific scenarios, and selecting the appropriate framework can be challenging. In this context, our paper conducts a comparative analysis of widely used PHP MVC frameworks, measuring execution time for basic operations on relational databases across different types of application software, using a large (big data) dataset. The mean insert times in MySQL for CodeIgniter, Laravel, and Phalcon were 149.64, 149.99, and 145.48, and in PostgreSQL 48.259, 49.39, and 45.87, respectively. The mean update times in MySQL were 149.64, 158.39, and 207.82, and in PostgreSQL 48.24, 49.39, and 46.64. The mean select times in MySQL were 1.60, 3.23, and 0.98, and in PostgreSQL 1.95, 4.57, and 2.36. The mean delete times in MySQL were 150.27, 156.99, and 149.63, and in PostgreSQL 42.95, 48.25, and 42.07.
These findings can help web application developers choose a proper MVC framework with their integrated development environment (IDE), and should be helpful to small, medium, and large-scale organizations in choosing the appropriate PHP framework.
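As an illustration of the measurement pattern behind such comparisons (not the paper's actual PHP/MySQL/PostgreSQL setup), the following sketch times basic operations against an in-memory SQLite database and reports mean wall-clock times:

```python
import sqlite3
import time

def time_op(label, fn, repeats=5):
    """Run fn `repeats` times and report the mean wall-clock time in ms."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000)
    mean_ms = sum(times) / len(times)
    print(f"{label}: {mean_ms:.2f} ms")
    return mean_ms

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

def insert():
    # one batch of 1000 rows per repetition
    conn.executemany("INSERT INTO t (val) VALUES (?)", [("x",)] * 1000)
    conn.commit()

def select():
    conn.execute("SELECT COUNT(*) FROM t").fetchone()

insert_ms = time_op("insert 1000 rows", insert)
select_ms = time_op("select count", select)
```

The paper's experiment applies the same repeat-and-average idea per framework, per database engine, and per CRUD operation.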

[...] Read more.
The Impact of Financial Statement Integration in Machine Learning for Stock Price Prediction

By Febrian Wahyu Christanto, Victor Gayuh Utomo, Rastri Prathivi, Christine Dewi

DOI: https://doi.org/10.5815/ijitcs.2024.01.04, Pub. Date: 8 Feb. 2024

In the capital market, investors use two methods to predict stock prices: fundamental analysis and technical analysis. In computer science, it is possible to make predictions, including stock price predictions, using machine learning (ML). While prior research suggests that combining fundamental and technical parameters should give optimal predictions, this result lacks confirmation in machine learning. This research conducts experiments using Support Vector Regression (SVR) and Support Vector Machine (SVM) as ML methods to predict stock prices, comparing three groups of parameters: technical only (TEC), financial statement only (FIN), and a combination of both (COM). Our experimental results show that integrating financial statements has a neutral impact on SVR predictions but a positive impact on SVM predictions, and the accuracy of the model in this research reached 83%.
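A minimal sketch of this experimental design, using scikit-learn's SVC and randomly generated placeholder features in place of the paper's real technical and financial-statement data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400
tec = rng.normal(size=(n, 3))                 # hypothetical technical indicators
fin = rng.normal(size=(n, 3))                 # hypothetical financial-statement ratios
y = (tec[:, 0] + fin[:, 0] > 0).astype(int)   # synthetic up/down label

accs = {}
for name, X in {"TEC": tec, "FIN": fin,
                "COM": np.hstack([tec, fin])}.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    accs[name] = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: {accs[name]:.2f}")
```

With real data, the TEC/FIN/COM comparison above is exactly the paper's question: does adding financial-statement features to technical ones change test accuracy?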

[...] Read more.
Analysis of Threats and Cybersecurity in the Oil and Gas Sector within the Context of Critical Infrastructure

By Shakir A. Mehdiyev, Mammad A. Hashimov

DOI: https://doi.org/10.5815/ijitcs.2024.01.05, Pub. Date: 8 Feb. 2024

This article explores the multifaceted challenges of ensuring the cybersecurity of critical infrastructures, a linchpin of modern society and the economy spanning pivotal sectors such as energy, transportation, and finance. In an era of accelerating digitalization and escalating dependence on information technology, safeguarding these infrastructures against evolving cyber threats is imperative. The examination dissects the vulnerabilities that plague critical infrastructures and probes the diverse spectrum of threats they confront in the contemporary cybersecurity landscape. The article also outlines innovative security strategies designed to fortify these vital systems against malicious intrusions. A distinctive aspect of this work is a nuanced case study of the oil and gas sector, strategically chosen to illustrate the vulnerability of critical infrastructures to cyber threats. By examining this sector in detail, the article sheds light on industry-specific challenges and potential solutions, deepening insight into cybersecurity dynamics within critical infrastructures.

[...] Read more.
Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik, Pulak Chandra Bhowmik, U. A. Md. Ehsan Ali, Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy that can eventually lead the fetus toward serious health problems; early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data, which provides sophisticated information by monitoring the fetal heart rate signal, is used to predict potential risks to fetal wellbeing and to draw clinical conclusions. This paper analyzes antepartum CTG data (available in the UCI Machine Learning Repository) and develops an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. The study uses the stacking approach to EL, of which a concise overview is given, and also applies distinct machine learning algorithms to the CTG dataset to determine their performance. The stacking EL technique involves four tree-based machine learning algorithms as base learners: the Random Forest, Decision Tree, Extra Trees, and Deep Forest classifiers. The CTG dataset contains 21 features, but only the 10 most important features are selected with the Chi-square method, and the features are then normalized with Min-Max scaling. Following that, grid search is applied to tune the hyperparameters of the base algorithms, and 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative assessment between the individual base learners and the EL classifier model shows the EL classifier's superiority in predicting fetal health risks, with an accuracy of about 96.05%. The study concludes that the stacking EL approach can be a substantial paradigm in machine learning studies for improving model accuracy and reducing error rates.
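The pipeline described can be sketched with scikit-learn. This is an illustrative approximation: the data is synthetic, Deep Forest (the paper's fourth base learner) requires an external library and is omitted, and the Min-Max scaler is applied before the chi-square selector because `chi2` requires non-negative inputs:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the UCI CTG data: 21 features, 3 health classes.
X, y = make_classification(n_samples=600, n_features=21, n_informative=10,
                           n_classes=3, random_state=0)

base = [("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("et", ExtraTreesClassifier(random_state=0))]

stack = make_pipeline(
    MinMaxScaler(),            # normalize to [0, 1]; also makes chi2 valid
    SelectKBest(chi2, k=10),   # keep the 10 most important features
    StackingClassifier(estimators=base,
                       final_estimator=LogisticRegression(max_iter=1000),
                       cv=10))  # 10-fold CV for the meta-learner features

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
acc = stack.fit(Xtr, ytr).score(Xte, yte)
print(f"stacked accuracy: {acc:.3f}")
```

Logistic regression stands in here for whichever meta learner the paper's cross-validation selected; a grid search over the base learners' hyperparameters would wrap `stack` in `GridSearchCV`.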

[...] Read more.
Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The idea has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds. The importance of document management systems as a necessity in an organization's workplace environment cannot be overstated. Data were collected through interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures. Development followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created; its database was built with MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The system evaluation showed a successful outcome: respondents' satisfaction with the system was 96.60%, indicating that users (secretaries and departmental personnel) regarded the document system as adequate and excellent enough to meet the specified requirements. Results also showed that the system yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency, and that well-known document management systems help organizations hold and manage a substantial portion of their knowledge assets, including documents and other associated items.

[...] Read more.
Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani, James I. Obuhuma, Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real time. These systems contain a great deal of information and hence require secure authentication, the process of verifying an entity's or device's identity to allow it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used; password-based authentication has weaknesses that can be easily compromised. Cyber-attacks on ERP systems have become common at institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks targeting authentication vulnerabilities, resulting in damage to the institutions' reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at developing and validating a multi-factor authentication prototype to improve ERP system security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art approach being adopted to strengthen systems' authentication security. The research used an exploratory sequential design involving a survey of chartered Kenyan universities, with questionnaires used to collect data that was then analyzed using descriptive and inferential statistics; stratified, random, and purposive sampling techniques were used to establish the sample size and target group.
The dependent variable was limited to a security rating with respect to confidentiality, integrity, availability, and usability, while the independent variables were limited to adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have the greatest impact on system security; these three variables formed the basis of the proposed multi-factor authentication framework for improved ERP system security.
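One concrete "something the user has" factor that such a framework could incorporate is a time-based one-time password (TOTP, RFC 6238), which can be implemented with the Python standard library alone. This is a generic sketch, not the paper's prototype:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(key, t, digits)

# RFC 6238 test vector: this secret at t=59 yields the 8-digit code 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

A login flow would verify the password (something the user knows) first, then compare the submitted code against `totp(shared_secret)`, typically accepting the adjacent time window to tolerate clock drift.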

[...] Read more.
Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag, Ghadi H. Shaheen, Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

Traffic accidents are one of the main causes of mortality; worldwide, they were projected to become the third leading cause of death in 2020. In Saudi Arabia, there are more than 460,000 car accidents every year, and the number is rising, especially during busy periods such as Ramadan and the Hajj season. The Saudi government is making efforts to lower the nation's car accident rate. This paper suggests a business process improvement for the car accident reports handled by Najm, in accordance with Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help expedite the process and minimize turnaround time; drones provide quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant performance impact: turnaround time improved by about 40%. Therefore, using drones can enhance Najm's accident response process in Saudi Arabia.

[...] Read more.
Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar, Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks are a branch of artificial intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of artificial intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI), and considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation of data. ANNs are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are used for problem solving are pattern recognition, data analysis, control, and clustering. ANNs have abundant features, including high processing speed and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of neural networks and artificial intelligence, provide an overview of where AI and ANNs are used, and discuss the critical role they play in different areas.

[...] Read more.
A Comparative Analysis of Algorithms for Heart Disease Prediction Using Data Mining

By Snigdho Dip Howlader, Tushar Biswas, Aishwarjyo Roy, Golam Mortuja, Dip Nandi

DOI: https://doi.org/10.5815/ijitcs.2023.05.05, Pub. Date: 8 Oct. 2023

Heart disease is very common today, with death rates climbing every year. Predicting heart disease cases is a topic that has been studied in data and medical science for many years. The study in this paper compares the different algorithms that have been used in pattern analysis and prediction of heart disease. Past approaches have combined machine learning and data mining concepts derived from statistical analysis and related techniques. Many factors can be considered when attempting to analytically predict instances of heart disease, such as age, gender, and resting blood pressure; eight such factors are taken into consideration in this qualitative comparison. As the study uses a particular dataset, the output may vary when the methods are applied to different datasets. The research compares Naive Bayes, Decision Tree, Random Forest, and Logistic Regression. After multiple runs, the training and testing accuracies were obtained and recorded. The observations indicate that Random Forest and Decision Tree have the highest accuracy in predicting heart disease on the dataset provided, while Naive Bayes has the least accurate results under the given contexts.
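The comparison described can be sketched with scikit-learn; the dataset here is synthetic (the paper's actual data and its eight named predictors are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 8 predictors (age, gender, resting blood
# pressure, etc. in the paper), binary heart-disease label.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=1),
    "Random Forest": RandomForestClassifier(random_state=1),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    model.fit(Xtr, ytr)
    scores[name] = model.score(Xte, yte)
    print(f"{name}: train={model.score(Xtr, ytr):.2f} test={scores[name]:.2f}")
```

Which model wins depends on the dataset, which is exactly the paper's caveat about results varying across datasets.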

[...] Read more.
Development of an Interactive Dashboard for Analyzing Autism Spectrum Disorder (ASD) Data using Machine Learning

By Avishek Saha, Dibakar Barua, Mahbub C. Mishu, Ziad Mohib, Sumaya Binte Zilani Choya

DOI: https://doi.org/10.5815/ijitcs.2022.04.02, Pub. Date: 8 Aug. 2022

Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects a person's ability to communicate and interact with others for the rest of their life, impairing comprehension and social interaction. People with ASD experience a wide range of symptoms, including difficulties interacting with others, repeated behaviors, and an inability to function successfully in other areas of everyday life. Autism can be diagnosed at any age and is referred to as a "behavioral disorder" since symptoms usually appear in the first two years of life. The majority of people are unfamiliar with the illness and so do not know whether a person has the disorder; rather than aiding the sufferer, this typically leads to his or her isolation from society. The problems associated with ASD start in childhood and extend into adolescence and adulthood. In this paper, we studied 25 research articles on ASD prediction using machine learning techniques and analyzed the data and findings of those publications across their various approaches and algorithms. The techniques are primarily assessed using four publicly accessible non-clinical ASD datasets. We found that the Support Vector Machine (SVM) and Convolutional Neural Network (CNN) provide the most accurate results compared to other techniques. We then developed an interactive dashboard using Tableau and Python to analyze autism data.

[...] Read more.
An Efficient Algorithm for Density Based Subspace Clustering with Dynamic Parameter Setting

By B. Jaya Lakshmi, K. B. Madhuri, M. Shashi

DOI: https://doi.org/10.5815/ijitcs.2017.06.04, Pub. Date: 8 Jun. 2017

Density-based subspace clustering algorithms have gained importance owing to their ability to identify arbitrarily shaped subspace clusters. Density-connected SUBspace CLUstering (SUBCLU) uses two input parameters, epsilon and minpts, whose values are the same in all subspaces, which leads to a significant loss of cluster quality. Two important issues must be handled: first, cluster densities vary across subspaces, a phenomenon known as density divergence; second, the density of clusters within a subspace may vary due to the data characteristics, known as multi-density behavior. To handle these two issues, the authors propose an efficient algorithm for generating subspace clusters by appropriately fixing the input parameter epsilon. Version 1 of the proposed algorithm computes epsilon dynamically for each subspace based on the maximum spread of the data. To handle data that exhibits multi-density behavior, the algorithm is further refined in version 2: the initial value of epsilon is set to half the value obtained by version 1 for a subspace, and a small step value 'delta' is used to finalize epsilon separately for each cluster through step-wise refinement, forming multiple higher-dimensional subspace clusters. The proposed algorithm is implemented and tested on various benchmark and synthetic datasets, and it outperforms SUBCLU in terms of cluster quality and execution time.
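The version-1 idea of tying epsilon to the spread of the data in each subspace can be illustrated with scikit-learn's DBSCAN; the scaling rule and the subspaces chosen below are hypothetical stand-ins for the paper's actual formula:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Synthetic 4-dimensional data with three dense groups.
X, _ = make_blobs(n_samples=300, centers=3, n_features=4, random_state=7)

def dynamic_eps(X, cols, scale=0.1):
    """Illustrative rule: tie epsilon to the maximum spread of the data
    within the subspace (the exact formula is the paper's)."""
    sub = X[:, cols]
    spread = np.linalg.norm(sub.max(axis=0) - sub.min(axis=0))
    return scale * spread

for cols in [[0, 1], [2, 3], [0, 1, 2, 3]]:
    eps = dynamic_eps(X, cols)
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X[:, cols])
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"subspace {cols}: eps={eps:.2f}, clusters={n_clusters}")
```

A fixed epsilon shared by all three subspaces would ignore that each subspace has its own scale; computing it per subspace is the density-divergence remedy the abstract describes.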

[...] Read more.
An E-Services Success Measurement Framework

By Abdel Nasser H. Zaied

DOI: https://doi.org/10.5815/ijitcs.2012.04.03, Pub. Date: 8 Apr. 2012

The introduction of e-service solutions within the public sector has primarily been concerned with moving away from traditional information monopolies and hierarchies. E-services aim to increase the convenience and accessibility of government services and information for citizens. Providing services to the public through the Web may lead to faster and more convenient access to government services with fewer errors; it also means that governmental units may realize increased efficiency, cost reductions, and potentially better customer service. The main objectives of this work are to study and identify the success criteria of e-service delivery and to propose a comprehensive, multidimensional framework of e-service success. To examine the validity of the proposed framework, a sample of 200 e-service users was asked to assess their perspectives on e-service delivery in some Egyptian organizations. The results showed that the proposed framework is applicable and implementable for e-service evaluation; they also show that it may assist decision makers and e-service system designers in considering different criteria and measures before committing to a particular choice of e-service, or in evaluating any existing e-service system.

[...] Read more.
Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti, Owusu Nyarko-Boateng, Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in the k-fold cross-validation technique for training machine learning predictive models is an essential element that impacts model performance. A right choice of k results in better accuracy, while a poorly chosen value might hurt the model's performance. In the literature, the most commonly used values of k are five (5) and ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor from very high variance; however, there is no formal rule. To the best of our knowledge, few experimental studies have investigated the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15, and 20) on the validation performance of four well-known machine learning algorithms: Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT), and K-Nearest Neighbours (KNN). It was observed that the best value of k and the model validation performance differ from one machine learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area under the curve with lower computational complexity than k = 10 across most of the algorithms. We discuss the study outcomes in detail and outline guidelines to help beginners in the machine learning field select the best k value and machine learning algorithm for a given task.
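The experimental grid described above is straightforward to reproduce in outline with scikit-learn (synthetic data here; the paper's datasets and tuning are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=3)

models = {"GBM": GradientBoostingClassifier(random_state=3),
          "LR": LogisticRegression(max_iter=1000),
          "DT": DecisionTreeClassifier(random_state=3),
          "KNN": KNeighborsClassifier()}

# Mean cross-validated accuracy for each (algorithm, k) pair.
results = {}
for name, model in models.items():
    for k in (3, 5, 7, 10, 15, 20):
        results[(name, k)] = cross_val_score(model, X, y, cv=k).mean()
        print(f"{name} k={k}: {results[(name, k)]:.3f}")
```

Note that larger k means more model fits (k of them per score), which is the computational-cost side of the trade-off the paper weighs against bias and variance.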

[...] Read more.
Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. This idea has advanced beyond some of the steps where developers have made it simple for anyone to access papers in a matter of seconds. It is impossible to overstate the importance of document management systems as a necessity in the workplace environment of an organization. Interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. The development approach followed a software development methodology called Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and PHP Programming language. The results of the system evaluation showed a successful outcome. After using the system that was created, respondents' satisfaction with it was 96.60%. This shows that the document system was regarded as adequate and excellent enough to achieve or meet the specified requirement when users (secretaries and departmental personnel) used it. Result showed that the system developed yielded an accuracy of 95% and usability of 99.20%. The report came to the conclusion that a suggested electronic document management system would improve user happiness, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly assist in holding and managing a substantial portion of the knowledge assets, which include documents and other associated items, of Organizations.

[...] Read more.
Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani James I. Obuhuma Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real-time. These systems contain large amounts of sensitive information and hence require secure authentication. Authentication in this case refers to the process of verifying an entity’s or device’s identity to allow it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. A password-based authentication approach has weaknesses that can be easily compromised. Cyber-attacks targeting these ERP systems have become common at institutions of higher learning and cannot be underestimated, as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks that targeted authentication vulnerabilities, resulting in damage to the institutions’ reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication. The study aimed at developing and validating a multi-factor authentication prototype to improve ERP systems’ security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art technology being adopted to strengthen systems’ authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group. The dependent variable for the study was limited to the security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to the adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established vulnerabilities, information security policies, and user training as having the highest impact on system security. These three variables hence acted as the basis for the proposed multi-factor authentication framework for improving ERP system security.
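As a rough illustration of how two of the factors above could be combined, the sketch below pairs a salted password hash ("something the user knows") with an RFC 6238 time-based one-time password ("something the user has"). It is a minimal stdlib-only sketch, not the authors' prototype; all function names and parameters are hypothetical.

```python
import hashlib
import hmac
import struct
import time


def hash_password(password: str, salt: bytes) -> bytes:
    # "Something the user knows": store only a salted hash, never the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


def totp(secret: bytes, t=None, step: int = 30) -> str:
    # "Something the user has": RFC 6238 time-based one-time password,
    # as generated by an authenticator device sharing `secret`.
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"


def authenticate(password: str, otp: str, *, salt: bytes,
                 stored_hash: bytes, secret: bytes, t=None) -> bool:
    # Both factors must pass; failing either one denies access.
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(totp(secret, t), otp)
    return knows and has
```

A third factor ("something the user is", e.g. a fingerprint match) would be checked the same way: as an additional boolean conjunct, so that compromising any single factor is not enough.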

[...] Read more.
A Fast Topological Parallel Algorithm for Traversing Large Datasets

By Thiago Nascimento Rodrigues

DOI: https://doi.org/10.5815/ijitcs.2023.01.01, Pub. Date: 8 Feb. 2023

This work presents a parallel implementation of a graph-generating algorithm designed to be straightforwardly adapted to traverse large datasets. This new approach has been validated in a correlated scenario known as the word ladder problem. The new parallel algorithm induces the same topological structure proposed by its serial version and also builds the shortest path between any pair of words to be connected by a ladder of words. The implemented parallelism paradigm is Multiple Instruction Stream - Multiple Data Stream (MIMD), and the test suite embraces 23 word-ladder instances whose intermediate words were extracted from a dictionary of 183,719 words (the dataset). The word-morph quality (the shortest path between two input words) and the word-morph performance (CPU time) were evaluated against a serial implementation of the original algorithm. The proposed parallel algorithm generated the optimal solution for each pair of words tested; that is, the minimum word ladder connecting an initial word to a final word was found. Thus, there was no negative impact on the quality of the solutions compared with those obtained through the serial ANG algorithm. However, there was an outstanding improvement in the CPU time required to build the word ladder solutions. In fact, the time improvement was up to 99.85%, and speedups greater than 2.0X were achieved with the parallel algorithm.
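For readers unfamiliar with the word ladder problem, a serial breadth-first search sketch is shown below; the paper's MIMD algorithm parallelizes this kind of frontier expansion across processing streams. This is an illustrative sketch under that assumption, not the ANG algorithm itself, and the function name is hypothetical.

```python
from collections import deque
from string import ascii_lowercase


def word_ladder(start, goal, dictionary):
    """Shortest ladder from start to goal, changing one letter at a time.

    BFS guarantees the first path found is a minimum-length ladder.
    """
    words = set(dictionary) | {goal}
    parent = {start: None}       # BFS tree: child word -> predecessor
    frontier = deque([start])
    while frontier:
        word = frontier.popleft()
        if word == goal:
            # Walk the parent links back to reconstruct the ladder.
            path = []
            while word is not None:
                path.append(word)
                word = parent[word]
            return path[::-1]
        # Expand every one-letter mutation that is a real dictionary word.
        for i in range(len(word)):
            for c in ascii_lowercase:
                nxt = word[:i] + c + word[i + 1:]
                if nxt in words and nxt not in parent:
                    parent[nxt] = word
                    frontier.append(nxt)
    return None  # no ladder connects the two words
```

A parallel MIMD version would let each worker expand a disjoint slice of the frontier, synchronizing on the shared visited set between levels.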

[...] Read more.
Detecting and Preventing Common Web Application Vulnerabilities: A Comprehensive Approach

By Najla Odeh Sherin Hijazi

DOI: https://doi.org/10.5815/ijitcs.2023.03.03, Pub. Date: 8 Jun. 2023

Web applications are becoming very important in our lives, as many sensitive processes depend on them. It is therefore critical that they remain safe and invulnerable to malicious attacks. Most studies focus on ways to detect these attacks individually. In this study, we develop a new vulnerability system to detect and prevent vulnerabilities in web applications. It has multiple functions to deal with some recurring vulnerabilities. The proposed system provides detection and prevention of four types of vulnerabilities: SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. We investigated how each type of vulnerability works, then the process of detecting it, and finally provided prevention measures for it. This achieved three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through a practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this study lie in its potential to enhance the security of online systems and reduce the risk of data breaches.
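A minimal sketch of signature-based detection and prevention of the kind described (SQL injection, XSS, remote code execution) might look as follows. The signature patterns, category names, and functions are illustrative assumptions, not the proposed system's actual rules.

```python
import re

# Hypothetical signature set: each vulnerability class maps to regexes that
# flag suspicious patterns in incoming request parameters.
SIGNATURES = {
    "sql_injection": [r"(?i)\bunion\b.*\bselect\b", r"(?i)or\s+1\s*=\s*1", r"--\s*$"],
    "xss": [r"(?i)<script\b", r"(?i)on\w+\s*=", r"(?i)javascript:"],
    "remote_code_execution": [r"(?i);\s*(cat|ls|rm|wget|curl)\b", r"(?i)\beval\s*\("],
}


def scan_input(value):
    """Detection: return the vulnerability classes whose signatures match."""
    return [name for name, patterns in SIGNATURES.items()
            if any(re.search(p, value) for p in patterns)]


def sanitize(value):
    """Prevention: HTML-escape and strip SQL comment markers (simplified)."""
    escaped = (value.replace("&", "&amp;").replace("<", "&lt;")
                    .replace(">", "&gt;").replace('"', "&quot;"))
    return escaped.replace("--", "")
```

Real systems pair such signatures with parameterized queries and context-aware output encoding; a regex filter alone is easy to bypass, which is why the paper combines detection with per-vulnerability prevention.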

[...] Read more.
Development of IoT Cloud-based Platform for Smart Farming in the Sub-Saharan Africa with Implementation of Smart-irrigation as Test-Case

By Supreme A. Okoh Elizabeth N. Onwuka Bala A. Salihu Suleiman Zubairu Peter Y. Dibal Emmanuel Nwankwo

DOI: https://doi.org/10.5815/ijitcs.2023.02.01, Pub. Date: 8 Apr. 2023

The UN Department of Economic and Social Affairs has predicted that the world population will increase by 2 billion by 2050, with over 50% of that growth coming from Sub-Saharan Africa (SSA). Considering the level of poverty and food insecurity in the region, there is an urgent need for a sustainable increase in agricultural produce. However, the farming approach in the region is primarily traditional. Traditional farming is characterized by high labor costs, low production, and under- or oversupply of farm inputs. All these factors make farming unappealing to many. The use of digital technologies such as broadband, the Internet of Things (IoT), cloud computing, and big data analytics promises improved returns on agricultural investments and could make farming appealing even to the youth. However, the initial cost of smart farming can be high. The development of a dedicated IoT cloud-based platform is therefore imperative, so that farmers can subscribe and have their farms managed on the platform. It should be noted that the majority of farmers in SSA are smallholders who are poor, uneducated, and live in rural areas, yet produce about 80% of the food. They mostly use 2G phones, which are not internet-enabled. These peculiarities must be factored into the design of any functional IoT platform that would serve this group. This paper presents the development of such a platform, which was tested with smart irrigation of maize crops in a testbed. Besides the convenience provided by the smart system, it recorded irrigation water savings of over 36% compared to the control method, which represents how irrigation is done traditionally.
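The smart-irrigation control loop described above can be sketched as a simple threshold rule pushed from the cloud platform to a field valve, with a plain-text SMS report for farmers on 2G phones. The moisture thresholds, class, and function names below are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class SoilReading:
    moisture_pct: float  # volumetric soil moisture from an IoT probe
    temp_c: float        # ambient temperature at the plot

# Hypothetical band for a maize plot; real values come from agronomic data.
MOISTURE_LOW, MOISTURE_HIGH = 30.0, 55.0


def irrigation_command(reading):
    """Decide the valve action the cloud platform pushes to a field actuator."""
    if reading.moisture_pct < MOISTURE_LOW:
        return "OPEN_VALVE"    # soil too dry: start irrigating
    if reading.moisture_pct > MOISTURE_HIGH:
        return "CLOSE_VALVE"   # soil saturated: stop to avoid waste
    return "HOLD"              # within band: no change, saving water


def sms_report(reading, command):
    # Farmers on 2G phones receive plain-text SMS instead of app notifications.
    return f"Moisture {reading.moisture_pct:.0f}% Temp {reading.temp_c:.0f}C -> {command}"
```

Keeping the valve closed whenever moisture sits inside the band is where the reported water saving over always-on traditional irrigation would come from.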

[...] Read more.
A Trust Management System for the Nigerian Cyber-health Community

By Ifeoluwani Jenyo Elizabeth A. Amusan Justice O. Emuoyibofarhe

DOI: https://doi.org/10.5815/ijitcs.2023.01.02, Pub. Date: 8 Feb. 2023

Trust is a basic requirement for the acceptance and adoption of new services related to health care and is therefore vital in ensuring that the integrity of patient information shared among multiple care providers is preserved and that no one has tampered with it. The cyber-health community in Nigeria is in its infancy, with health care systems and services being mostly fragmented, disjointed, and heterogeneous, with strong local autonomy, and distributed among several healthcare providers’ platforms. There is a need for a trust management structure that guarantees privacy and confidentiality and mitigates vulnerability to privacy theft. In this paper, we developed an efficient Trust Management System that hybridizes Real-Time Integrity Check (RTIC) and Dynamic Trust Negotiation (DTN), premised on the Confidentiality, Integrity, and Availability (CIA) model of information security. This was achieved through the design and implementation of an indigenous and generic architectural framework and model for a secured Trust Management System, with the Advanced Encryption Standard (AES-256) algorithm used to secure health records during transmission. The developed system achieved a reliability score of 0.97, accuracy of 91.30%, and availability of 96.52%.
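To illustrate the Real-Time Integrity Check idea, the sketch below seals a health record with an HMAC-SHA256 tag so any tampering in transit is detectable. The paper uses AES-256 for confidentiality during transmission; this stdlib-only sketch covers only the integrity half of the CIA model, and all names are hypothetical.

```python
import hashlib
import hmac
import json


def seal_record(record, key):
    """Attach an HMAC-SHA256 tag computed over a canonical JSON payload."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}


def verify_record(sealed, key):
    """Real-time integrity check: recompute the tag and compare in constant time."""
    expected = hmac.new(key, sealed["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])
```

In a full system the payload would additionally be AES-256 encrypted before transmission, so the record is both unreadable and tamper-evident between care providers.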

[...] Read more.
Ontology-driven Intelligent IT Incident Management Model

By Bisrat Betru Fekade Getahun

DOI: https://doi.org/10.5815/ijitcs.2023.01.04, Pub. Date: 8 Feb. 2023

A significant number of Information Technology incidents are reported through email. To design and implement an intelligent incident management system, it is essential to automatically classify each reported incident into a given incident category. This requires the extraction of semantic content from the reported email text. In this research work, we have attempted to classify a reported incident into a given category based on its semantic content using an ontology. We have developed an Incident Ontology that can serve as a knowledge base for the incident management system. We have also developed an automatic incident classifier that matches the semantic units of the incident report with concepts in the incident ontology. According to our evaluation, ontology-driven incident classification facilitates the process of Information Technology incident management well, as the model shows 100% recall, 66% precision, and a 79% F1-score for sample incident reports.
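The concept-matching step can be sketched as term overlap between a report and a toy ontology, together with the precision/recall/F1 computation used in evaluations like the one above. The miniature ontology and names below are made up for illustration, not the authors' Incident Ontology.

```python
# Hypothetical mini "incident ontology": each category lists concept terms.
ONTOLOGY = {
    "network": {"vpn", "router", "connectivity", "dns"},
    "hardware": {"printer", "disk", "monitor", "keyboard"},
    "access": {"password", "login", "account", "permission"},
}


def classify_incident(report):
    """Match the report's terms against ontology concepts; pick the best overlap."""
    terms = set(report.lower().split())
    scores = {cat: len(terms & concepts) for cat, concepts in ONTOLOGY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"


def precision_recall_f1(tp, fp, fn):
    """Standard metrics: F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)
```

With 100% recall and 66% precision, the harmonic mean works out to roughly the 79% F1-score the paper reports.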

[...] Read more.
Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag Ghadi H. Shaheen Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

Traffic accidents are one of the main causes of mortality. Worldwide, traffic accidents were expected to rise to become the third leading cause of death by 2020. In Saudi Arabia, there are more than 460,000 car accidents every year. The number of car accidents in Saudi Arabia is rising, especially during busy periods such as Ramadan and the Hajj season. The government of Saudi Arabia is making the required efforts to lower the nation's car accident rate. This paper suggests a business process improvement for car accident reports handled by Najm, in accordance with the Saudi Vision 2030. Given drones' success in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help to expedite the process and minimize turnaround time. In addition, drones provide quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant impact on performance of about 40% with regard to turnaround time. Therefore, using drones can enhance the accident response process with Najm in Saudi Arabia.

[...] Read more.
A Systematic Literature Review of Studies Comparing Process Mining Tools

By Cuma Ali Kesici Necmettin Ozkan Sedat Taskesenlioglu Tugba Gurgen Erdogan

DOI: https://doi.org/10.5815/ijitcs.2022.05.01, Pub. Date: 8 Oct. 2022

Process Mining (PM) and the capabilities of PM tools play a significant role in meeting the needs of organizations in terms of getting benefits from their processes and event data, especially in this digital era. The success of PM initiatives in producing the effective and efficient outputs and outcomes that organizations desire is largely dependent on the capabilities of the PM tools. This importance makes the selection of a tool for a specific context critical. In the selection process, a comparison of the available tools can lead organizations to an effective result. In order to meet this need and to give insight to both practitioners and researchers, in our study we systematically reviewed the literature and elicited the papers that compare PM tools, yielding comprehensive results through a comparison of available PM tools. It specifically delivers the tools' comparison frequency, the methods and criteria used to compare them, the strengths and weaknesses of the compared tools for the selection of appropriate PM tools, and findings related to the identified papers' trends and demographics. Although some articles compare PM tools, there is a lack of literature reviews on the studies that compare the PM tools in the market. As far as we know, this paper presents the first example of such a review in the literature.

[...] Read more.
Comparative Analysis of Data Mining Techniques to Predict Cardiovascular Disease

By Md. Al Muzahid Nayim Fahmidul Alam Md. Rasel Ragib Shahriar Dip Nandi

DOI: https://doi.org/10.5815/ijitcs.2022.06.03, Pub. Date: 8 Dec. 2022

Cardiovascular disease is the leading cause of death. Nowadays, many people live with cardiovascular disease because of an unhealthy lifestyle, and the most alarming issue is that the majority of them show no symptoms in the early stage. This is why the disease is becoming more deadly. However, medical science holds a large amount of data regarding cardiovascular disease, and this data can be used with data mining techniques to predict cardiovascular disease at an early stage and reduce its deadly effect. Here, five data mining classification techniques, namely Naïve Bayes, K-Nearest Neighbors, Support Vector Machine, Random Forest, and Decision Tree, were implemented in the WEKA tool on a dataset of 12 attributes with more than 300 instances to determine which achieves the best accuracy rate. With this research, people who are at an early stage of cardiovascular disease, or likely to become victims, can be identified more accurately.
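As a flavour of one of the five techniques, a pure-Python K-Nearest Neighbors classifier is sketched below. The study itself used the WEKA tool; this sketch and its toy feature data are illustrative assumptions only.

```python
import math
from collections import Counter


def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distances are Euclidean.
    """
    # Sort training points by distance to the query, then vote over the k closest.
    nearest = sorted(train, key=lambda point: math.dist(point[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

On a real cardiovascular dataset the feature vectors would be the 12 normalized attributes (age, blood pressure, cholesterol, and so on), and k would be tuned by cross-validation to reach the best accuracy rate.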

[...] Read more.