International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print)

ISSN: 2074-9015 (Online)

DOI: https://doi.org/10.5815/ijitcs

Website: https://www.mecs-press.org/ijitcs

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 131


IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and is indispensable reading and a key reference for those working at the cutting edge of information technology and computer science applications.

 

IJITCS has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, and others.


IJITCS Vol. 16, No. 3, Jun. 2024

REGULAR PAPERS

Analyzing Test Performance of BSIT Students and Question Quality: A Study on Item Difficulty Index and Item Discrimination Index for Test Question Improvement

By Cris Norman P. Olipas, Ruth G. Luciano

DOI: https://doi.org/10.5815/ijitcs.2024.03.01, Pub. Date: 8 Jun. 2024

This study presents a comprehensive assessment of the test performance of Bachelor of Science in Information Technology (BSIT) students in the System Integration and Architecture (SIA) course, coupled with a careful examination of the quality of the test questions, aiming to lay the groundwork for enhancing the assessment tool. Employing a cross-sectional research design, the study involved 200 fourth-year students enrolled in the course. The results revealed a significant discrepancy in scores between the upper and lower student cohorts, highlighting the necessity for targeted interventions, curriculum enhancements, and assessment refinements, particularly for the lower-performing group. Examination of the item difficulty index unveiled the need to fine-tune certain items to better suit a broader spectrum of students, although the majority of items were deemed adequately aligned with their respective difficulty levels. Additionally, analysis of the item discrimination index identified 25 items suitable for retention, 27 items warranting revision, and 3 items suitable for removal. These insights provide a valuable foundation for improving the assessment tool, thereby optimizing its capacity to evaluate students' acquired knowledge effectively. The study's novel contribution lies in its integration of student performance assessment with evaluation of assessment tool quality within the BSIT program, offering actionable insights for improving educational outcomes. By identifying challenges faced by BSIT students and proposing targeted interventions, curriculum enhancements, and assessment refinements, the research advances our understanding of effective assessment practices, and the detailed analysis of item difficulty and discrimination indices offers practical guidance for enhancing the reliability and validity of assessment tools in the BSIT program. Overall, this research contributes to the existing body of knowledge by providing empirical evidence and actionable recommendations tailored to the needs of BSIT students, promoting educational quality and student success in Information Technology.
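
Classical test theory gives simple formulas for the two indices the study relies on: the difficulty index is the proportion of examinees answering an item correctly, and the discrimination index is the difference in difficulty between the upper- and lower-scoring groups. A minimal sketch follows; the retention cut-offs below are common rules of thumb, not the study's own thresholds:

```python
def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (1 = correct, 0 = wrong)."""
    return sum(responses) / len(responses)

def item_discrimination(upper, lower):
    """Difference between item difficulty in the upper and lower scoring groups."""
    return item_difficulty(upper) - item_difficulty(lower)

def classify_item(p, d):
    """Hypothetical retention rule; these cut-offs are illustrative only."""
    if d >= 0.40 and 0.30 <= p <= 0.80:
        return "retain"
    if d >= 0.20:
        return "revise"
    return "remove"
```

An item answered correctly by 80% of the upper group but only 20% of the lower group, for instance, yields a discrimination index of 0.6 and would be retained under these illustrative cut-offs.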

Enhancing Brain Tumor Classification in MRI: Leveraging Deep Convolutional Neural Networks for Improved Accuracy

By Shourove Sutradhar Dip, Md. Habibur Rahman, Nazrul Islam, Md. Easin Arafat, Pulak Kanti Bhowmick, Mohammad Abu Yousuf

DOI: https://doi.org/10.5815/ijitcs.2024.03.02, Pub. Date: 8 Jun. 2024

Brain tumors are among the deadliest forms of cancer, and patients face a significant mortality rate. Identifying and classifying brain tumors are critical steps in understanding their behavior. The best way to treat a brain tumor depends on its type, size, and location. In the modern era, radiologists determine brain tumor locations using magnetic resonance imaging (MRI). However, manual examination of MRI scans is time-consuming and requires skill, and misdiagnosis of tumors can lead to inappropriate medical therapy, reducing patients' chances of survival. As Deep Learning (DL) technology advances, Computer-Assisted Diagnosis (CAD) and Machine Learning (ML) techniques have been developed to aid in the detection of brain tumors, allowing radiologists to identify them more accurately. This paper proposes an MRI image classifier built on a VGG16-based deep convolutional neural network (DCNN) architecture. The proposed model was evaluated on two brain MRI datasets from Kaggle. Trained on Google Colab, the proposed method achieved significant performance on the two datasets, with maximum overall accuracies of 96.67% and 97.67%, respectively. The model trained well, was highly accurate, and its performance criteria exceed those of existing techniques.

Information Security based on IoT for e-Health Care Using RFID Technology and Steganography

By Bahubali Akiwate, Sanjay Ankali, Shantappa Gollagi, Norjihan Abdul Ghani

DOI: https://doi.org/10.5815/ijitcs.2024.03.03, Pub. Date: 8 Jun. 2024

The Internet of Things (IoT) enables a broad spectrum of smart devices to connect through the Internet. Incorporating IoT sensors for remote health monitoring is a game-changer for the medical industry, especially in confined spaces: environmental sensors can be installed in small rooms to monitor an individual's health. Built on low-cost sensors at the core of the IoT physical layer, Radio Frequency Identification (RFID) is advanced enough to facilitate personal healthcare. Recently, RFID technology has been utilized in the healthcare sector to enhance accurate data collection through various software systems. Steganography, in turn, makes user data more secure than ever before. The proposed solution addresses the necessity of upholding secrecy in a widely used healthcare system. Health monitoring sensors are a crucial tool for analyzing real-time data and for developing the medical box, an innovative solution that provides patients with access to medical assistance. By monitoring patients remotely, healthcare professionals can provide prompt medical attention whenever needed while ensuring patients' privacy and personal information are protected.
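
The abstract does not specify which steganographic scheme is used. As a purely illustrative sketch, least-significant-bit (LSB) embedding, one of the simplest steganography techniques, can hide message bytes inside 8-bit pixel (or sensor) values like this:

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of 8-bit pixel values.
    Requires len(pixels) >= 8 * len(message); a real system would also embed the length."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes hidden bytes from the pixel LSBs, MSB-first per byte."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit of each value changes, every cover value is perturbed by at most 1, which is what makes the hidden payload visually imperceptible.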

A PRISMA-driven Review of Speech Recognition based on English, Mandarin Chinese, Hindi and Urdu Language

By Muhammad Hazique Khatri, Humera Tariq, Maryam Feroze, Ebad Ali, Zeeshan Anjum Junaidi

DOI: https://doi.org/10.5815/ijitcs.2024.03.04, Pub. Date: 8 Jun. 2024

The Urdu language ranks tenth worldwide and is continuously progressing. This PRISMA-driven review investigates the Urdu speech recognition literature in depth and compares it with frameworks for English, Mandarin Chinese, and Hindi to build a wider global perspective. The main objective is to unify progress on classical Artificial Intelligence (AI) and recent Deep Neural Network (DNN) based speech recognition pipelines, encompassing dataset challenges, feature extraction methods, experimental design, and the smooth integration of both Acoustic Models (AM) and Language Models (LM) using transcriptions. A total of 176 articles were extracted from the Google Scholar database for each language with a custom query design. Inclusion criteria and quality assessment narrowed these to 5 review and 42 research articles. Comparative research questions were addressed, and findings were organized by four speech types: isolated, connected, continuous, and spontaneous. The findings show that English, Mandarin, and Hindi used spontaneous speech corpora of 300, 200, and 1108 hours respectively, which is quite remarkable compared to the only 9.5 hours of Urdu spontaneous speech data. For the same reason, the Word Error Rate (WER) for English falls below 5%, while for Mandarin Chinese the alternative metric, Character Error Rate (CER), mostly lies below 25%. The success of English and Chinese speech recognition owes its unmatched accuracy to the wide use of DNNs such as Conformer, Transformer, and E2E-attention models, in contrast to the conventional feature extraction and AI models (LSTM, TDNN, RNN, HMM, GMM-HMM) used frequently for both Hindi and Urdu.
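
The WER metric cited above is the word-level edit distance between a reference transcript and a recognizer's hypothesis, normalized by the reference length. A minimal sketch using the standard dynamic programme:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)
```

CER is computed identically, only over characters instead of words, which is why it is preferred for Mandarin Chinese, where word segmentation is itself ambiguous.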

Fundamental Frequency Extraction by Utilizing Accumulated Power Spectrum based Weighted Autocorrelation Function in Noisy Speech

By Nargis Parvin, Moinur Rahman, Irana Tabassum Ananna, Md. Saifur Rahman

DOI: https://doi.org/10.5815/ijitcs.2024.03.05, Pub. Date: 8 Jun. 2024

This research proposes an efficient method, well suited to speech processing applications, for retrieving an accurate pitch from speech signals in noisy conditions. To this end, we present a fundamental frequency extraction algorithm that is tolerant to non-stationary changes in the amplitude and frequency of the input signal. Moreover, we use an accumulated power spectrum instead of the conventional power spectrum; it uses shorter sub-frames of the input signal to reduce the noise characteristics of the speech. To increase the accuracy of fundamental frequency extraction, we concentrate on maintaining the speech harmonics in their original state while suppressing the noise components of the noisy speech signal. The suggested approach consists of two stages: producing the accumulated power spectrum of the speech signal and weighting it with the average magnitude difference function. The experimental results indicate that the proposed technique performs better in noisy conditions than existing state-of-the-art methods such as the Weighted Autocorrelation Function (WAF), PEFAC, and BaNa.
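
As background, the autocorrelation family of pitch trackers that the proposed method builds on picks the lag at which the signal best matches a shifted copy of itself; the fundamental frequency is the sampling rate divided by that lag. The sketch below is a plain ACF estimator on a synthetic tone, not the paper's accumulated-spectrum weighted variant:

```python
import math

def estimate_f0(signal, fs, f_min=50.0, f_max=400.0):
    """Return fs / lag for the lag maximising the autocorrelation
    within the plausible pitch range [f_min, f_max]."""
    lag_min = int(fs / f_max)
    lag_max = int(fs / f_min)
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(signal[n] * signal[n + lag] for n in range(len(signal) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

# a 100 Hz sine sampled at 8 kHz (period = 80 samples)
fs = 8000
x = [math.sin(2 * math.pi * 100 * n / fs) for n in range(800)]
```

For this clean tone the autocorrelation peaks at a lag of one period (80 samples), so the estimate lands on 100 Hz; the paper's contribution is keeping that peak reliable when noise is added.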

Advanced Deep Learning Models for Accurate Retinal Disease State Detection

By Hossein Abbasi, Ahmed Alshaeeb, Yasin Orouskhani, Behrouz Rahimi, Mostafa Shomalzadeh

DOI: https://doi.org/10.5815/ijitcs.2024.03.06, Pub. Date: 8 Jun. 2024

Retinal diseases pose a significant challenge in medical diagnosis, with potential complications for vision and overall ocular health. This research addresses the challenge of automating the detection of retinal disease states using advanced deep learning models, including VGG-19, ResNet-50, InceptionV3, and EfficientNetV2. Each model leverages transfer learning, drawing insights from a substantial dataset of optical coherence tomography (OCT) images, and classifies images into four distinct retinal conditions: choroidal neovascularization, drusen, diabetic macular edema, and a healthy state. The training dataset, sourced from publicly available repositories of OCT retinal images, spans all four disease categories. Our findings reveal that among the models tested, EfficientNetV2 demonstrates superior performance, with a remarkable classification accuracy of 98.92%, precision of 99.6%, and recall of 99.4%, surpassing the other models.
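
The precision and recall figures quoted above are typically computed one-vs-rest per class from the confusion counts. A minimal sketch (the class labels here are illustrative, not the dataset's exact label strings):

```python
def precision_recall(y_true, y_pred, positive):
    """One-vs-rest precision and recall for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With four retinal classes, the per-class values would usually be averaged (macro or weighted) to give the single precision and recall numbers the abstract reports.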

Python Data Analysis and Visualization in Java GUI Applications Through TCP Socket Programming

By Bala Dhandayuthapani V.

DOI: https://doi.org/10.5815/ijitcs.2024.03.07, Pub. Date: 8 Jun. 2024

Python is popular in artificial intelligence (AI) and machine learning (ML) due to its versatility, adaptability, rich libraries, and active community. Existing Python interoperability with Java had previously been investigated using socket programming in non-graphical user interface (GUI) settings. This work integrates Python's data analysis modules such as NumPy, pandas, and SciPy, visualization modules such as Matplotlib and Seaborn, and scikit-learn for machine learning into Java GUI applications such as Java applets, Java Swing, and JavaFX. The core integration mechanism is TCP socket programming, which carries instructions and data between Python and the Java GUIs to provide interoperability. This empirical research integrates Python data analysis, visualization graphs, and machine learning functionality into Java applications without requiring any additional or third-party libraries, and the experimentation confirmed the advantages and challenges of this integration with a concrete solution. The intended audience extends to software developers, data analysts, and scientists, given Python's broad applicability to AI and ML. The work emphasizes the self-sufficiency of the integration process and suggests future research directions, including comparative analysis with Java's native capabilities, interactive data visualization using libraries like Altair, Bokeh, Plotly, and Pygal, performance and security considerations, and no-code and low-code implementations.
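
The Python side of such a TCP bridge can be sketched as follows. This is a hypothetical minimal server, not the paper's implementation: a Java Swing client would open a Socket to the returned port and exchange newline-terminated messages, and the placeholder reply would be replaced by real pandas/Matplotlib work.

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one connection, read a command line, send back a result line.
    Port 0 lets the OS pick a free port; the caller (e.g. a Java GUI) connects to it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def handler():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            conn.sendall(f"OK:{cmd}\n".encode())  # placeholder for real analysis output
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return srv.getsockname()[1]  # actual port for the client to connect to
```

On the Java side the mirror image is a `new Socket("127.0.0.1", port)` with buffered reader/writer streams, which is what keeps the approach free of third-party libraries on both ends.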

Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

Document management is one area that has seen rapid growth and differing perspectives from many developers in recent years. The idea has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds, and it is impossible to overstate the importance of document management systems as a necessity in an organization's workplace environment. Interviews, scenario creation based on participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. Development followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was built with MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The system evaluation showed a successful outcome: after using the system, respondents' satisfaction was 96.60%, indicating that users (secretaries and departmental personnel) regarded the document system as adequate and excellent enough to meet the specified requirements. Results also showed that the system achieved an accuracy of 95% and usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-designed document management systems help hold and manage a substantial portion of an organization's knowledge assets, including documents and other associated items.

Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik, Pulak Chandra Bhowmik, U. A. Md. Ehsan Ali, Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which can eventually lead the fetus towards serious health problems. However, early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data provides sophisticated information by monitoring the fetal heart rate signal and is used to predict potential risks to fetal wellbeing and to make clinical conclusions. This paper proposes to analyze antepartum CTG data (available in the UCI Machine Learning Repository) and to develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL follows the Stacking approach, of which a concise overview is given and developed accordingly. The study also applies distinct machine learning algorithms to the CTG dataset and determines their performance. The Stacking EL technique involves four tree-based machine learning algorithms as base learners: the Random Forest, Decision Tree, Extra Trees, and Deep Forest classifiers. The CTG dataset contains 21 features, but only the 10 most important features are selected with the Chi-square method for this experiment, and the features are then normalized with Min-Max scaling. Following that, Grid Search is applied to tune the hyperparameters of the base algorithms. Subsequently, 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative assessment between the individual base learners and the EL classifier model shows the EL classifier's superiority in fetal health risk prediction, securing an accuracy of about 96.05%. The study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve model accuracy and reduce error rates.
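
The Min-Max scaling step mentioned above maps each feature column to [0, 1] via (x - min) / (max - min). A minimal sketch of that preprocessing step (a generic helper, not the study's code):

```python
def min_max_scale(column):
    """Rescale a feature column to [0, 1]; constant columns map to all zeros."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(x - lo) / (hi - lo) for x in column]
```

Scaling matters here because some of the base learners' tuned hyperparameters (and the Chi-square scores used for feature selection) are sensitive to feature magnitudes.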

Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani, James I. Obuhuma, Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real time. These systems contain large amounts of information and hence require secure authentication. Authentication here refers to the process of verifying an entity's or device's identity in order to grant access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. Password-based authentication has weaknesses that can be easily compromised. Cyber-attacks on ERP systems have become common at institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks that targeted authentication vulnerabilities, damaging the institutions' reputations and credibility. This research therefore aimed to establish the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and to propose a solution to improve ERP system authentication. The study set out to develop and validate a multi-factor authentication prototype to improve ERP system security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art approach being adopted to strengthen systems' authentication security. This research used an exploratory sequential design involving a survey of chartered Kenyan universities, where questionnaires collected data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group.
The dependent variable was limited to a security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to the adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have the highest impact on system security. These three variables therefore form the basis of the proposed multi-factor authentication framework for improved ERP system security.
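
One widely used "something you have" factor that such a prototype could incorporate is a one-time password generator. The sketch below implements HOTP per RFC 4226 (HMAC-SHA1 over a counter, dynamic truncation, low-order decimal digits); it is an illustration of the factor type, not the study's actual prototype:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

TOTP, the time-based variant common in authenticator apps, is the same function with the counter derived from the current Unix time divided by a 30-second step.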

Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag, Ghadi H. Shaheen, Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

Traffic accidents are one of the main causes of mortality, having risen worldwide to become the third expected cause of death by 2020. In Saudi Arabia, there are more than 460,000 car accidents every year, and the number is rising, especially during busy periods such as Ramadan and the Hajj season. The government of Saudi Arabia is making the required efforts to lower the nation's car accident rate. This paper suggests a business process improvement for the car accident reports handled by Najm, in accordance with the Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help expedite the process and minimize turnaround time; in addition, drones provide quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant impact on performance: about a 40% improvement in turnaround time. Therefore, using drones can enhance Najm's accident response process in Saudi Arabia.

Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar, Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks (ANNs) are a branch of Artificial Intelligence (AI) and have been accepted as a new computing technology in computer science. This paper reviews the field of artificial intelligence, focusing on recent applications that use ANNs and AI. It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation of data. ANNs are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common problem-solving applications of neural networks are pattern recognition, data analysis, control, and clustering. ANNs have abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of neural networks and artificial intelligence, provide an overview of the areas where AI and ANNs are used, and discuss the critical role they play in different domains.

Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti, Owusu Nyarko-Boateng, Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in the k-fold cross-validation training technique for machine learning predictive models is an essential element that impacts a model's performance. A right choice of k results in better accuracy, while a poorly chosen value might hurt the model's performance. In the literature, the most commonly used values of k are five (5) and ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor from very high variance; however, there is no formal rule. To the best of our knowledge, few experimental studies have investigated the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms: Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT), and K-Nearest Neighbours (KNN). It was observed that the value of k and the model validation performance differ from one machine learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area under the curve with lower computational complexity than k = 10 across most of the algorithms. We discuss the study outcomes in detail and outline guidelines to help beginners in the machine learning field select the best k value and machine learning algorithm for a given task.
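
The index bookkeeping behind k-fold cross-validation can be sketched as follows (a hypothetical helper, not the study's code): the samples are partitioned into k folds of near-equal size, and each fold serves exactly once as the validation set while the rest form the training set.

```python
def k_fold_indices(n_samples, k):
    """Return k (train_indices, val_indices) pairs; fold sizes differ by at most one."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((train, val))
    return splits
```

The cost trade-off the paper measures follows directly from this structure: the model is refit k times, so k = 7 needs three fewer fits than k = 10 while each fit still sees roughly 86% of the data.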

An Efficient Algorithm for Density Based Subspace Clustering with Dynamic Parameter Setting

By B. Jaya Lakshmi, K. B. Madhuri, M. Shashi

DOI: https://doi.org/10.5815/ijitcs.2017.06.04, Pub. Date: 8 Jun. 2017

Density-based subspace clustering algorithms have gained importance owing to their ability to identify arbitrarily shaped subspace clusters. Density-connected SUBspace CLUstering (SUBCLU) uses two input parameters, epsilon and minpts, whose values are the same in all subspaces, which leads to a significant loss of cluster quality. Two important issues must be handled. First, cluster densities vary across subspaces, a phenomenon known as density divergence. Second, the density of clusters within a subspace may vary due to the data characteristics, a phenomenon known as multi-density behavior. To handle these two issues, the authors propose an efficient algorithm for generating subspace clusters by appropriately fixing the input parameter epsilon. Version 1 of the proposed algorithm computes epsilon dynamically for each subspace based on the maximum spread of the data. To handle data that exhibits multi-density behavior, the algorithm is further refined and presented as version 2: the initial value of epsilon is set to half the value obtained in version 1 for a subspace, and a small step value 'delta' is used to finalize epsilon separately for each cluster through step-wise refinement, forming multiple higher-dimensional subspace clusters. The proposed algorithm was implemented and tested on various benchmark and synthetic datasets, and it outperforms SUBCLU in terms of cluster quality and execution time.

A Systematic Review of Natural Language Processing in Healthcare

By Olaronke G. Iroju, Janet O. Olaleke

DOI: https://doi.org/10.5815/ijitcs.2015.08.07, Pub. Date: 8 Jul. 2015

The healthcare system is a knowledge-driven industry that holds vast and growing volumes of narrative information obtained from discharge summaries and reports, physicians' case notes, and pathologists' and radiologists' reports. This information is usually stored in unstructured, non-standardized formats in electronic healthcare systems, making it difficult for those systems to understand the information content of the narratives. Thus, access to valuable and meaningful healthcare information for decision making is a challenge. Nevertheless, Natural Language Processing (NLP) techniques have been used to structure narrative information in healthcare: they can capture unstructured healthcare information, analyze its grammatical structure, determine the meaning of the information, and translate it so that it can be easily understood by electronic healthcare systems. Consequently, NLP techniques reduce cost and improve the quality of healthcare. It is against this background that this paper reviews the NLP techniques used in healthcare, their applications, and their limitations.

Software Implementation of Modular Reduction by Pseudo-mersenne Primes

By Mariia Kovtun, Vladyslav Kovtun, Oleksandr Stokipnyi, Andrew Okhrimenko

DOI: https://doi.org/10.5815/ijitcs.2023.04.01, Pub. Date: 8 Aug. 2023

Modern cryptosystems allow the use of operations in prime fields with special kinds of moduli that can speed up the prime field operations: multiplication, squaring, and exponentiation. The optimizations take into account the CPU architecture and the multiplicity of the modulus degree relative to the machine word width. As examples, the paper shows adapted modular reduction algorithms hard-coded for modern CPUs for the special pseudo-Mersenne primes used in the MAC algorithm Poly1305, the electronic signature algorithm EdDSA, and the short message encryption algorithm DSTU 9041. These algorithms were implemented in software on both 32-bit and 64-bit platforms and compared with the Barrett modular reduction algorithm for different pseudo-Mersenne and generalized-Mersenne moduli. Timings for the proposed and Barrett algorithms for different moduli are presented and discussed.
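
For the Poly1305 prime 2^130 - 5, pseudo-Mersenne reduction replaces division with shift-and-add: since 2^130 ≡ 5 (mod p), the high part of a value can be folded back as hi * 5 + lo. The sketch below shows the idea with Python big integers; the paper's versions are hard-coded on 32/64-bit machine words rather than arbitrary-precision arithmetic.

```python
P1305 = (1 << 130) - 5  # the pseudo-Mersenne prime used by Poly1305

def reduce_p1305(x):
    """Reduce x mod 2^130 - 5 without division: fold the high bits back as hi*5 + lo."""
    while x >> 130:
        x = (x >> 130) * 5 + (x & ((1 << 130) - 1))
    # result is now below 2^130 < 2*p, so at most one conditional subtraction remains
    if x >= P1305:
        x -= P1305
    return x
```

A couple of fold iterations plus one conditional subtraction replace a full multi-word division, which is the source of the speedup over generic Barrett reduction.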

Forecasting Stock Market Trend using Machine Learning Algorithms with Technical Indicators

By Partho Protim Dey, Nadia Nahar, B M Mainul Hossain

DOI: https://doi.org/10.5815/ijitcs.2020.03.05, Pub. Date: 8 Jun. 2020

Stock market prediction is the process of trying to determine stock trends based on analysis of historical data. However, the stock market is subject to rapid changes and is very difficult to predict because of its dynamic and unpredictable nature. The main goal of this paper is to present a model that can predict stock market trends. The model is implemented with machine learning algorithms using eleven technical indicators, and it is trained and tested on published stock data obtained from the DSE (Dhaka Stock Exchange, Bangladesh). The empirical results reveal the effectiveness of machine learning techniques, with maximum accuracies of 86.67%, 64.13% and 69.21% for "today", "tomorrow" and "day_after_tomorrow" respectively.
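
The abstract does not list its eleven technical indicators, so the helper below is purely illustrative: the simple moving average (SMA) is one of the most common indicators of this kind, computed as the mean of the last `window` closing prices at each step.

```python
def sma(prices, window):
    """Simple moving average; None until enough price history exists."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out
```

Indicator series like this one, rather than raw prices, typically form the feature vectors fed to the machine learning classifiers.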

Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik Pulak Chandra Bhowmik U. A. Md. Ehsan Ali Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which eventually can lead the fetus towards serious health problems. However, early detection of these risks can save both the invaluable life of infants and mothers. Cardiotocography (CTG) data provides sophisticated information by monitoring the heart rate signal of the fetus, is used to predict the potential risks of fetal wellbeing and for making clinical conclusions. This paper proposed to analyze the antepartum CTG data (available on UCI Machine Learning Repository) and develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL considers the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also endeavors to apply distinct machine learning algorithmic techniques on the CTG dataset and determine their performances. The Stacking EL technique, in this paper, involves four tree-based machine learning algorithms, namely, Random Forest classifier, Decision Tree classifier, Extra Trees classifier, and Deep Forest classifier as base learners. The CTG dataset contains 21 features, but only 10 most important features are selected from the dataset with the Chi-square method for this experiment, and then the features are normalized with Min-Max scaling. Following that, Grid Search is applied for tuning the hyperparameters of the base algorithms. Subsequently, 10-folds cross validation is performed to select the meta learner of the EL classifier model. However, a comparative model assessment is made between the individual base learning algorithms and the EL classifier model; and the finding depicts EL classifiers’ superiority in fetal health risks prediction with securing the accuracy of about 96.05%. Eventually, this study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve models’ accuracy and reduce the error rate.

[...] Read more.
Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management, which has advanced to the point where anyone can access documents in a matter of seconds. The importance of document management systems as a necessity in an organization's workplace environment cannot be overstated. Data were collected through interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures. The development followed the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was built with MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The system evaluation showed a successful outcome: after using the system, respondents' satisfaction with it was 96.60%, indicating that the document system was regarded as adequate enough to meet the specified requirements when users (secretaries and departmental personnel) used it. Results also showed that the developed system yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly assist in holding and managing a substantial portion of an organization's knowledge assets, which include documents and other associated items.

[...] Read more.
Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani, James I. Obuhuma, Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real time. These systems hold large amounts of information and hence require secure authentication. Authentication in this case refers to the process of verifying an entity's or device's identity to allow it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. A password-based authentication approach has weaknesses that can be easily compromised. Cyber-attacks seeking access to these ERP systems have become common in institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks that targeted authentication vulnerabilities, resulting in damage to the institutions' reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication. The study aimed at developing and validating a multi-factor authentication prototype to improve ERP systems' security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art technology being adopted to strengthen systems' authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group. The dependent variable for the study was limited to security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have a higher impact on system security. These three variables hence acted as the basis for the proposed multi-factor authentication framework for improved ERP systems security.
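A "something the user has" factor is commonly realized with time-based one-time passwords. As a generic illustration (not the authors' prototype, whose internal design is not detailed in the abstract), a minimal RFC 6238 TOTP generator using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of elapsed time steps."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: this shared secret at T=59s yields "94287082".
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

In an MFA flow, a code like this is verified server-side as a second step after the username/password check.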

[...] Read more.
Incorporating Preference Changes through Users’ Input in Collaborative Filtering Movie Recommender System

By Abba Almu, Aliyu Ahmad, Abubakar Roko, Mansur Aliyu

DOI: https://doi.org/10.5815/ijitcs.2022.04.05, Pub. Date: 8 Aug. 2022

The usefulness of a collaborative filtering recommender system is affected by its ability to capture users' preference changes on the recommended items during the recommendation process. This makes it easy for the system to satisfy users' interests over time, providing good, high-quality recommendations. The existing system studied fails to solicit user input on the recommended items and is also unable to incorporate users' preference changes over time, which leads to poor-quality recommendations. In this work, an enhanced movie recommender system that recommends movies to users is presented to improve the quality of recommendations. The system solicits users' input to create user profiles. It then incorporates a set of new features (such as age and genre) to predict users' preference changes over time, enabling it to recommend movies to users based on their new preferences. An experimental study conducted on the Netflix and MovieLens datasets demonstrated that, compared to the existing work, the proposed work improved the recommendation results based on the Precision and RMSE values obtained, which in turn returns good recommendations to users.
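The collaborative filtering core can be sketched as user-based filtering with cosine similarity and a similarity-weighted average; the ratings below are hypothetical, and the paper's additional profile features (age, genre) are not modeled here:

```python
import math

# Hypothetical user -> {movie: rating} profiles.
ratings = {
    "alice": {"Heat": 5, "Up": 3, "Jaws": 4},
    "bob":   {"Heat": 4, "Up": 2, "Jaws": 5, "Big": 3},
    "carol": {"Up": 5, "Big": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dictionaries."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[m] * v[m] for m in common)
    den = (math.sqrt(sum(r * r for r in u.values()))
           * math.sqrt(sum(r * r for r in v.values())))
    return num / den

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for `movie`."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or movie not in theirs:
            continue
        s = cosine(ratings[user], theirs)
        num += s * theirs[movie]
        den += abs(s)
    return num / den if den else None

print(predict("alice", "Big"))   # weighted blend of bob's and carol's ratings
```

Re-soliciting user input, as the paper proposes, amounts to refreshing these profiles so the similarities track current preferences.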

[...] Read more.
A Fast Topological Parallel Algorithm for Traversing Large Datasets

By Thiago Nascimento Rodrigues

DOI: https://doi.org/10.5815/ijitcs.2023.01.01, Pub. Date: 8 Feb. 2023

This work presents a parallel implementation of a graph-generating algorithm designed to be straightforwardly adapted to traverse large datasets. The new approach has been validated in a correlated scenario known as the word ladder problem. The new parallel algorithm induces the same topological structure proposed by its serial version and also builds the shortest path between any pair of words to be connected by a ladder of words. The implemented parallelism paradigm is Multiple Instruction Stream - Multiple Data Stream (MIMD), and the test suite comprises 23 word-ladder instances whose intermediate words were extracted from a dictionary of 183,719 words (the dataset). The word morph quality (the shortest path between two input words) and the word morph performance (CPU time) were evaluated against a serial implementation of the original algorithm. The proposed parallel algorithm generated the optimal solution for each pair of words tested; that is, the minimum word ladder connecting an initial word to a final word was found. Thus, there was no negative impact on the quality of the solutions compared with those obtained through the serial ANG algorithm. However, there was an outstanding improvement in the CPU time required to build the word ladder solutions: the time improvement was up to 99.85%, and speedups greater than 2.0X were achieved with the parallel algorithm.
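As background on the problem being parallelized, a serial breadth-first-search baseline for the word ladder problem (a sketch, not the ANG algorithm itself) can be written as:

```python
from collections import deque
from string import ascii_lowercase

def word_ladder(start, goal, dictionary):
    """Shortest ladder of one-letter mutations from `start` to `goal`.

    BFS guarantees the first ladder reaching `goal` is a minimum one.
    """
    words = set(dictionary) | {goal}
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):          # mutate each position in turn
            for c in ascii_lowercase:
                nxt = word[:i] + c + word[i + 1:]
                if nxt in words and nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None                             # no ladder exists

print(word_ladder("cold", "warm", ["cord", "card", "ward"]))
```

The parallel version distributes this frontier expansion across workers (MIMD), which is where the reported CPU-time gains come from.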

[...] Read more.
Detecting and Preventing Common Web Application Vulnerabilities: A Comprehensive Approach

By Najla Odeh Sherin Hijazi

DOI: https://doi.org/10.5815/ijitcs.2023.03.03, Pub. Date: 8 Jun. 2023

Web applications are becoming very important in our lives, as many sensitive processes depend on them. Therefore, it is critical to keep them safe and invulnerable to malicious attacks. Most studies focus on ways to detect these attacks individually. In this study, we develop a new vulnerability system to detect and prevent vulnerabilities in web applications, with multiple functions to deal with several recurring vulnerabilities. The proposed system provides detection and prevention of four types of vulnerabilities: SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. We investigated how each type of vulnerability works, then the process of detecting it, and finally provided prevention for it. This achieved three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through a practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this work lies in its potential to enhance the security of online systems and reduce the risk of data breaches.
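Detection of one of the four vulnerability classes, SQL injection, is often signature-based; the patterns below are illustrative assumptions, not the rules used by the proposed system:

```python
import re

# Hypothetical SQL-injection signatures. A production system would layer
# many defenses (input validation, prepared statements, WAF rules).
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # UNION-based injection
    re.compile(r"(?i)\bor\b\s+\d+\s*=\s*\d+"),  # tautology, e.g. OR 1=1
    re.compile(r"(--|#|/\*)"),                  # SQL comment sequences
]

def looks_like_sqli(value):
    """Flag a request parameter that matches any known injection signature."""
    return any(p.search(value) for p in SQLI_PATTERNS)

print(looks_like_sqli("1 OR 1=1 --"))   # flagged
print(looks_like_sqli("john.doe"))      # clean
```

Prevention, as opposed to detection, ultimately relies on parameterized queries so that user input is never interpreted as SQL.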

[...] Read more.
A Trust Management System for the Nigerian Cyber-health Community

By Ifeoluwani Jenyo, Elizabeth A. Amusan, Justice O. Emuoyibofarhe

DOI: https://doi.org/10.5815/ijitcs.2023.01.02, Pub. Date: 8 Feb. 2023

Trust is a basic requirement for the acceptance and adoption of new services related to health care and is therefore vital in ensuring that the integrity of patient information shared among multiple care providers is preserved and that no one has tampered with it. The cyber-health community in Nigeria is in its infancy, with health care systems and services being mostly fragmented, disjointed, and heterogeneous, with strong local autonomy and distribution among several healthcare providers' platforms. There is a need for a trust management structure that guarantees privacy and confidentiality to mitigate vulnerabilities to privacy theft. In this paper, we developed an efficient Trust Management System that hybridizes Real-Time Integrity Check (RTIC) and Dynamic Trust Negotiation (DTN), premised on the Confidentiality, Integrity, and Availability (CIA) model of information security. This was achieved through the design and implementation of an indigenous and generic architectural framework and model for a secured Trust Management System, with the Advanced Encryption Standard (AES-256) algorithm used to secure health records during transmission. The developed system achieved a reliability score of 0.97, an accuracy of 91.30%, and an availability of 96.52%.

[...] Read more.
Development of IoT Cloud-based Platform for Smart Farming in the Sub-saharan Africa with Implementation of Smart-irrigation as Test-Case

By Supreme A. Okoh, Elizabeth N. Onwuka, Bala A. Salihu, Suleiman Zubairu, Peter Y. Dibal, Emmanuel Nwankwo

DOI: https://doi.org/10.5815/ijitcs.2023.02.01, Pub. Date: 8 Apr. 2023

The UN Department of Economic and Social Affairs predicted that the world population will increase by 2 billion by 2050, with over 50% of that growth coming from Sub-Saharan Africa (SSA). Considering the level of poverty and food insecurity in the region, there is an urgent need for a sustainable increase in agricultural produce. However, the farming approach in the region is primarily traditional, characterized by high labor costs, low production, and under- or oversupply of farm inputs. All these factors make farming unappealing to many. The use of digital technologies such as broadband, the Internet of Things (IoT), cloud computing, and big data analytics promises improved returns on agricultural investments and could make farming appealing even to the youth. However, the initial cost of smart farming can be high, so the development of a dedicated IoT cloud-based platform is imperative; farmers could then subscribe and have their farms managed on the platform. It should be noted that the majority of farmers in SSA are smallholders who are poor, uneducated, and live in rural areas, yet they produce about 80% of the food. They mostly use 2G phones, which are not internet-enabled. These peculiarities must be factored into the design of any functional IoT platform that would serve this group. This paper presents the development of such a platform, which was tested with smart irrigation of maize crops in a testbed. Besides the convenience provided by the smart system, it recorded irrigation water savings of over 36% compared with the control method, which represents how irrigation is done traditionally.
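The decision logic of a smart-irrigation test case like this can be sketched as a simple hysteresis rule; the moisture thresholds below are hypothetical, not taken from the paper:

```python
def irrigation_decision(soil_moisture_pct, low=30.0, high=60.0, valve_open=False):
    """Hysteresis controller for an irrigation valve.

    Open the valve when soil moisture drops below `low`, close it once
    moisture exceeds `high`, and otherwise keep the current valve state
    (avoiding rapid on/off cycling near a single threshold).
    """
    if soil_moisture_pct < low:
        return True     # too dry: open the valve
    if soil_moisture_pct > high:
        return False    # wet enough: close the valve
    return valve_open   # in the dead band: hold state

# A sensor reading of 20% soil moisture triggers irrigation.
print(irrigation_decision(20.0))
```

Running this rule on cloud-side sensor readings, rather than irrigating on a fixed schedule, is what yields the kind of water savings the paper reports.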

[...] Read more.
Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti, Owusu Nyarko-Boateng, Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in the k-fold cross-validation training technique for machine learning predictive models is an essential element that impacts the model's performance. The right choice of k results in better accuracy, while a poorly chosen value might hurt the model's performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor from very high variance; however, there is no formal rule. To the best of our knowledge, few experimental studies have attempted to investigate the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15, and 20) on the validation performance of four well-known machine learning algorithms: Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT), and K-Nearest Neighbours (KNN). It was observed that the value of k and the model validation performance differ from one machine learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area under the curve, with lower computational cost than k = 10, across most of the algorithms. We discuss the study outcomes in detail and outline some guidelines to help beginners in the machine learning field select the best k value and machine learning algorithm for a given task.
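The index partitioning behind k-fold cross-validation, for any of the k values studied, can be sketched in pure Python:

```python
def kfold_indices(n, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    The n samples are split into k folds of near-equal size; each fold
    serves once as the held-out test set while the rest form the training set.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# With n=10 samples and k=7 folds, every sample is tested exactly once.
for train, test in kfold_indices(10, 7):
    print(len(train), test)
```

Larger k means more training runs (higher cost) on larger training sets (lower bias), which is the trade-off the paper measures across k = 3 to 20.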

[...] Read more.
Comparative Analysis of Multiple Sequence Alignment Tools

By Eman M. Mohamed, Hamdy M. Mousa, Arabi E. Keshk

DOI: https://doi.org/10.5815/ijitcs.2018.08.04, Pub. Date: 8 Aug. 2018

The perfect alignment of three or more sequences of protein, RNA, or DNA is a very difficult task in bioinformatics. There are many techniques for aligning multiple sequences. Many techniques maximize speed without concern for the accuracy of the resulting alignment, while others maximize accuracy without concern for speed. Reducing memory and execution-time requirements while increasing the accuracy of multiple sequence alignment on large-scale datasets is the vital goal of any technique. This paper introduces a comparative analysis of the most well-known programs (CLUSTAL-OMEGA, MAFFT, PROBCONS, KALIGN, RETALIGN, and MUSCLE). Benchmark protein datasets are used for testing and evaluating the programs, with execution time and alignment quality as the two key metrics. The obtained results show that no single MSA tool can always achieve the best alignment for all datasets.
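All of the compared tools build on pairwise alignment scoring. As background, a minimal Needleman-Wunsch global alignment score (the scoring parameters here are illustrative, not those used by any of the benchmarked programs) looks like:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global pairwise alignment score via dynamic programming.

    score[i][j] is the best score aligning a[:i] with b[:j]; each cell
    takes the max of a diagonal (match/mismatch) move and two gap moves.
    """
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # aligning a prefix against nothing
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

print(needleman_wunsch("GAT", "GATT"))   # three matches plus one gap
```

MSA tools extend this idea (typically via progressive or consistency-based heuristics) because exact multiple alignment is computationally infeasible for many sequences.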

[...] Read more.