International Journal of Information Technology and Computer Science (IJITCS)

IJITCS Vol. 11, No. 8, Aug. 2019

Cover page and Table of Contents: PDF (size: 230KB)

Table Of Contents

REGULAR PAPERS

An Implementation of the Finite Differences Method for the Two-Dimensional Rectangular Cooling Fin Problem

By Thiago N. Rodrigues

DOI: https://doi.org/10.5815/ijitcs.2019.08.01, Pub. Date: 8 Aug. 2019

The transport, or advection-diffusion-reaction, equation is a well-known partial differential equation employed to model several types of flux problems. The cooling fin problem is a particular case of this equation. This work presents a straightforward model of the two-dimensional rectangular cooling fin problem. The model is based on the finite differences numerical method, and an efficient implementation was developed in a high-level mathematical programming language. The accuracy was evaluated at different mesh granularities, and two distinct boundary conditions were compared. In the first, only prescribed temperatures are assumed at the four tips of the domain. In the second scenario, a heat flux is assumed at one tip of a fin with the same geometrical shape. The solutions produced by the algorithm accurately depict the temperature over the whole fin surface. Furthermore, the algorithm achieves relevant performance for meshes of up to 4257 points, where the CPU time was about 33 seconds.
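As a rough sketch of the kind of finite-difference scheme described above, the snippet below solves a steady two-dimensional fin equation with prescribed (Dirichlet) temperatures on all four edges, corresponding to the paper's first boundary-condition scenario. The grid size, fin coefficient and temperature values are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def solve_fin(nx=33, ny=33, Lx=1.0, Ly=1.0, m2=4.0, T_inf=25.0,
              T_base=100.0, T_edge=25.0, tol=1e-6, max_iter=50000):
    """Steady 2-D fin equation T_xx + T_yy - m2*(T - T_inf) = 0 on a
    rectangle, discretized with central differences and solved by Jacobi
    iteration.  All four edges carry prescribed temperatures; every
    number here is illustrative."""
    dx, dy = Lx / (nx - 1), Ly / (ny - 1)
    T = np.full((ny, nx), T_inf, dtype=float)
    T[0, :] = T[-1, :] = T[:, -1] = T_edge   # three edges near ambient
    T[:, 0] = T_base                          # fin base held at a hot temperature
    denom = 2.0 / dx**2 + 2.0 / dy**2 + m2
    for _ in range(max_iter):
        Tn = T.copy()
        # Central-difference update of every interior node.
        Tn[1:-1, 1:-1] = ((T[1:-1, :-2] + T[1:-1, 2:]) / dx**2 +
                          (T[:-2, 1:-1] + T[2:, 1:-1]) / dy**2 +
                          m2 * T_inf) / denom
        if np.max(np.abs(Tn - T)) < tol:
            return Tn
        T = Tn
    return T

T = solve_fin()
print("temperature range on the fin:", T.min(), "-", T.max())
```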

A Parallel Evolutionary Search for Shortest Vector Problem

By Gholam Reza Moghissi, Ali Payandeh

DOI: https://doi.org/10.5815/ijitcs.2019.08.02, Pub. Date: 8 Aug. 2019

The security of many lattice-based cryptographic primitives rests on the assumed hardness of approximating the shortest vector problem (SVP) to within a polynomial factor in polynomial time, so solving this problem breaks these primitives. In this paper, we investigate the suitability of combining the best techniques in general search/optimization, lattice theory and parallelization technologies into a single algorithm for solving the SVP. Our proposed algorithm repeats three steps in a loop: an evolutionary search (a parallelized genetic algorithm), a brute-force stage of many tiny full enumerations (playing the role of numerous local searches with random starting points over the lattice vectors), and a single main enumeration. The test results show that our proposed algorithm outperforms LLL reduction but may fall behind the BKZ variants (except for some very small block sizes). The main drawback of these test results is the insufficient tuning of the various parameters, which prevents the potential strength of our contribution from being shown. We therefore list the main problems and weaknesses of our work so that further studies can obtain clearer and better results. We also propose a pure genetic-algorithm model with a more solid and stable design for the SVP that future work can build on.
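As a loose illustration of the evolutionary-search step only (not the paper's tuned parallel algorithm, which also interleaves enumeration stages), the sketch below runs a toy genetic algorithm over integer coefficient vectors of a lattice basis, using the Euclidean norm of the resulting lattice vector as the fitness. The basis, population size and genetic operators are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_short_vector(B, pop_size=60, coeff_bound=3, generations=200,
                    mutation_rate=0.1):
    """Toy genetic search for a short nonzero lattice vector.  A candidate
    is an integer coefficient vector x; the lattice vector is x @ B and
    the fitness is its Euclidean norm (lower is better)."""
    n = B.shape[0]
    pop = rng.integers(-coeff_bound, coeff_bound + 1, size=(pop_size, n))
    pop[np.all(pop == 0, axis=1), 0] = 1            # forbid the zero vector

    def norms(P):
        return np.linalg.norm(P @ B, axis=1)

    for _ in range(generations):
        pop = pop[np.argsort(norms(pop))]           # elitist sort by norm
        children = []
        while len(children) < pop_size // 2:
            a, b = pop[rng.integers(0, pop_size // 2, size=2)]
            cut = rng.integers(1, n)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < mutation_rate    # small integer mutations
            child[flip] += rng.integers(-1, 2, size=flip.sum())
            if np.any(child != 0):
                children.append(child)
        pop = np.vstack([pop[:pop_size - len(children)], children])
    fit = norms(pop)
    best = pop[np.argmin(fit)]
    return best @ B, float(fit.min())

# Small random integer basis standing in for a real lattice instance.
B = rng.integers(-20, 21, size=(8, 8))
vec, length = ga_short_vector(B)
print("shortest vector found:", vec, "norm:", round(length, 2))
```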

Ensemble Approach for Twitter Sentiment Analysis

By Dimple Tiwari, Nanhay Singh

DOI: https://doi.org/10.5815/ijitcs.2019.08.03, Pub. Date: 8 Aug. 2019

With the growth of social networks and online marketing websites, user blogs and reviews acquired from these sites have become useful for analysis and decision making about products, marketing, movies and more. Given the usefulness of social reviews, this data must be analysed carefully. Various techniques and methods are available that can analyse social information accurately, but a major issue with social media data is that it is unstructured and noisy, and this problem needs to be solved. This paper therefore proposes a framework that includes recent data preprocessing techniques, such as stemming, lemmatization and tokenization, to deal with this noise. After preprocessing, ensemble methods are applied to increase the accuracy of the baseline classification algorithms. First, Decision Tree, K-Neighbors and Naive Bayes classifiers are applied, which do not provide satisfactory accuracy; then boosting is applied through the AdaBoost method, which improves the accuracy of these classical classifiers. Finally, the proposed ensemble method, an ExtraTrees classifier derived from the bagging concept, is applied: it draws multiple samples from the training set and builds multiple random trees. Also known as extremely randomized trees, it provides an extremely refined view. The results show that the ExtraTrees bagging ensemble outperforms all the other techniques applied in this paper. Moreover, the novel preprocessing techniques produce more refined data, providing a clean and pure basis for the ensemble techniques and contributing to the improved accuracy of the applied methods.
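A minimal sketch of such a pipeline with scikit-learn, comparing the individual classifiers, AdaBoost and an ExtraTrees ensemble on a tiny hand-made corpus; the toy tweets, vectorizer settings and hyperparameters are illustrative assumptions, and the paper's stemming/lemmatization step is only indicated in a comment.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Tiny illustrative corpus; the paper works on real Twitter data.
tweets = ["loved the movie, brilliant acting", "great product, totally worth it",
          "what a fantastic day", "best phone I have ever owned",
          "terrible service, never again", "worst movie of the year",
          "this update completely broke my phone", "awful food and rude staff"] * 5
labels = [1, 1, 1, 1, 0, 0, 0, 0] * 5

models = {
    "decision_tree": DecisionTreeClassifier(),
    "k_neighbors":   KNeighborsClassifier(n_neighbors=3),
    "naive_bayes":   MultinomialNB(),
    "adaboost":      AdaBoostClassifier(n_estimators=50),
    "extra_trees":   ExtraTreesClassifier(n_estimators=100),
}

for name, clf in models.items():
    # TfidfVectorizer lower-cases and tokenizes; a stemmer or lemmatizer
    # like the paper's would plug in via its `tokenizer` argument.
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, tweets, labels, cv=5)
    print(f"{name:13s} accuracy = {scores.mean():.3f}")
```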

MeDevice: A Mobile-Based Diagnosis of Common Human Illnesses using Neuro-Fuzzy Expert System

By Johaira U. Lidasan, Martina P. Tagacay

DOI: https://doi.org/10.5815/ijitcs.2019.08.04, Pub. Date: 8 Aug. 2019

Fever is a sign that the body is trying to fight infection. It is usually accompanied by various symptoms that signal another illness or disease. Diagnosing it ahead of time is essential because human life is involved and because it determines what to do to get well. MeDevice is a mobile application that runs on Android devices; it allows the user to enter the levels of his/her symptoms and diagnoses the disease as influenza, dengue, chickenpox, malaria, typhoid fever, measles, hepatitis A or pneumonia, together with its details and first aid treatment. It aims to provide an efficient decision support platform that helps people with fever diagnose their disease and decide whether or not to seek medical attention, especially in developing countries such as the Philippines. The application is engineered with the knowledge base and inference method of a fuzzy logic expert system, combined with a backpropagation neural network and the gradient descent optimization algorithm to achieve the optimum error rate. This is essential for providing the application with a high accuracy rate, which was shown during testing of the application.
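A loose, illustrative sketch of the two ingredients mentioned above: triangular fuzzy membership functions that turn raw symptom levels into fuzzy degrees, and a small neural network trained by gradient descent (backpropagation) on those degrees. The symptom ranges, membership functions, labelling rule and network size are all hypothetical and not the application's actual knowledge base.

```python
import numpy as np

rng = np.random.default_rng(1)

def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(severity):
    """Map raw symptom severities (0-10) to low/medium/high degrees."""
    low = tri(severity, -1, 0, 5)
    med = tri(severity, 0, 5, 10)
    high = tri(severity, 5, 10, 11)
    return np.concatenate([low, med, high], axis=-1)

# Hypothetical training set: 4 symptom severities -> one of two classes.
X_raw = rng.uniform(0, 10, size=(200, 4))
y = (X_raw.sum(axis=1) > 20).astype(int)   # placeholder labelling rule

X = fuzzify(X_raw)                          # 12 fuzzy input features
Y = np.eye(2)[y]                            # one-hot targets

# One hidden layer, trained by plain gradient descent (backpropagation).
W1 = rng.normal(0, 0.5, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2));          b2 = np.zeros(2)
lr = 0.5
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    dP = (P - Y) * P * (1 - P) / len(X)     # squared-error output gradient
    dH = (dP @ W2.T) * H * (1 - H)          # backpropagated hidden gradient
    W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).argmax(1)
print("training accuracy:", (pred == y).mean())
```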

Application of an Enhanced Self-adapting Differential Evolution Algorithm to Workload Prediction in Cloud Computing

By M. A. Attia, M. Arafa, E. A. Sallam, M. M. Fahmy

DOI: https://doi.org/10.5815/ijitcs.2019.08.05, Pub. Date: 8 Aug. 2019

The demand for workload prediction approaches has recently increased in order to manage cloud resources, improve the performance of cloud services and reduce power consumption. The prediction accuracy of these approaches affects the cloud performance. In this application paper, we apply an enhanced variant of the differential evolution (DE) algorithm, named MSaDE, as a learning algorithm for the artificial neural network (ANN) model of cloud workload prediction. The ANN prediction model based on the MSaDE algorithm is evaluated over two benchmark datasets of workload traces, from the NASA server and the Saskatchewan server, at different look-ahead times. To show the improvement in accuracy achieved by training the ANN prediction model with the MSaDE algorithm, training is also performed with two other algorithms: the back propagation (BP) algorithm and the self-adaptive differential evolution (SaDE) algorithm. Comparisons are made in terms of the root mean squared error (RMSE) and the average root mean squared error (ARMSE) over all prediction intervals. The results show that the ANN prediction model based on the MSaDE algorithm predicts cloud workloads with higher accuracy than the other algorithms it is compared with.
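As a minimal sketch of the general approach, assuming a classic DE/rand/1/bin scheme rather than the paper's MSaDE variant, the code below evolves the weights of a tiny feed-forward network over a lagged workload series and reports the training RMSE. The synthetic series, network size and DE parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ann_forward(w, X, n_in, n_hid):
    """Tiny one-hidden-layer network whose weights live in the flat vector w."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rmse(w, X, y, n_in, n_hid):
    return np.sqrt(np.mean((ann_forward(w, X, n_in, n_hid) - y) ** 2))

def de_train(X, y, n_in, n_hid=6, pop_size=40, F=0.5, CR=0.9, gens=200):
    """Classic DE/rand/1/bin over the flattened ANN weights."""
    dim = n_in * n_hid + n_hid + n_hid + 1
    pop = rng.uniform(-1, 1, (pop_size, dim))
    fit = np.array([rmse(w, X, y, n_in, n_hid) for w in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                      # mutation
            cross = rng.random(dim) < CR                  # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f = rmse(trial, X, y, n_in, n_hid)
            if f < fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], float(fit.min())

# A synthetic periodic series stands in for the NASA/Saskatchewan traces.
t = np.arange(400)
series = np.sin(t / 12.0) + 0.1 * rng.standard_normal(t.size)
lags = 4                                                  # look-back window
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
weights, err = de_train(X, y, n_in=lags)
print("training RMSE:", round(err, 4))
```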

Performance Analysis of LT Codec Architecture Using Different Processor Templates

By S. M. Shamsul Alam

DOI: https://doi.org/10.5815/ijitcs.2019.08.06, Pub. Date: 8 Aug. 2019

The Luby Transform (LT) code plays a vital role in the binary erasure channel. This paper presents design techniques for implementing an LT codec using application-specific instruction set processor (ASIP) design tools. In ASIP design, a common approach to increasing processor performance is to boost the number of concurrent operations. Therefore, optimizations such as the input design strategy and the processor and compiler architectures are very useful for enhancing the performance of an application-specific processor. Using the Tensilica and OpenRISC processor design tools, this paper shows the response of LT codec architectures in terms of cycle counts and simulation time. The results show that the simulation speed of Tensilica is very high compared to the OpenRISC tool. Among the different configurations of the Tensilica tool, the proposed ConnXD2 design took 1 M cycles per second and 135.66 ms to execute the LT codec processor, and the XRC_D2MR configuration needed only 9 iterations to successfully decode the LT-encoded signal. Besides this, the OpenRISC tool took 142 K cycles and 6 ms to execute the LT encoder.
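For readers unfamiliar with LT codes, the sketch below illustrates the codec's logic in software: encoding by XOR-ing random subsets of source symbols and decoding by peeling. It is only an illustrative model, not the paper's ASIP implementation, and the degree distribution is a crude stand-in for the robust soliton distribution.

```python
import random

def lt_encode(source, n_packets, seed=0):
    """Toy LT encoder: each packet XORs a random subset of source symbols,
    with the subset size drawn from a simple degree distribution."""
    k = len(source)
    rnd = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        d = rnd.choice([1, 2, 2, 3, 3, 3, 4])     # crude degree distribution
        idx = set(rnd.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= source[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve packets with one unknown symbol
    and substitute the recovered values back into the remaining packets."""
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idx, val in packets:
            unresolved = idx - set(recovered)
            if len(unresolved) == 1:
                (i,) = unresolved
                for j in idx & set(recovered):
                    val ^= recovered[j]
                recovered[i] = val
                progress = True
    return recovered if len(recovered) == k else None

source = [random.randrange(256) for _ in range(16)]   # 16 byte-sized symbols
packets = lt_encode(source, 48)
decoded = lt_decode(packets, len(source))
ok = decoded is not None and [decoded[i] for i in range(16)] == source
print("decoded successfully:", ok)                    # succeeds with high probability
```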
