Yousef S. Alsenani

Work place: Computer Skills Unit, King Abdulaziz University, Jeddah, 21571, Saudi Arabia

E-mail: yalsenani@kau.edu.sa


Research Interests: Cloud Computing

Biography

Yousef S. Alsenani was born in Saudi Arabia. He received his Master's degree in Advanced Computer Science from California Lutheran University, Thousand Oaks, USA, in 2013. He works as a lecturer at King Abdulaziz University, Saudi Arabia, and is currently a PhD student at Southern Illinois University Carbondale, USA. His current research interest is cloud computing.

Author Articles
Inverse Matrix using Gauss Elimination Method by OpenMP

By Madini O. Alassafi, Yousef S. Alsenani

DOI: https://doi.org/10.5815/ijitcs.2016.02.05, Pub. Date: 8 Feb. 2016

OpenMP is an application program interface that can be used to explicitly direct multi-threaded, shared-memory parallelism. The OpenMP specification is developed jointly by interested parties from the hardware and software industry, government, and academia. OpenMP need not be implemented identically by all vendors, and by itself it is not intended for distributed-memory parallel systems. There are multiple approaches to inverting a matrix. The proposed approach computes the lower and upper triangular factors of an LU decomposition via the Gauss elimination method, and the computation is parallelized using OpenMP. The main goal of the proposed technique is to analyze the time taken for different matrix sizes; we ran the computation with 1 thread, 2 threads, 4 threads, and 8 threads and compared the results against each other to measure the efficiency of the parallelization. The results show that raising the number of threads improves performance (less time is required); with 8 threads we obtained roughly a 64% performance gain. Moreover, as the matrix size increases, the efficiency of parallelization also increases, which is evident from the time difference between the serial and parallel code: more of the computation is done in parallel, so efficiency is high. Schedule types in OpenMP behave differently; we used the static, dynamic, and guided schemes.

[...] Read more.
Other Articles