Evaluation of Different Machine Learning Methods for Caesarean Data Classification


Author(s)

O.S.S. Alsharif 1,*, K.M. Elbayoudi 1, A.A.S. Aldrawi 1, K. Akyol 2

1. Department of Material Science and Engineering, Kastamonu University, Kastamonu, Turkey

2. Department of Computer Engineering, Kastamonu University, Kastamonu, Turkey

* Corresponding author.

DOI: https://doi.org/10.5815/ijieeb.2019.05.03

Received: 17 Jul. 2019 / Revised: 28 Jul. 2019 / Accepted: 5 Aug. 2019 / Published: 8 Sep. 2019

Index Terms

Caesarean data, machine learning, Decision Tree, K-Nearest Neighbours, Naïve Bayes, Support Vector Machine, Random Forest Classifier.

Abstract

Recently, a new dataset on caesarean deliveries has been introduced. In this paper, the caesarean data was classified with five different algorithms: Support Vector Machine, K Nearest Neighbours, Naïve Bayes, Decision Tree Classifier, and Random Forest Classifier. The dataset was retrieved from the University of California website. The main objective of this study is to compare the performances of the selected algorithms. The study showed that the best accuracy was obtained with Naïve Bayes, while the highest sensitivity was obtained with the Support Vector Machine.
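To make the comparison concrete, below is a minimal sketch in Python with scikit-learn of how such a five-classifier comparison could be run. It is an illustration, not the authors' exact pipeline: the file name caesarean.csv, the target column Caesarian, the 80/20 train/test split, and the default hyperparameters are all assumptions.

```python
# Minimal sketch: comparing the five classifiers from the paper on a
# caesarean dataset. Assumes the data has been saved locally as
# "caesarean.csv" with a binary target column "Caesarian" (file name
# and column name are illustrative, not taken from the paper).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("caesarean.csv")
X = df.drop(columns=["Caesarian"])
y = df["Caesarian"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

models = {
    "Support Vector Machine": SVC(),
    "K Nearest Neighbours": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Sensitivity is the recall of the positive (caesarean) class.
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
          f"sensitivity={recall_score(y_test, y_pred):.3f}")
```

Accuracy and sensitivity (recall of the positive class) are the two metrics the abstract reports, which is why both are printed for each model.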

Cite This Paper

O.S.S. Alsharif, K.M. Elbayoudi, A.A.S. Aldrawi, K. Akyol, "Evaluation of Different Machine Learning Methods for Caesarean Data Classification", International Journal of Information Engineering and Electronic Business (IJIEEB), Vol. 11, No. 5, pp. 19-23, 2019. DOI: 10.5815/ijieeb.2019.05.03
