A Multiclass Approach to Estimating Software Vulnerability Severity Rating with Statistical and Word Embedding Methods



Author(s)

Hakan Kekül 1,2,*, Burhan Ergen 3, Halil Arslan 4

1. University of Fırat, Institute of Science, Elazığ, Turkey

2. Sivas Information Technology Technical High School, Diriliş Mahallesi Rüzgarli Sokak No 21, Sivas, Turkey

3. University of Fırat, Faculty of Engineering, Computer Engineering Department, Elazığ, Turkey

4. University of Sivas Cumhuriyet, Faculty of Engineering, Computer Engineering Department, Sivas, Turkey

* Corresponding author.

DOI: https://doi.org/10.5815/ijcnis.2022.04.03

Received: 28 Jan. 2022 / Revised: 1 Mar. 2022 / Accepted: 23 Mar. 2022 / Published: 8 Aug. 2022

Index Terms

Software Security, Software Vulnerability, Information Security, Text Analysis, Multiclass Classification

Abstract

The analysis and grading of software vulnerabilities is an important process that is currently carried out manually by experts, which introduces time delays, human error, and excessive cost. The final output of the vulnerability reports produced by these experts is a severity score and a severity rating, and the severity rating is the first and most important indicator of a software vulnerability. Only 20% of all vulnerabilities can actually be exploited, and the vast majority of exploitations take place within the first two weeks. It is therefore imperative to determine the severity rating without delay. Our proposed model combines statistical methods and deep learning-based word embedding methods from natural language processing with machine learning algorithms that perform multiclass classification. Bag of Words, Term Frequency-Inverse Document Frequency (TF-IDF), and N-gram methods were used as statistical feature-extraction techniques, while the Word2Vec, Doc2Vec, and FastText algorithms were included for deep learning-based word embedding. In the classification stage, the Naive Bayes, Decision Tree, K-Nearest Neighbors, Multi-Layer Perceptron, and Random Forest algorithms, all capable of multiclass classification, were preferred. In this respect, the proposed model is a hybrid method. The data set used is open to the public and is the most reliable in the field. The results obtained in our study are quite promising: by assisting the experts in this field, the model can speed up these procedures. In addition, our study is one of the first to cover the latest versions of the data set and the scoring systems involved.
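
To make the pipeline described in the abstract concrete, the sketch below illustrates this kind of hybrid approach, assuming the scikit-learn and gensim libraries; the toy vulnerability descriptions, severity labels, and parameter values are hypothetical placeholders, not the NVD data or the exact configuration used in the paper.

# Minimal sketch of a hybrid severity-rating pipeline of the kind the abstract
# describes (assumed libraries: scikit-learn, gensim). The descriptions, labels,
# and parameters below are hypothetical placeholders, not the paper's data or settings.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score

# Placeholder vulnerability descriptions and severity ratings (toy data).
descriptions = [
    "buffer overflow allows remote code execution",
    "cross site scripting in the login form",
    "sql injection in the search parameter",
    "improper input validation leads to denial of service",
] * 25  # repeated so that 5-fold cross-validation has enough samples
labels = ["CRITICAL", "MEDIUM", "HIGH", "MEDIUM"] * 25

# Statistical features: Bag of Words, plus TF-IDF over word uni- and bigrams (N-grams).
bow_features = CountVectorizer().fit_transform(descriptions)
tfidf_features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(descriptions)

# Embedding features: average the Word2Vec vectors of the tokens in each description.
tokenized = [text.split() for text in descriptions]
w2v = Word2Vec(sentences=tokenized, vector_size=100, min_count=1, seed=42)
embedding_features = np.array(
    [np.mean([w2v.wv[token] for token in doc], axis=0) for doc in tokenized]
)

# Multiclass classification: one of the classifiers named in the abstract,
# evaluated with 5-fold cross-validation on each feature representation.
classifier = RandomForestClassifier(n_estimators=100, random_state=42)
for name, features in [
    ("Bag of Words", bow_features),
    ("TF-IDF + N-gram", tfidf_features),
    ("Word2Vec", embedding_features),
]:
    scores = cross_val_score(classifier, features, labels, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

Doc2Vec and FastText features, as well as the other classifiers named in the abstract (Naive Bayes, Decision Tree, K-Nearest Neighbors, Multi-Layer Perceptron), can be substituted into the same loop in the same way.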

Cite This Paper

Hakan KEKÜL, Burhan ERGEN, Halil ARSLAN, "A Multiclass Approach to Estimating Software Vulnerability Severity Rating with Statistical and Word Embedding Methods", International Journal of Computer Network and Information Security (IJCNIS), Vol.14, No.4, pp.27-42, 2022. DOI: 10.5815/ijcnis.2022.04.03
