Workplace: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”
E-mail: lyushenkol@gmail.com
Research Interests: Software Engineering, Computational Learning Theory, Mathematical Analysis, Mathematical Software
Biography
Lesya Lyushenko is an associate professor at the Faculty of Applied Mathematics of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” and holds a Ph.D. in mathematical simulation in scientific research. Her dissertation is titled “Development of the mathematical models of mainline power systems for on-line control automatization” (Ukrainian National Academy of Sciences, Institute of Simulation Problems in Power Engineering). Her research interests include software engineering, mathematical modeling, machine learning, and IT start-up projects. She has authored research studies published in national and international journals as well as in conference proceedings.
By Zhengbing Hu, Mykhailo Ivashchenko, Lesya Lyushenko, Dmytro Klyushnyk
DOI: https://doi.org/10.5815/ijmecs.2021.03.02, Pub. Date: 8 Jun. 2021
One of the trends in information technologies is implementing neural networks in modern software packages [1]. A distinctive feature of neural networks is that they cannot be programmed directly; they must be trained. This makes ensuring sufficient speed and quality of neural network training procedures an urgent task. The training process can differ significantly depending on the problem. Verification methods that correspond to the task’s constraints are used to assess the training results. Such methods provide an estimate over the entire cardinal set of examples but do not allow estimating which subset of examples causes a significant error. As a consequence, a neural network may fail to perform with a given set of hyperparameters, and training a new one is time-consuming.
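To make this limitation concrete, the following minimal sketch (illustrative only, not code from the paper; the data, the classifier, and the domain partition are all hypothetical) contrasts an aggregate verification metric with a per-subset view that exposes where the errors concentrate:

```python
# Sketch: an aggregate verification metric hides *where* in the example
# space the errors concentrate; a per-subset view localizes them.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test set: inputs in [0, 1), binary labels, and a classifier
# that systematically fails on the subset x >= 0.8.
x = rng.random(10_000)
y_true = (x > 0.5).astype(int)
y_pred = np.where(x >= 0.8, 1 - y_true, y_true)  # defect localized in [0.8, 1)

# Aggregate verification: one number for the whole set of examples.
print(f"overall error: {np.mean(y_pred != y_true):.3f}")  # ~0.200

# Per-subset view: partition the domain and measure the error in each cell,
# exposing the subset that causes the significant error.
bins = np.linspace(0.0, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (x >= lo) & (x < hi)
    err = np.mean(y_pred[mask] != y_true[mask])
    print(f"[{lo:.1f}, {hi:.1f}): error = {err:.3f}")
```

The aggregate figure (roughly 20% error) says nothing about the defect’s location, while the partitioned view isolates it to a single cell of the example space.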
On the other hand, existing empirical methods for assessing neural network training use discrete sets of examples. With this approach, it is impossible to claim that the network is suitable for classification over the whole cardinal set of examples.
This paper proposes a criterion for assessing the quality of classification results. The criterion is formed by describing the training states of the neural network; each state is specified by the correspondence between the set of errors and the function range that represents a cardinal set of test examples. Using the criterion makes it possible to track the network’s classification defects and mark them as safe or unsafe. As a result, it becomes possible to formally assess how the training and validation data sets must be altered to improve the network’s performance, whereas existing verification methods provide no information about which part of the dataset causes the network to underperform.
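As a rough schematic of this idea (an assumption-laden sketch, not the paper’s formal criterion: the error threshold, the domain partition, and the notion of a “critical region” are hypothetical parameters introduced here for illustration), a defect can be recorded per cell of the example space and marked safe or unsafe depending on whether it overlaps a region that the task’s constraints treat as critical:

```python
# Schematic sketch: flag a cell of the example space as a "classification
# defect" when its error rate exceeds a threshold, and mark it safe or
# unsafe by whether it overlaps a (hypothetical) critical region.
import numpy as np

def mark_defects(x, y_true, y_pred, bins, err_threshold=0.05,
                 critical=lambda lo, hi: hi > 0.9):
    """Return (interval, error, label) for each non-empty cell of the partition."""
    report = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (x >= lo) & (x < hi)
        if not mask.any():
            continue
        err = float(np.mean(y_pred[mask] != y_true[mask]))
        if err <= err_threshold:
            label = "ok"
        elif critical(lo, hi):
            label = "unsafe defect"  # defect overlaps the critical region
        else:
            label = "safe defect"    # defect outside the critical region
        report.append(((lo, hi), err, label))
    return report

# Example usage with synthetic data mirroring the sketch above:
rng = np.random.default_rng(0)
x = rng.random(10_000)
y_true = (x > 0.5).astype(int)
y_pred = np.where(x >= 0.8, 1 - y_true, y_true)
for (lo, hi), err, label in mark_defects(x, y_true, y_pred, np.linspace(0, 1, 6)):
    print(f"[{lo:.1f}, {hi:.1f}): {err:.3f} -> {label}")
```

Cells labelled as unsafe defects would then indicate which part of the training and validation data should be altered or augmented, in the spirit of the assessment the abstract describes.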