Geraldine Bessie Amali. D

Workplace: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India

E-mail: geraldine.amali@vit.ac.in

Research Interests: Computer systems and computational processes, Computational Learning Theory, Data Structures and Algorithms, Combinatorial Optimization

Biography

Geraldine Bessie Amali received her M.Tech. degree in Computer Science and Engineering from VIT University in 2014, where she was awarded the gold medal for graduating at the top of her batch. She also holds a university rank in the Master of Computer Applications programme from Bharathidasan University. She is currently an Assistant Professor at VIT University and has more than 7 years of experience teaching computer science. Her research interests include machine learning and biologically inspired optimization algorithms.

Author Articles
A New Quantum Tunneling Particle Swarm Optimization Algorithm for Training Feedforward Neural Networks

By Geraldine Bessie Amali. D, Dinakaran. M

DOI: https://doi.org/10.5815/ijisa.2018.11.07, Pub. Date: 8 Nov. 2018

In this paper, a new Quantum Tunneling Particle Swarm Optimization (QTPSO) algorithm is proposed and applied to the training of feedforward Artificial Neural Networks (ANNs). In the classical Particle Swarm Optimization (PSO) algorithm, the value of the cost function at the location of each particle's personal best solution can never increase, which can significantly reduce the explorative ability of the entire swarm. This paper proposes a new PSO algorithm in which the personal best solution of each particle is allowed to tunnel through hills in the cost function, analogous to the tunneling effect in quantum physics. In quantum tunneling, a particle with insufficient energy to cross a potential barrier can still cross it with a small probability that decreases exponentially with the barrier length. Introducing the quantum tunneling effect allows particles in the PSO algorithm to escape from local minima, thereby increasing the explorative ability of the algorithm and preventing premature convergence to local minima. The proposed algorithm significantly outperforms three state-of-the-art PSO variants on a majority of benchmark neural network training problems.
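
The paper's exact update rules are not given on this page, so the following is only a minimal illustrative sketch of the idea described in the abstract: a standard PSO loop in which a worse candidate may still replace a particle's personal best with a probability that decays exponentially with the size of the jump (a stand-in for the barrier length). The function names, the decay parameter gamma, and the use of step distance as the barrier measure are all assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def qtpso_sketch(cost, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, gamma=5.0, seed=0):
    """Illustrative PSO with a tunneling-style personal-best update.

    `cost` maps a (dim,) array to a scalar. The tunneling rule below is
    a plausible reading of the abstract, not the paper's exact formula:
    a candidate worse than the current personal best may still replace
    it with probability exp(-gamma * barrier), where `barrier` is
    approximated here by the distance of the move.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(cost, 1, x)        # their cost values
    g = pbest[np.argmin(pbest_f)].copy()             # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(cost, 1, x)
        for i in range(n_particles):
            if f[i] < pbest_f[i]:
                # Classical PSO: personal best only improves.
                pbest[i], pbest_f[i] = x[i], f[i]
            else:
                # Tunneling step (assumed form): accept an uphill move
                # with probability decaying exponentially in the jump
                # distance, letting the personal best escape local minima.
                barrier = np.linalg.norm(x[i] - pbest[i])
                if rng.random() < np.exp(-gamma * barrier):
                    pbest[i], pbest_f[i] = x[i], f[i]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Usage example on the Rastrigin function, a standard multimodal benchmark
# (not necessarily one of the paper's test problems).
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = qtpso_sketch(rastrigin, dim=5)
print(best_x, best_f)
```

For neural network training as in the paper, `cost` would instead evaluate the network's training loss with the particle's position interpreted as the flattened weight vector.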

Other Articles