Grey wolf optimizer-based back-propagation neural network algorithm

Citation:
Hassanin, M. F., A. M. Shoeb, and A. E. Hassanien, "Grey wolf optimizer-based back-propagation neural network algorithm", 2016 12th International Computer Engineering Conference (ICENCO), Cairo, 28-29 Dec. 2016.

Date Presented:

28-29 Dec 2016

Abstract:

For many decades, artificial neural networks (ANNs) have produced successful results in thousands of problems across many disciplines. Back-propagation (BP) is one of the candidate algorithms for training an ANN. Because of the way BP searches for a solution to the underlying problem, it has an important drawback: it can become stuck in a local minimum rather than reaching the global one. Recent studies have introduced meta-heuristic techniques to train ANNs. The current work proposes a framework in which the grey wolf optimizer (GWO) provides the initial solution to a BP ANN. Five datasets are used to benchmark the performance of GWOBP against other competitors. The first competitor is a BP ANN optimized by a genetic algorithm, the second is a BP ANN powered by a particle swarm optimizer, the third is the BP algorithm itself, and the last is a feedforward ANN enhanced by GWO. The experiments carried out show that GWOBP outperforms the compared algorithms.
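As a rough illustration of the framework described in the abstract (not the authors' implementation), the sketch below shows the grey wolf optimizer searching for an initial weight vector for a small feedforward network, which is then handed to plain gradient-descent back-propagation for refinement. The toy XOR data, the 2-2-1 sigmoid network, and all GWO/BP parameters are assumptions chosen for brevity; only NumPy is used.

# Minimal sketch of GWO-initialized back-propagation (assumed details, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset and a 2-2-1 sigmoid network (illustrative assumptions).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
sizes = [(2, 2), (1, 2), (2, 1), (1, 1)]          # shapes of W1, b1, W2, b2
dim = sum(r * c for r, c in sizes)                # total number of weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(vec):
    # Split a flat weight vector into the network's weight matrices.
    parts, i = [], 0
    for r, c in sizes:
        parts.append(vec[i:i + r * c].reshape(r, c))
        i += r * c
    return parts  # W1, b1, W2, b2

def forward(vec, X):
    W1, b1, W2, b2 = unpack(vec)
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def mse(vec):
    _, out = forward(vec, X)
    return float(np.mean((out - y) ** 2))

def gwo(fitness, dim, wolves=20, iters=100, bound=1.0):
    # Standard GWO loop: the alpha, beta, and delta wolves guide the rest of the pack.
    pack = rng.uniform(-bound, bound, (wolves, dim))
    for t in range(iters):
        scores = np.array([fitness(w) for w in pack])
        alpha, beta, delta = pack[np.argsort(scores)[:3]]
        a = 2.0 - 2.0 * t / iters                 # a decreases linearly from 2 to 0
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                new += leader - A * np.abs(C * leader - pack[i])
            pack[i] = np.clip(new / 3.0, -bound, bound)
    scores = np.array([fitness(w) for w in pack])
    return pack[np.argmin(scores)]

def backprop(vec, lr=0.5, epochs=2000):
    # Plain gradient-descent BP starting from the GWO-provided weights.
    vec = vec.copy()
    for _ in range(epochs):
        W1, b1, W2, b2 = unpack(vec)
        h, out = forward(vec, X)
        d_out = (out - y) * out * (1 - out)       # output error signal (constant factor folded into lr)
        d_h = (d_out @ W2.T) * h * (1 - h)        # hidden-layer error signal
        grads = [X.T @ d_h, d_h.sum(0, keepdims=True),
                 h.T @ d_out, d_out.sum(0, keepdims=True)]
        vec -= lr * np.concatenate([g.ravel() for g in grads])
    return vec

w0 = gwo(mse, dim)                                # GWO supplies the initial solution
w_final = backprop(w0)                            # BP refines it
print("loss after GWO only:", mse(w0))
print("loss after GWO + BP:", mse(w_final))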
