Moth-Flame Optimization for Training Multi-layer Perceptrons

Citation:
Yamany, W., A. Tharwat, M. Fawzy, and A. E. Hassanien, "Moth-Flame Optimization for Training Multi-layer Perceptrons", IEEE International Computer Engineering Conference (ICENCO), Cairo, 30 Dec. 2015.

Date Presented:

30 Dec 2015

Multi-Layer Perceptron (MLP) is one type of Feed-Forward Neural Network (FFNN). Searching for the weights and biases of an MLP is essential to achieving minimum training error. In this paper, the Moth-Flame Optimizer (MFO) is adapted to train an MLP by searching for the weights and biases that minimize the training error and maximize the classification rate. Five standard classification datasets and three function-approximation datasets are used to evaluate the performance of the presented approach. The presented approach is compared with four well-known optimization algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Evolution Strategy (ES). The experimental results show that the MFO algorithm is highly competitive, avoids entrapment in local optima, and achieves high classification accuracy.
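
To illustrate the approach, below is a minimal sketch (not the authors' implementation) of training a one-hidden-layer MLP with MFO: each candidate solution is a flat vector of the network's weights and biases, the fitness is the mean squared training error, and moths spiral around a shrinking set of flames (the best solutions found so far). The XOR dataset, network size, and all hyper-parameters here are illustrative assumptions; the paper itself uses standard classification and function-approximation benchmarks.

```python
# Hedged sketch of MFO-based MLP training; all settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR (assumption; the paper evaluates on standard datasets).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

n_in, n_hid = 2, 4
dim = n_in * n_hid + n_hid + n_hid + 1   # all weights and biases, flattened

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(vec):
    """Decode a flat vector into MLP weights/biases and return training MSE."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid]; i += n_hid
    b2 = vec[i]
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - y) ** 2)

def mfo(fitness, dim, n_moths=30, max_iter=500, lb=-5.0, ub=5.0, b=1.0):
    moths = rng.uniform(lb, ub, (n_moths, dim))
    flames, flame_fit = None, None
    for it in range(max_iter):
        moth_fit = np.array([fitness(m) for m in moths])
        # Flames = best solutions found so far (current moths + old flames).
        if flames is None:
            order = np.argsort(moth_fit)
            flames, flame_fit = moths[order].copy(), moth_fit[order].copy()
        else:
            pool = np.vstack([flames, moths])
            pool_fit = np.concatenate([flame_fit, moth_fit])
            order = np.argsort(pool_fit)[:n_moths]
            flames, flame_fit = pool[order], pool_fit[order]
        # Flame count shrinks linearly so the swarm converges on the best.
        n_flames = round(n_moths - it * (n_moths - 1) / max_iter)
        a = -1.0 - it / max_iter              # r decreases from -1 to -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]  # surplus moths use last flame
            d = np.abs(f - moths[i])
            t = (a - 1.0) * rng.random(dim) + 1.0   # random t in [a, 1]
            # Logarithmic spiral flight toward the flame.
            moths[i] = d * np.exp(b * t) * np.cos(2 * np.pi * t) + f
        moths = np.clip(moths, lb, ub)
    return flames[0], flame_fit[0]

best_vec, best_err = mfo(mse, dim)
print(f"best training MSE: {best_err:.4f}")
```

The key design choice this sketch shares with the paper's approach is treating MLP training as a continuous optimization problem: the derivative-free spiral search replaces gradient descent, which is what lets the method escape local optima that trap gradient-based training.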
