A study on performance of MHDA in training MLPs

Sree Ranjini K.S. (Homi Bhabha National Institute, Mumbai, India and Indira Gandhi Centre for Atomic Research, Kalpakkam, India)

Engineering Computations

ISSN: 0264-4401

Article publication date: 31 July 2019

Issue publication date: 15 August 2019

Abstract

Purpose

In recent years, the application of metaheuristics to training neural network models has gained significance due to the drawbacks of deterministic algorithms. This paper aims to propose the use of a recently developed "memory based hybrid dragonfly algorithm" (MHDA) for training the multi-layer perceptron (MLP) model by finding the optimal set of weights and biases.
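
The abstract frames training as a search for an optimal weight-and-bias vector. As a rough illustration of that encoding only, the sketch below flattens the parameters of a single-hidden-layer MLP into one vector and scores candidates by mean squared error. The search loop is a generic placeholder: MHDA's actual dragonfly-inspired update rules and memory mechanism are not described in this abstract, and all function names, the network shape, and the update rule here are assumptions.

    import numpy as np

    def n_params(n_in, n_hidden, n_out):
        # Total trainable parameters of a single-hidden-layer MLP:
        # input-to-hidden weights, hidden biases, hidden-to-output
        # weights, and output biases.
        return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

    def decode(vec, n_in, n_hidden, n_out):
        # Unflatten one candidate solution into weight matrices and biases.
        i = 0
        W1 = vec[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
        i += n_in * n_hidden
        b1 = vec[i:i + n_hidden]
        i += n_hidden
        W2 = vec[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
        i += n_hidden * n_out
        b2 = vec[i:]
        return W1, b1, W2, b2

    def fitness(vec, X, y, n_in, n_hidden, n_out):
        # Mean squared error of the decoded MLP on the data set;
        # this is the objective the metaheuristic minimizes.
        W1, b1, W2, b2 = decode(vec, n_in, n_hidden, n_out)
        hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid layer
        return np.mean((hidden @ W2 + b2 - y) ** 2)

    def train(X, y, n_in, n_hidden, n_out, pop_size=30, iters=200, seed=0):
        # Placeholder population-based search standing in for MHDA:
        # candidates drift toward the best-so-far solution with a small
        # Gaussian perturbation (a hypothetical update rule).
        rng = np.random.default_rng(seed)
        dim = n_params(n_in, n_hidden, n_out)
        pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
        cost = lambda v: fitness(v, X, y, n_in, n_hidden, n_out)
        best = min(pop, key=cost).copy()
        for _ in range(iters):
            pop = pop + 0.5 * (best - pop) + 0.1 * rng.standard_normal(pop.shape)
            cand = min(pop, key=cost)
            if cost(cand) < cost(best):
                best = cand.copy()
        return decode(best, n_in, n_hidden, n_out)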

Design/methodology/approach

The efficiency of MHDA in training MLPs is evaluated by applying it to classification and approximation benchmark data sets. A performance comparison between MHDA and other training algorithms is carried out, and the significance of the results is established by statistical methods. The computational complexity of the MHDA-trained MLP is also estimated.
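
The abstract does not name the statistical methods used. One common choice for comparing stochastic trainers is a Wilcoxon rank-sum test over independent runs, sketched below with hypothetical error samples; the sample values and the choice of test are assumptions for illustration, not taken from the paper.

    import numpy as np
    from scipy.stats import ranksums

    rng = np.random.default_rng(42)
    # Hypothetical final test errors from 30 independent runs per trainer;
    # in the paper these would come from the benchmark experiments.
    mhda_err = rng.normal(0.05, 0.01, 30)
    other_err = rng.normal(0.08, 0.02, 30)

    stat, p = ranksums(mhda_err, other_err)
    print(f"rank-sum statistic = {stat:.3f}, p-value = {p:.4f}")
    # p < 0.05 would indicate a statistically significant difference.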

Findings

Simulation results show that MHDA can effectively find a near-optimal set of weights and biases at a higher convergence rate than other training algorithms.

Originality/value

This paper presents MHDA as an alternative optimization algorithm for training MLPs. MHDA can effectively optimize the set of weights and biases and is a potential trainer for MLPs.

Citation

K.S., S.R. (2019), "A study on performance of MHDA in training MLPs", Engineering Computations, Vol. 36 No. 6, pp. 1820-1834. https://doi.org/10.1108/EC-05-2018-0216

Publisher

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited