The purpose of this paper is to demonstrate the scalability of the genetic hybrid algorithm (GHA) in monitoring a local neural network algorithm for difficult non‐linear/chaotic time series problems.
GHA is a general‐purpose algorithm, spanning several areas of mathematical problem solving. If needed, GHA invokes an accelerator function at key stages of the solution process, providing it with the current population of solution vectors in the argument list of the function. The user has control over the computational stage (generation of a new population, crossover, mutation, etc.) and can modify the population of solution vectors, e.g. by invoking special‐purpose algorithms through the accelerator channel. If needed, the steps of GHA can be partly or completely superseded by the special‐purpose mathematical/artificial intelligence‐based algorithm. The system can be used as a package for classical mathematical programming with the genetic sub‐block deactivated. On the other hand, the algorithm can be turned into a machinery for stochastic analysis (e.g. for Monte Carlo simulation, time series modelling or neural networks), where the mathematical programming and genetic computing facilities are deactivated or appropriately adjusted. Finally, pure evolutionary computation may be activated for studying genetic phenomena. GHA contains a flexible generic multi‐computer framework based on MPI, allowing implementations of a wide range of parallel models.
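The accelerator‐channel idea described above can be illustrated with a minimal sketch. This is not GHA's actual interface: the names (`run_ga`, `accelerator`, the `"mutation"` stage label) and the toy genetic operators are assumptions made here purely to show the pattern of a user hook that receives the current population at a named stage and may supersede it.

```python
import random

def run_ga(fitness, pop, generations, accelerator=None):
    """Toy genetic loop (minimization) with an accelerator hook.

    `accelerator(stage, pop)` - if supplied - receives the stage name and the
    current population of solution vectors and returns a (possibly modified)
    population, mirroring the accelerator channel described in the text.
    """
    for _ in range(generations):
        # selection: keep the better half of the population
        pop.sort(key=fitness)
        parents = pop[: len(pop) // 2]
        # crossover: blend random parent pairs into children
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 for x, y in zip(a, b)])
        pop = parents + children
        # mutation: small Gaussian perturbation of every vector
        pop = [[x + random.gauss(0, 0.01) for x in v] for v in pop]
        # accelerator channel: user code may modify or replace the population
        if accelerator is not None:
            pop = accelerator("mutation", pop)
    return min(pop, key=fitness)
```

A special‐purpose algorithm (e.g. a local search) would be passed as `accelerator`; with `accelerator=None` the hook is simply skipped, corresponding to running the genetic block alone.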
The results indicate that GHA is scalable, yet due to the inherent stochasticity of neural networks and the genetic algorithm, the scalability evidence put forth in this paper is only indicative. The scalability of GHA follows from maximal node intelligence allowing minimal internodal communication in problems with independent computational blocks.
The paper shows that GHA can be effectively run on both sequential and parallel platforms. The multicomputer layout is based on maximizing the intelligence of the nodes – all nodes are provided with the same program and the available computational support libraries – and on minimizing internodal communication; hence GHA does not limit the size of the mesh in problems with independent computational tasks.
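The layout above – identical program on every node, independent tasks, minimal internodal traffic – can be sketched as follows. This is an illustration only, using Python's `multiprocessing` in place of GHA's MPI framework; the island‐model search in `evolve_island` and all names here are assumptions, not GHA code.

```python
import multiprocessing as mp
import random

def evolve_island(seed):
    """One 'node': evolves its own sub-population with no communication.

    Every node runs this same code; only the seed differs, so the single
    message a node ever sends is its final best solution.
    """
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(50)]
    for _ in range(100):
        pop.sort(key=abs)  # toy objective: minimise |x|
        # keep the better half, add mutated copies of it
        pop = pop[:25] + [x + rng.gauss(0, 0.1) for x in pop[:25]]
    return min(pop, key=abs)

if __name__ == "__main__":
    # 4 "nodes", each running the identical program on an independent task;
    # internodal communication is one result per node, gathered at the end
    with mp.Pool(4) as pool:
        bests = pool.map(evolve_island, range(4))
    print(min(bests, key=abs))
```

Because the workers exchange nothing during the run, adding nodes adds work capacity without adding communication, which is the property the scalability argument rests on.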
Copyright © 2008, Emerald Group Publishing Limited