TY - JOUR
AB - Purpose – To demonstrate the scalability of the genetic hybrid algorithm (GHA) in monitoring a local neural network algorithm for difficult non‐linear/chaotic time series problems. Design/methodology/approach – GHA is a general‐purpose algorithm, spanning several areas of mathematical problem solving. If needed, GHA invokes an accelerator function at key stages of the solution process, providing it with the current population of solution vectors in the argument list of the function. The user has control over the computational stage (generation of a new population, crossover, mutation, etc.) and can modify the population of solution vectors, e.g. by invoking special‐purpose algorithms through the accelerator channel. If needed, the steps of GHA can be partly or completely superseded by the special‐purpose mathematical/artificial intelligence‐based algorithm. The system can be used as a package for classical mathematical programming with the genetic sub‐block deactivated. On the other hand, the algorithm can be turned into machinery for stochastic analysis (e.g. Monte Carlo simulation, time series modelling or neural networks), where the mathematical programming and genetic computing facilities are deactivated or appropriately adjusted. Finally, pure evolutionary computation may be activated for studying genetic phenomena. GHA contains a flexible generic multi‐computer framework based on MPI, allowing implementations of a wide range of parallel models. Findings – The results indicate that GHA is scalable, yet owing to the inherent stochasticity of neural networks and the genetic algorithm, the scalability evidence put forth in this paper is only indicative. The scalability of GHA follows from maximal node intelligence allowing minimal internodal communication in problems with independent computational blocks. Originality/value – The paper shows that GHA can be run effectively on both sequential and parallel platforms. The multicomputer layout is based on maximizing the intelligence of the nodes – all nodes are provided with the same program and the available computational support libraries – and minimizing internodal communication, hence GHA does not limit the size of the mesh in problems with independent computational tasks.
VL - 37
IS - 9/10
SN - 0368-492X
DO - 10.1108/03684920810907823
UR - https://doi.org/10.1108/03684920810907823
AU - Östermark, Ralf
ED - Chen, Mian‐yun
ED - Lin, Yi
ED - Xiong, Hejing
PY - 2008
Y1 - 2008/01/01
TI - Scalability of the genetic hybrid algorithm on a parallel supercomputer
T2 - Kybernetes
PB - Emerald Group Publishing Limited
SP - 1492
EP - 1507
Y2 - 2024/04/25
ER -