Citation
Andrew, A.M. (2004), "Introduction to Evolutionary Computing", Kybernetes, Vol. 33 No. 5/6, pp. 1064-1065. https://doi.org/10.1108/03684920410699216
Publisher
Emerald Group Publishing Limited
Copyright © 2004, Emerald Group Publishing Limited
This is intended primarily as a textbook for lecturers and for graduate and undergraduate students, but it will certainly attract a wider readership. The authors explain that each of them has many years of experience teaching evolutionary computing (EC), both at his home university and elsewhere, and that they wrote this book because they saw the need for a suitable textbook. They have provided examples of practical applications in most chapters, as well as exercises for the student and suggestions for further reading. There is also a Web site from which teaching material, including a PowerPoint presentation, can be downloaded and used freely.
The basic ideas of neo‐Darwinian evolution are briefly reviewed, since they provide the inspiration for EC methods, but the emphasis is on practical computing, and special stratagems are introduced as needed, irrespective of biological parallels. EC methods are characterised by being population based, so that a whole collection of candidate solutions is held and processed simultaneously, with some form of recombination to mix the solutions and with stochastic elements entering through random recombination and mutation.
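To give a flavour of how these ingredients fit together, the following minimal sketch in Python (not code from the book; the tournament‐selection scheme, helper names and parameter values are merely illustrative assumptions) shows a generic evolutionary loop in which the fitness function, recombination and mutation are supplied as parameters.

```python
import random

def evolve(fitness, random_individual, recombine, mutate,
           pop_size=50, generations=100, tournament_size=3):
    """Generic evolutionary loop: a population of candidate solutions
    is repeatedly selected, recombined and mutated.  The helpers
    (fitness, random_individual, recombine, mutate) are supplied by
    the user; tournament selection is an illustrative choice."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            # pick two parents by tournament selection (stochastic)
            parent1 = max(random.sample(population, tournament_size), key=fitness)
            parent2 = max(random.sample(population, tournament_size), key=fitness)
            # recombination mixes the parents; mutation then perturbs the result
            offspring.append(mutate(recombine(parent1, parent2)))
        population = offspring
    return max(population, key=fitness)
```

With bit‐string individuals, for instance, recombine might be one‐point crossover and mutate a bit‐flip applied with small probability.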
The term EC is used to cover four “dialects”, denoted genetic algorithms (GA), evolution strategies (ES), evolutionary programming (EP) and genetic programming (GP). These operate on different kinds of population units, or representations: GA operates on strings of symbols from a finite alphabet, analogous to the genetic code, whereas ES operates on real‐valued vectors, EP on finite state machines, and GP on tree structures.
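As a rough illustration (again not drawn from the book, and with all values purely invented), the four kinds of representation might be pictured as Python values along the following lines.

```python
# GA: a string of symbols from a finite alphabet (here, bits)
ga_individual = [0, 1, 1, 0, 1, 0, 0, 1]

# ES: a real-valued vector
es_individual = [0.73, -1.2, 4.8]

# EP: a finite state machine, encoded as a transition/output table
# keyed by (state, input symbol) -> (next state, output symbol)
ep_individual = {
    ("s0", "a"): ("s1", "x"),
    ("s0", "b"): ("s0", "y"),
    ("s1", "a"): ("s0", "y"),
    ("s1", "b"): ("s1", "x"),
}

# GP: a program as an expression tree (nested tuples)
gp_individual = ("+", ("*", "x", "x"), ("sin", "x"))
```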
The power of the methods is demonstrated by the examples of applications described. Some of these may seem esoteric, referring to the optimisation of specially‐devised test functions, the colouring of graphs, the problem of placing eight queens on a chessboard so that none puts another into check, or learning to play checkers (draughts). Others, however, such as the “knapsack problem”, can be argued to model a class of practical problems, and there are others that are thoroughly practical, including scheduling and timetabling and modelling financial markets. A particularly convincing example is mentioned in the first chapter, where an irregular lattice structure designed using a GA is shown. Although it looks as though it had been run over by a car, it is vastly superior to a regular structure in an application where vibrations could be troublesome.
Besides serving as an introduction, the book is a guide to the state of the art. One advanced topic is self‐adaptation, whereby the adaptive process itself is improved adaptively. This is developed in connection with ES, which deals with continuous variables, and the adaptive improvements include the adjustment of mutation step sizes and the reorientation of axes in the search hyperspace. This has at least a superficial similarity to considerations explored by Rosenbrock (1960) on optimisation using a single operating point.
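The commonest form of this self‐adaptation attaches a vector of step sizes to each individual and perturbs the step sizes themselves by a log‐normal rule before they are used. The short sketch below is an illustration of that idea, not the book's code, and it assumes the customary learning‐rate settings of 1/sqrt(2n) and 1/sqrt(2*sqrt(n)).

```python
import math
import random

def self_adaptive_mutate(x, sigmas):
    """Mutate a real-valued vector x with per-coordinate step sizes,
    first updating the step sizes by a log-normal rule (standard ES
    self-adaptation; the learning rates follow the usual
    1/sqrt(2n) and 1/sqrt(2*sqrt(n)) recommendations)."""
    n = len(x)
    tau_global = 1.0 / math.sqrt(2.0 * n)
    tau_local = 1.0 / math.sqrt(2.0 * math.sqrt(n))
    common = tau_global * random.gauss(0.0, 1.0)
    new_sigmas = [s * math.exp(common + tau_local * random.gauss(0.0, 1.0))
                  for s in sigmas]
    new_x = [xi + si * random.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigmas)]
    return new_x, new_sigmas
```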
It is also acknowledged that evolution, although epitomised as “survival of the fittest”, can involve co‐operation as well as competition, and results on the “prisoner's dilemma” and the evolution of altruism are mentioned. Surprisingly, the Gaia hypothesis of Lovelock (1979) is not mentioned, although it provides strong evidence for co‐operation in evolution.
All the methods depend on the specification of a fitness function to govern survival. The possibility of interactive operation, in which a human operator evaluates fitness, is also treated, with examples of the evolutionary art so produced.
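For the eight‐queens problem mentioned earlier, for example, a fitness function might simply count the non‐attacking pairs of queens. The sketch below is an illustration of that idea rather than the book's formulation, and it assumes the common permutation representation in which column i holds a queen in row perm[i].

```python
def eight_queens_fitness(perm):
    """Count non-attacking pairs of queens.  With a permutation
    representation, row and column clashes are impossible, so only
    diagonal attacks need checking; a perfect board of n = 8 queens
    scores 28, the total number of pairs."""
    n = len(perm)
    non_attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(perm[i] - perm[j]) != abs(i - j):
                non_attacking += 1
    return non_attacking
```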
This is a well‐produced and very useful book.
References
Lovelock, J.E. (1979), Gaia: A New Look at Life on Earth, Oxford University Press, Oxford.
Rosenbrock, H.H. (1960), “An automatic method for finding the greatest or least value of a function”, The Computer Journal, Vol. 3, pp. 175-184.