Neural Networks: An Introduction ‐ 2nd edition

John Galletly (The American University in Bulgaria)

Kybernetes

ISSN: 0368-492X

Article publication date: 1 November 1998

Citation

Galletly, J. (1998), "Neural Networks: An Introduction ‐ 2nd edition", Kybernetes, Vol. 27 No. 8, pp. 978-979. https://doi.org/10.1108/k.1998.27.8.978.3

Publisher

Emerald Group Publishing Limited


In this second updated and corrected edition, rather than completely updating the text with the latest neural network developments, the authors chose, instead, to concentrate their efforts on two new chapters. One describes the application of genetic algorithms to neural network learning, whilst the other discusses back‐propagation for recurrent neural networks. In addition, there is an expanded chapter on neural network applications.
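To give the flavour of the first of these new chapters: a genetic algorithm can evolve a network's weights directly, with no gradient computation at all. The sketch below is this reviewer's own illustration of the idea, not code from the book; the fixed 2-2-1 architecture, the XOR fitness task and the mutation-only truncation selection are all assumed details.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])

    def forward(w, x):
        # Tiny 2-2-1 feed-forward net; w packs 9 numbers: 4 hidden
        # weights, 2 hidden biases, 2 output weights, 1 output bias
        # (a hypothetical layout chosen for this sketch).
        h = np.tanh(w[:4].reshape(2, 2) @ x + w[4:6])
        return np.tanh(w[6:8] @ h + w[8])

    def fitness(w):
        # Negative squared error over the four XOR cases.
        return -sum((forward(w, x) - t) ** 2 for x, t in zip(X, y))

    pop = rng.normal(size=(50, 9))              # population of weight vectors
    for _ in range(200):                        # generations
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-25:]]  # truncation selection
        children = (parents[rng.integers(0, 25, size=25)]
                    + rng.normal(0.0, 0.3, size=(25, 9)))  # mutated copies
        pop = np.vstack([parents, children])
    print(max(fitness(w) for w in pop))         # best fitness approaches 0

After a couple of hundred generations the best fitness approaches zero, i.e. the evolved weights solve XOR; a crossover operator, which this sketch omits, is the other standard ingredient of such schemes.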

The authors have adopted a three‐part organization of the material in the book: Part 1, comprising 16 chapters, examines models of neural networks; Part 2 covers the statistical physics of neural networks; and Part 3 deals with the demonstration computer programs that accompany the text. The material of the second part indicates the flavour of the book: the description of neural networks is accompanied by a mathematical physics treatment.

Part 1 contains the guts of the book. It starts with a short description of the central nervous system in terms of neurons and the cerebral cortex, followed by a brief history of neural network computation. Next come chapters dealing with many mainstream neural network models: associative memory nets (both deterministic and stochastic); advanced learning strategies for associative memory; simple and multi‐layer perceptrons; Hopfield networks for combinatorial optimization problems; the Boltzmann Machine; and unsupervised learning models such as winner‐takes‐all and the Kohonen Map.
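As an illustration of the simplest of these models, a deterministic Hopfield‐style associative memory takes only a few lines. The sketch below is this reviewer's, not one of the book's demonstration programs; it assumes bipolar (+1/−1) patterns and the Hebbian rule, stores two random patterns, and recovers one from a corrupted cue.

    import numpy as np

    rng = np.random.default_rng(0)

    def train(patterns):
        # Hebbian rule: w_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal.
        n = patterns.shape[1]
        w = patterns.T @ patterns / n
        np.fill_diagonal(w, 0.0)
        return w

    def recall(w, state, sweeps=10):
        # Deterministic asynchronous updates: S_i <- sign(sum_j w_ij S_j).
        s = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(len(s)):
                s[i] = 1 if w[i] @ s >= 0 else -1
        return s

    patterns = rng.choice([-1, 1], size=(2, 100))   # two patterns, N = 100
    w = train(patterns)
    cue = patterns[0].copy()
    cue[:10] *= -1                                  # corrupt 10 of 100 bits
    print(np.array_equal(recall(w, cue), patterns[0]))  # True, with high probability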

Part 2 contains the “hard‐core” physics of neural networks, and is heavy going in terms of the mathematics. It starts off with a summary of statistical mechanics and Ising spin glasses, then proceeds to examine the phase structure and equilibrium properties of the Hopfield network. One chapter then analyses the statistical properties of a Hopfield network using a partition function for the case when the number of stored patterns per neuron tends to zero; whilst another repeats the analysis for the case when the number of stored patterns per neuron is finite. A third chapter examines the storage capacity of a Hopfield network.
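For readers unfamiliar with this formalism, the central quantities in these chapters are the Hopfield energy function and the loading ratio; in standard notation (not necessarily the book's own):

    H = -\frac{1}{2} \sum_{i \neq j} w_{ij} S_i S_j, \qquad
    w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu}, \qquad
    \alpha = \frac{p}{N}.

The first analysis mentioned above takes the limit α → 0, while the finite‐α treatment leads to the well‐known storage capacity α_c ≈ 0.138, beyond which retrieval of the stored patterns breaks down.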

Part 3 contains a description of a number of useful neural network demonstrations, the software for which accompanies the book. There is an explanation of each program, together with details of some numerical experiments which may be carried out.

To this reviewer's eye, the style, look and feel of the book are reminiscent of the standard text on neural networks by Hertz, Krogh and Palmer, and that is a compliment indeed, perhaps because both books follow a mathematical physics style. This book by Müller, Reinhardt and Strickland complements rather than mirrors the other, even though some material is covered in both. Certainly, a good knowledge of differential calculus is needed for a full understanding of the mathematical treatment of neural networks. As the authors admit, the book addresses mainly an audience of physicists; it is not a book for the faint‐hearted.

A criticism of the material concerns the two chapters devoted to neural network applications. It is a pity that the authors do not include more practical, commercial applications, e.g. financial‐sector problems. The applications presented tend to be from a traditional, albeit interesting, mould; high‐energy physics particle detection is a little too esoteric. The material for the book is derived from a university graduate course for physicists. With a wider spectrum of applications, the book would reach a wider audience.
