Cambrian Intelligence: The Early History of the New AI

Kybernetes

ISSN: 0368-492X

Article publication date: 1 February 2001


Citation

Andrew, A.M. (2001), "Cambrian Intelligence: The Early History of the New AI", Kybernetes, Vol. 30 No. 1, pp. 103-115. https://doi.org/10.1108/k.2001.30.1.103.4

Publisher

Emerald Group Publishing Limited


The main title presumably refers to the fact that the developments reported are from the famous Artificial Intelligence Laboratory of MIT in Cambridge, Massachusetts, though the additional suggestion of a link to the geological period in which complex life forms emerged is pretty certainly not accidental. This somewhat laboured pun should not disguise the fact that the contents of this book carry an extremely important message, amounting to a defence and partial development of a new approach to AI, together with a debunking of much that has gone before.

Strictly, the debunking concerns the relevance of what has gone before to the achievement of intelligent operation in real‐world environments. Mainstream AI is not debunked insofar as it makes sound contributions to certain kinds of abstract problem solving. The author claims, however, that it makes the wrong decomposition of the problem of achieving intelligent operation, and that a series of robotic devices developed by him and his colleagues indicates an enormously more promising new direction.

Apart from the “Preface” and short introductory notes to each chapter, the material is not new: it is a collection of eight papers previously published in various places from 1986 onward. Four of them are primarily technical and four philosophical, and together they provide a valuable introduction to, and defence of, the new ideas. Some of the special terms denoting the principles of the new approach have appeared elsewhere in the literature and are reasonably familiar, such as “embedded systems”, “situated agents”, “subsumption architecture”, “behaviour‐based robotics” and “physical grounding”.

The central argument is that the traditional AI decomposition into perception, modelling, planning, task execution and motor control is unworkable and that an enormous amount of effort has been devoted to modelling and planning without appreciation of the difficulty of coupling these to the real world through implementations of perception and motor control. The latter are usually seen as uninteresting and ignored by workers in AI. Even the much‐acclaimed robots referred to as Shakey and Flakey operated in specially adapted environments, and even then painfully slowly.

The alternative explored by Brooks and his colleagues is to build robotic systems incrementally, with a much closer coupling of sensors to actuators. There is ample evidence that biological systems depend, at least partly, on relatively close coupling, with obvious examples in spinal reflexes. The decomposition in the revised method is into layers of behaviour, of which the first may be simply the requirement of avoiding collisions with other objects. This layer is implemented and is adequate to determine the behaviour to which it refers, and is fully debugged in real‐world interactions before the next layer is added, and so on. No layer influences any higher layer, but higher layers can interrupt and replace input or output signals of lower layers. On top of the “avoid objects” layer there can be successive layers determining behaviour referred to as “wander”, “explore” and “build maps”.
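To make the layering concrete, the following is a minimal sketch, in Python, of a subsumption‐style arbitration scheme. The class names, sensor keys and action strings are invented for illustration; Brooks’ actual controllers were built from networks of augmented finite state machines, not from anything resembling this code.

```python
from typing import Optional


class Layer:
    """One behaviour layer: maps sensor readings to a proposed action,
    or returns None to leave the layers beneath it undisturbed."""

    def propose(self, sensors: dict) -> Optional[str]:
        raise NotImplementedError


class AvoidObjects(Layer):
    # Lowest layer: built and debugged first, always present.
    def propose(self, sensors):
        if sensors.get("obstacle_distance", 1.0) < 0.2:
            return "turn_away"
        return None


class Wander(Layer):
    # Added on top of AvoidObjects; defers when an obstacle is imminent.
    def propose(self, sensors):
        if sensors.get("obstacle_distance", 1.0) < 0.2:
            return None
        return "random_heading"


class Explore(Layer):
    # A still higher layer that only intervenes when it has something to say.
    def propose(self, sensors):
        if sensors.get("unexplored_direction") is not None:
            return "head_toward_unexplored"
        return None


def arbitrate(layers, sensors):
    """Higher layers may replace the output of lower ones;
    no layer ever influences a layer above it."""
    for layer in reversed(layers):          # highest priority first
        action = layer.propose(sensors)
        if action is not None:
            return action
    return "stop"


# Layers listed from lowest (added and debugged first) to highest.
controller = [AvoidObjects(), Wander(), Explore()]
print(arbitrate(controller, {"obstacle_distance": 0.1}))   # -> "turn_away"
print(arbitrate(controller, {"obstacle_distance": 1.0}))   # -> "random_heading"
```

The point the sketch tries to capture is that each layer is a complete, independently debuggable behaviour in its own right, and a higher layer only ever overrides what a lower layer would otherwise have done.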

These rather elementary considerations suggest a preoccupation with simple goals, but in fact, by building on humble beginnings, the approach has already achieved more than has been done by traditional methods. The robots move purposefully, and one of them collects empty drinks cans, in an office space that has deliberately not been tidied or simplified or kept clear of cleaners and other passing traffic. Naturally, the needs of space exploration have encouraged the development of autonomous devices such as “planetary rovers”, tested in environments that are realistic, though not the office one. In the second chapter there is a description of the application of the principles to a six‐legged walking robot which can cover rough terrain, and in the third it is shown that a useful form of map‐making ability, without the construction of an explicit representation, can operate within the same general framework. Brooks says he was much encouraged in his approach by this achievement.

As he hints in the book’s title, Brooks is well aware that his approach corresponds to what was presumably the course of biological evolution. More could be made of this aspect, and the theory might be related to the development of the nervous system, of which parts are known to differ in phylogenetic age, with the cerebral cortex relatively young. Brooks quotes elapsed intervals in evolutionary development, such as a billion years from the beginnings of life to photosynthetic plants, then another billion and a half till fish and vertebrates appeared, with subsequent development achieved in a relatively short space of time. These suggest that the “hard part” of the evolution of intelligence lies in the low‐level interactions rather than in the “intellectual” activities that have been the target of traditional AI.

The robot control systems have been implemented in units of “augmented finite state machines” for which no biological parallel is claimed. They allow interesting behaviour without great computational complexity, and one of Brooks’ aims is to let the control systems be sufficiently compact and economical in power consumption that the robots can be self‐contained. “Silicon compilation” has been used to implement the necessary control on a single chip.
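The book’s chapters do not spell out the internal structure of these machines here, but as a rough, hypothetical illustration of what “augmented” means in this setting (a conventional finite state machine supplemented with registers that latch recent messages and a timer that can itself trigger transitions), the following Python sketch may help. All names are invented for the example and it is not Brooks’ formulation.

```python
import time


class AugmentedFSM:
    """Toy augmented finite state machine: an ordinary FSM plus a register
    that latches the most recent input message and a timer that forces a
    transition back to 'idle' when no input arrives for too long."""

    def __init__(self, timeout: float = 0.5):
        self.state = "idle"
        self.register = None                  # latches the latest message
        self.timeout = timeout
        self.last_event = time.monotonic()

    def send(self, message):
        """Deliver an input message (e.g. a sensor reading) to the machine."""
        self.register = message
        self.last_event = time.monotonic()
        if self.state == "idle":
            self.state = "active"

    def step(self):
        """Advance the machine; the timer resets it on prolonged silence."""
        if time.monotonic() - self.last_event > self.timeout:
            self.state = "idle"
            self.register = None
        return self.state, self.register


m = AugmentedFSM()
m.send("bump_left")
print(m.step())    # -> ('active', 'bump_left')
```

Many such small machines, wired together so that some can suppress or replace the messages of others, make up a complete layered controller of the kind described above, and their simplicity is what allows a whole control system to fit on a single chip.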

It has been demonstrated that the approach produces interesting robot behaviour, but the bold claim that this amounts to a new and powerful approach to the whole of AI is clearly more controversial. Hybrid schemes, combining the behaviour‐based methods with those of traditional AI, can readily be visualised, but Brooks advocates what could be called a pure behaviour‐based approach, without the deliberate implementation of internal representations or reasoning procedures or planning. Behaviours that can be interpreted as indicating such capacities should be expected to emerge, and their attribution should be, like “intelligence” itself, in the eye of the beholder. Brooks admits that this is just his “hunch”, and that only time will tell, but in the four “philosophical” chapters he presents various compelling arguments. The final chapter, occupying about a quarter of the book and having originated as a special address to an international AI conference, is a very thorough and persuasive review.

The history of the American Autonomous Land Vehicle project illustrates the need for a change of approach. A very large amount of work was devoted to the solution of this problem by the traditional AI techniques of scene analysis and internal modelling of the environment, etc., but the system used in the final demonstration vehicle involved relatively direct coupling of sensors to actuators, essentially as in the new behaviour‐based methods. The notes on the back cover of the book say that in the meantime the author has used the methods to produce humanoid robots, including one called “Cog”, but no details of these appear in the inner pages. There is, however, plenty of food for thought without them. (Although not mentioned in the book, information on “Cog” and “The Cog Shop” can be found at the Web site: <http://www.ai.mit.edu/projects/cog/>, and on the work on mobile robots in general at: <http://www.ai.mit.edu/projects/mobilerobots/>, where another important potential application area, namely the clearing of land mines, is also mentioned.)
