Reasoning about Rational Agents

Kybernetes

ISSN: 0368-492X

Article publication date: 1 February 2002


Citation

Andrew, A.M. (2002), "Reasoning about Rational Agents", Kybernetes, Vol. 31 No. 1. https://doi.org/10.1108/k.2002.06731aae.005

Publisher: Emerald Group Publishing Limited

Copyright © 2002, MCB UP Limited


Reasoning about Rational Agents

Michael Wooldridge, MIT Press, Cambridge, MA, 2000, xi + 227 pp., ISBN 0-262-23213-8, hardback, £23.50

By an "agent" is meant an entity, possibly a person, capable of independent autonomous action, and the description as "rational" implies that the agent makes good decisions about what to do. The book is mainly concerned with principles underlying the design of computer systems that constitute rational agents, but the initial inspiration comes from a treatment that focuses on human performance and consequently comes under psychology or philosophy.

The basis is a Belief-Desire-Intention, or BDI, model. A rational agent has a set of beliefs about the environment, some of them the result of processing sensory input. The formation of beliefs is outside the scope of the present book, but references are made to works dealing with it.

The aim, of course, is to produce artefacts that will operate effectively in unpredictable complex environments which may involve interaction with other agents. There are other well-established bodies of theory that go some way towards meeting the requirement, and Games Theory and Decision Theory are mentioned, with the observation that, although their results are valuable and may be used, they do not in themselves provide the necessary flexibility.

The book introduces and develops a special logic termed LORA, or Logic for Rational Agents. This allows the formal representation of beliefs, desires and intentions, as well as of the agents holding them. Beliefs give rise to intentions, and a subset of these that are judged to be worth pursuing are termed "desires". (It would seem more natural to reverse this and to make "intentions" a subset of "desires", but the BDI model has it this way.)
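The belief-to-intention progression described above can be sketched as a simple control loop. This is an illustrative reconstruction, not the book's own algorithm; the function names (brf, options, filter_) and the example percepts are assumptions made for the sketch.

```python
# A minimal BDI-style deliberation loop: beliefs are revised from percepts,
# candidate desires are generated, and the subset judged worth pursuing
# becomes the agent's intentions.  All names and rules are illustrative.

def brf(beliefs, percept):
    """Belief revision: fold a new percept into the belief set."""
    return beliefs | {percept}

def options(beliefs, intentions):
    """Generate candidate desires from current beliefs and intentions."""
    desires = set()
    if "battery_low" in beliefs:
        desires.add("recharge")
    if "task_pending" in beliefs:
        desires.add("do_task")
    return desires

def filter_(beliefs, desires, intentions):
    """Deliberation: commit to the subset of desires worth pursuing."""
    # Here: an urgent maintenance goal pre-empts everything else.
    if "recharge" in desires:
        return {"recharge"}
    return desires

beliefs, intentions = set(), set()
for percept in ["task_pending", "battery_low"]:
    beliefs = brf(beliefs, percept)
    desires = options(beliefs, intentions)
    intentions = filter_(beliefs, desires, intentions)

print(intentions)  # after both percepts the agent commits to recharging
```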

Not everyone would agree that the development of a formal logic is the best way to set about the essentially AI problem of constructing rational agents, and the author discusses this. He wisely avoids making any assertions about the fundamental nature of biological intelligence, but maintains that a formal logic approach is a powerful means of producing artificial systems because of its rigour and transparency.

He also defends the introduction of a new logic, with operators such as Bel, Des and Int (for "believes", "desires" and "intends") that are not found in classical first-order logic, even though some of the requirements could be met using extremely tedious constructions within first-order logic. Purists may object to the fact that the new logic cannot be described as "fully axiomatised". The complexity of LORA, which embodies features from various specialised logics, means that there is little hope of achieving this accolade, but this is not to deny its usefulness.

The introduction of operators that are extra to classical logic brands this as a "modal" logic. Each of Bel, Des and Int takes two arguments, so that for example Bel(x, y) might be used to mean that agent x believes y, where y would be a logical expression. Since the expression for the belief has to be used in ways that do not correspond to an ordinary argument of a function, a notation similar to a LISP statement is preferred, as (Bel x y). What is believed by an agent may of course refer to beliefs held by other agents.

As well as being modal, LORA is necessarily a temporal logic, and at the same time it has to be a logic capable of producing actions. For its temporal function, past history is considered to be a fixed sequence of events, but the future branches into a tree of possibilities, with some of the choices influenced by actions of the agent concerned. Variants of the existential quantifier ("there exists ... such that ...") and the universal quantifier ("for all ...") are introduced to refer to paths in the tree, so that the existential path quantifier is a means of asserting that a statement is true for some path within a given set, while the universal path quantifier allows a similar assertion for all paths.
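The two path quantifiers can be illustrated over a toy tree of possible futures. The tree, the labelling and the "eventually" property below are assumptions made for the sketch and do not reproduce LORA's syntax.

```python
# E: some future path satisfies the property; A: every future path does.
tree = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": [], "s3": []}
labels = {"s0": set(), "s1": {"p"}, "s2": set(), "s3": {"p"}}

def paths(state):
    """All maximal paths through the future tree from `state`."""
    if not tree[state]:
        return [[state]]
    return [[state] + rest for nxt in tree[state] for rest in paths(nxt)]

def eventually(path, prop):
    return any(prop in labels[s] for s in path)

def E(state, prop):   # existential path quantifier: on some path ...
    return any(eventually(p, prop) for p in paths(state))

def A(state, prop):   # universal path quantifier: on all paths ...
    return all(eventually(p, prop) for p in paths(state))

print(E("s0", "p"), A("s0", "p"))  # True False: "p" holds on one branch only
```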

The author is well aware that he will have a mixed audience, with some readers unfamiliar with formal logic while others may be experts. He handles the situation admirably, with a chapter introducing LORA fairly informally, then another that includes formal proofs, and an even more formal treatment in an Appendix. Later chapters deal with collective mental states of agents, communication (including the action of initiating or requesting a "speech act"), and co-operation.

All this is presented very clearly and persuasively and the reader is assured right at the beginning that the methods have been used successfully in the control system of an autonomous space probe and in automatic Internet-based systems performing commercial transactions. The basic theory as given clearly needs to be supplemented by some imaginative development to achieve such ends, but we are assured it has been done. The final chapter discusses the transition from the logic specification to a working system, with several sections headed "Case Study" but remaining fairly general. A special programming language called METATEM is introduced.

This approach using formal logic, with many references back to earlier work in AI, is in sharp contrast to other developments in AI and Robotics which favour relatively simple autonomous devices. The arguments of Brooks (1999) and those advanced by Warwick (1998) in connection with simple robots at Reading University are examples of this alternative trend. A common feature of the approaches is that both emphasise the need to consider "situated" agents rather than such lone performers as mathematical theorem-provers or chess machines.

Another common feature of the two approaches is that both have been applied to space exploration, the "simple robot" one to the design of planetary rovers and the formal logic one to control of an autonomous space probe. Some of the most challenging problems arise when the agent has to take account of the beliefs, desires and intentions of other agents. An agent dealing in shares, for example, would try to guess the intentions of other agents influencing the market. We are not told how well LORA deals with "brainteaser" situations of the type of the "Three wise men" problem [1] discussed by Konolige (1982), though that paper is included in the references and the work is clearly known to the author of the present book.

The book is a lucid and authoritative treatment of the logic approach, which could be worth dipping into purely for its relatively painless introduction to formal logic as such. Its topics are linked to a bibliography of no fewer than 251 items.

Alex M. Andrew

Note

1. The problem is as follows. There was once a king who had three wise men and wanted to decide which was wisest. On the forehead of each he made a mark, either black or white, such that each could see the marks of the other two but not his own. He also told them (and each knew the others had been told) that there was at least one white mark. The wisest would be the first to declare the colour of his own mark. There was no communication between the three, except that each would know if another solved the problem first. After a time one of them gave the right answer. How was this possible? If the successful participant is called A, he was able to reason as follows: "If my spot is black, then one of my colleagues, say B, can see a black spot (mine) and a white one (C's). In that case B could argue that if his own spot were also black, C would be seeing two black spots and, knowing there is at least one white mark, would at once declare his own spot white; so from C's silence B could infer that his (B's) spot is not black, and would say so. Since B too is silent, my supposition is wrong and my spot must be white."
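The reasoning in the note can be checked by brute force: enumerate the worlds consistent with "at least one white mark" and simulate rounds in which each wise man announces his own colour as soon as only one colour fits everything he has seen and heard. This is plain enumeration under the standard assumption that the king in fact marked all three white, not a LORA formalisation.

```python
# Brute-force epistemic simulation of the "three wise men" puzzle.
from itertools import product

WORLDS = [w for w in product("BW", repeat=3) if "W" in w]  # at least one white

def solve(actual):
    """Return (round, who) for the first agent able to name his own mark."""
    candidates = list(WORLDS)
    for rnd in range(1, 4):
        for i in range(3):
            # Worlds agent i cannot rule out: they match the marks he sees.
            others = [w for w in candidates
                      if all(w[j] == actual[j] for j in range(3) if j != i)]
            if len({w[i] for w in others}) == 1:
                return rnd, i
        # Silence all round: drop worlds in which someone would have spoken.
        candidates = [w for w in candidates
                      if all(len({v[i] for v in candidates
                                  if all(v[j] == w[j] for j in range(3) if j != i)}) > 1
                             for i in range(3))]
    return None

print(solve(("W", "W", "W")))  # with all marks white, an agent answers in round 3
```

With one white and two black marks the white-marked agent answers immediately, matching the elimination step inside A's argument.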

References

Brooks, Rodney A. (1999), Cambrian Intelligence: The Early History of the New AI, MIT Press, Cambridge, Mass.

Konolige, K. (1982), "A first-order formalisation of knowledge and action for a multi-agent planning system", in: Hayes, J.E., Michie, D. and Pao, Y.-H. (eds), Machine Intelligence 10, Ellis Horwood, Chichester, pp. 41-72.

Warwick, Kevin (1998), In the Mind of the Machine: The Breakthrough in Artificial Intelligence, Arrow Books, London.
