Artificial beings: the conscience of a conscious machine

Industrial Robot

ISSN: 0143-991x

Article publication date: 19 October 2010

Citation

(2010), "Artificial beings: the conscience of a conscious machine", Industrial Robot, Vol. 37 No. 6. https://doi.org/10.1108/ir.2010.04937fae.001

Publisher: Emerald Group Publishing Limited

Copyright © 2010, Emerald Group Publishing Limited

Article Type: Book review From: Industrial Robot: An International Journal, Volume 37, Issue 6

Jacques Pitrat, ISTE Ltd and Wiley, March 2009, $90, 288 pp., ISBN: 9781848211018, web site: www.iste.co.uk/index.php?f=x&ACTION=View&id=257

In Artificial Beings, Jacques Pitrat offers a thorough meditation on the alternative ways in which machines can achieve their own kind of conscientious consciousness. Pitrat is a fellow of AAAI and ECCAI, and he draws on more than thirty years of work in the field of AI. His corpus of work includes the CAIA (Chercheur Artificiel en Intelligence Artificielle) system, developed over decades to be a kind of artificial scientist. The book presents Pitrat's considered arguments in favor of a kind of hard AI that will eventually involve a new form of machine consciousness. The great strength of the book is that Pitrat is in the enviable position of having an actual system to describe and use as an example throughout. While Pitrat has one foot in the shifting sands of philosophy, the other is firmly rooted in computer science, and this should make the book appealing to the readers of this journal.

After introducing his work on the artificial scientist CAIA, Pitrat begins a discussion of machine consciousness. This is notoriously dangerous, and philosophical, territory. It is one that most computer scientists are secretly interested in, but they usually wait until reaching emeritus status before they dare to publish their thoughts on the topic, because the phenomenon of consciousness seems to evade scientific scrutiny and remains a metaphysical concept. Pitrat adopts a strategy used by a number of modern commentators on consciousness: he begins by acknowledging the shifting, and sometimes contradictory, meanings other authors have attached to consciousness, and then develops a more modest and attainable kind of artificial consciousness that will be, by its very nature, somewhat alien to the consciousness we personally experience. The reader should be aware that this is not primarily a book about machine consciousness; in fact, Pitrat seems to entertain notions of machine consciousness only because he believes it to be a necessary ingredient for a machine conscience, or machine ethics. This is where the book gets interesting: Pitrat argues that machine conscience and machine consciousness must be developed simultaneously.

Pitrat goes on to argue that since machines are made of very different stuff than humans, they will have distinct affordances over human agents. For instance, they will be able to be conscious of much larger amounts of data than human agents can, and will therefore draw conclusions, and form plans of action, that would be beyond human abilities. Likewise, a new and very different form of machine conscience might also develop. These machines will not be bound by many of the pressing human concerns that often conflict with moral agency, such as survival, reproduction, and hoarding scarce goods. Evolution has placed these problems at the feet of human agents, but machines would be beyond those influences. This will free artificial moral agents to consider many alternative courses of action that would be too alien for human agents to contemplate.

Conscience and ethics are all about finding the right relationship between the interests of the self and the other. Selfhood and otherness are relatively easy concepts for natural agents to ponder: a natural agent is embodied in a particular body and has access to the thoughts of only one mind; that body and mind are the self, and the rest is other. Pitrat realizes that the situation is quite different for artificial agents distributed across complex computational systems. To wit, he embarks on a lengthy discussion of just what it means for an artificial agent to claim something as 'itself.' During this discussion Pitrat draws on Marvin Minsky's Society of Mind thesis and adapts it to his CAIA system, where certain computational subsystems, each pursuing its own disparate goals, can form a society of mind that seems to coalesce into goal-driven activities resembling a kind of conscious behavior.

Pitrat calls this "auto-observation" and claims that it is necessary not only for task accomplishment but also for conscientious action. Any modest computational system can be directed at achieving some end, but a complex society of these systems, with the ability to auto-observe the entire system as a whole, can then set goals for itself. These goals might lead to self-evaluation and even the eventual elevation of the system's behavior towards an artificial morality. During this discussion Pitrat takes a shot at updating Asimov's famous three laws of robotics (including Asimov's later-added "zeroth" law). Asimov was prescient enough to see the eventual need for roboethics, but the laws he gave us in his books are intentionally flawed and lead to a large number of entertaining conundrums and paradoxes that he used as plot devices in his numerous robot stories. Pitrat sees the inflexibility of the Asimov laws and suggests that the only way around these issues is to give artificial agents the ability to learn and edit their own rules of behavior. In other words, there is no comprehensive set of rules we can preprogram into the machine; instead, we can only program limited guidelines and give the machine the ability to learn and modify its behavior as needed. Such a system is more of an artificial conscience than a list of inviolable commandments. Pitrat ends the book with a discussion of how this artificial conscience is implemented in CAIA and how it helps the system actually become better at doing science.
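The contrast between a fixed, Asimov-style list of laws and revisable guidelines can be made concrete with a toy sketch. This is only an illustration of the general idea, not Pitrat's actual CAIA implementation; the rule name, threshold, and revision heuristic are invented for the example. The point is simply that the agent's behavioral rules are ordinary data it can observe and revise in light of its own recorded behavior:

```python
# Illustrative sketch (not Pitrat's CAIA code): an agent whose behavioral
# rules are mutable data, revised by observing its own action history,
# rather than a fixed list of inviolable laws.

class ReflectiveAgent:
    def __init__(self):
        # Rules are data, so the agent can inspect and edit them.
        self.rules = {"max_risk": 0.5}        # hypothetical guideline
        self.history = []                      # (action, risk, outcome) log

    def act(self, action, risk):
        # Apply the current guideline to decide whether to act.
        allowed = risk <= self.rules["max_risk"]
        outcome = "done" if allowed else "refused"
        self.history.append((action, risk, outcome))
        return outcome

    def auto_observe(self):
        # Observe the system's behavior as a whole and revise the guideline:
        # if all permitted actions were far below the limit, tighten it.
        risks = [r for (_, r, o) in self.history if o == "done"]
        if risks and max(risks) < self.rules["max_risk"] / 2:
            self.rules["max_risk"] = max(risks) * 2


agent = ReflectiveAgent()
agent.act("fetch part", 0.1)
agent.auto_observe()
print(agent.rules["max_risk"])  # guideline revised from 0.5 to 0.2
```

The design choice mirrors the review's point: instead of hard-coding every permissible behavior, the programmer supplies only a limited initial guideline, and the machine adjusts its own rules as experience accumulates.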

Pitrat’s book is a solid addition to the growing literature on artificial ethical systems and should be read by anyone doing work in that field. It is also of interest to those who like to think big about the promise and perils of AI.

John P. Sullins, Associate Professor, Philosophy, Sonoma State University, Rohnert Park, California, USA
