The conscious PC
Professor Igor Aleksander of Imperial College, London, UK, a leading researcher in artificial intelligence, believes that by the year 2040 we will feel guilty when we turn off a personal computer. Such machines, he says, will probably be conscious: they will possess a consciousness of their own. Addressing the British Association for the Advancement of Science at Cardiff, UK, in September 1998, he said that:
In 40 years' time we may feel a pang of guilt when we turn our computers off. By then we would not think of buying a computer that was not conscious. In fact their consciousness should be indistinguishable from ours.
Such a computer, we were told, would not feel intimations of mortality when it was turned off. It would know that its back-up stores were still functioning and that it was therefore not being eliminated. He said that:
The term "machine consciousness" is very important. Consciousness of anything we build will be particular to the hardware, technology or software that that thing is made of.
The principles by which artificial consciousness would emerge, he believes, are not far removed from those by which real consciousness emerges from real systems. Machine emotions, he suggested, began with the ability to run away from some situations and to approach others, behaviours he likened to fear and pleasure. From those, it might be possible to develop more subtle machine emotions. He stated his overall aim as:
... not to produce highly-strung computers. The point is not to make robots that go around being depressed. That has been done beautifully by Douglas Adams. The point is to understand what happens to humans when they get depressed.
Professor Aleksander has been carrying out research at Imperial College for many years, and his work is recognised worldwide. To back up his latest theories, he has designed a computer program called Magnus. This system, he claims, already shows some evidence of consciousness. At the British Association Science Festival, at the University of Wales, Cardiff, he spoke about the system having:
... a sense of where it has come from and where it would like to go. It sometimes makes arbitrary decisions. In human beings, we might call that free will. It would probably use language and be quite responsive to vision, so that you could show it things you are describing. Science fiction is way ahead of us in this, but that is only because it can look ahead and see what is possible. The big change is that a conscious computer might answer a problem by saying, "I see what you mean, but I think we should do X, Y or Z". It could conceivably disagree with you and argue. When a computer starts using the word "I" in that context then we will know that it is fully conscious.
The Magnus system has shown that, to some extent, it can feel the quality of things, such as "redness" or "ballness", when visualising a red ball. This concept, known as qualia, has traditionally been used by philosophers as proof that consciousness is a human condition that can never be replicated in a machine. Magnus has a million neurons, yet it is tiny compared with even small specialised parts of the brain. It is, of course, hoped that such technology will lead to conscious computers.
Readers will recall that it was the author Douglas Adams who thought up the depressed robot he called "Marvin the Paranoid Android" in his radio play The Hitch Hiker's Guide to the Galaxy. Professor Aleksander, however, reiterates his belief that what we need to do is not just to build such robots but to try to understand what happens when humans get into that condition.
The main challenge he identified was that of learning the structure of language, a prerequisite for developing thinking, talking computers. He summed up some of his ideas by saying that:
At the moment we talk to these systems and they talk to us by drawing pictures on the screen, but that is just a question of technology. The thing is it is going to be pretty unsurprising when it happens. At the moment you can buy a piece of software for £25 that enables you to talk to a computer. It does not understand anything, but one day it will.
Most cyberneticians and systemists will support this view. There is no doubt that many of the advances we are discussing will come to fruition in the next millennium. One of our problems, however, is that we give the public at large the impression that systems capable of performing such functions already exist. There is no doubt that artificial intelligence will require many major breakthroughs before such goals can be achieved. As so many AI enthusiasts have repeatedly told us, the current state of AI is that we are gradually climbing to the top of a tree when in fact we need to reach the moon. Some scientists believe we should return to ground level and start again, just as space was conquered not by flying ever higher in aeroplanes but by new initiatives launched from the earth's surface.