Emerald Group Publishing Limited
Copyright © 2004, Emerald Group Publishing Limited
Our theme for this issue is “Robotics in the military and aerospace industries” – both areas that fully comply with Professor Red Whittaker's well-known statement that “If a person needs to suit up, then a robot should be doing the job instead”.
When Beagle 2 crash-landed on Mars, the resultant terminal loss of communication was a great disappointment to all those involved in the project, but at least we were not faced with the loss of life of a Challenger-type disaster or the so nearly disastrous Apollo 13 mission.
As a young lad I was thrilled by the manned Moon landings, and “One small step...” is as etched on my visual and aural memory as anyone else's. If you can put a price on the raising of the human spirit, or even on political expediency, then many would argue that this outweighs the cost of relatively few personal tragedies. I am not sure where I stand on this issue; however, I am pretty certain that if scientific advancement is the primary goal then robotic space exploration is the way to go.
For one thing, it is a great deal cheaper; moreover, scientists can afford to take more risks and push the boundaries far further than they would dare if people were involved.
What, though, of military robotics? The US military, for one, has a clearly stated policy of removing as many people from the battlefield as possible, with robotic tanks and aircraft high on the development list.
Is this a good thing?
Up until the Second World War most fighting was conducted largely between armies, and until a few hundred years ago a battle was something of a spectator sport. This “Rule of Engagement” works fine while each side has a reasonable chance of winning, but what if technology removes the risk from military activities in the same way that it removes the risk from planetary exploration? Are the robot owners laying themselves wide open to reprisals against which they have no robotic safeguards?
It is a sad fact of war, however, that it provides the funds and manpower resources for a great deal of technological development. The global positioning system is just one example for which I, as a sailor, am particularly grateful.
The DARPA challenge (DARPA grand challenge – a pioneering event for autonomous robotic ground vehicles, pp. 414–422) is another to which I personally would give ten out of ten as a worthwhile endeavour. So, on balance I would vote in favour of the development of military robotics – but with a cautionary note...
Isaac Asimov defined his three laws of robotics as follows.
1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.
2. Robots must follow instructions from humans without violating rule 1.
3. Robots must protect themselves without violating the other rules.
The simplicity of these rules belies their sophistication, as anyone who has read Asimov's novels will appreciate. I would not suggest that they should be interpreted too literally – but nor should they be abandoned without careful consideration and a detailed analysis of the consequences.