Military robotics and the robotics community’s responsibility

Industrial Robot

ISSN: 0143-991x

Article publication date: 23 August 2011

Citation

Arkin, R. (2011), "Military robotics and the robotics community’s responsibility", Industrial Robot, Vol. 38 No. 5. https://doi.org/10.1108/ir.2011.04938eaa.001

Publisher

Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited



Article Type: Viewpoint. From: Industrial Robot: An International Journal, Volume 38, Issue 5.

Unfortunately, mankind will seemingly persist in conducting warfare, as all of recorded history up to the present day attests. New technology has historically made such killing more efficient, e.g. the longbow, artillery, armored vehicles, aircraft carriers, and nuclear weapons. Each of these technologies is widely viewed as having produced a revolution in military affairs (RMA), as they fundamentally changed the ways in which war was waged.

Many consider robotics technology a potential new RMA, especially as we move toward ever more autonomous systems on the battlefield. This will bring changes in tactics and precision and perhaps, if done correctly, a reduction in atrocities, as outlined in research conducted in our laboratory.

But this emerging technology can lead us into many different futures, some of them dystopian. It is crucially important that we not rush headlong into the design, development, and deployment of these systems without thoroughly examining their consequences for all parties: friendly forces, enemy combatants, civilians, and society in general. This can only be done through reasoned discussion of the issues associated with this new technology. The tempo of the battlefield is now outpacing the warfighter’s ability to make sound, rational decisions in the heat of combat. Multiple potential benefits of intelligent war machines have been claimed, including: a reduction in friendly casualties; force multiplication; greater precision due to persistent stare; the ability, with the advent of network-centric warfare, to integrate far more information from many more sources far faster than any human could; and the removal of fear, anger, frustration, and other cognitive problems from the process of killing. This could possibly even yield greater adherence to the laws of war by robotic systems than by soldiers of flesh and blood alone.

But there are many counterarguments as well. These include the difficulty of establishing responsibility for war crimes involving autonomous weaponry, the potential lowering of the threshold for entry into war, the military’s possible reluctance to give robots the right to refuse an order, proliferation, and mission creep, to name but a few.

Nonetheless, is it not our responsibility as scientists to look for effective ways to reduce man’s inhumanity to man through technology? Where is this more evident than on the battlefield? Research in ethical military robotics can and should be applied toward achieving this end. While I am not averse to the outright banning of lethal autonomous systems on the battlefield, I believe that if these systems were properly inculcated with a moral ability to adhere to the laws of war and rules of engagement, they could outperform human soldiers with respect to humaneness. Even though these systems could never be expected to be perfectly ethical, the end result could be a saving of non-combatant lives when compared with the behavior of human warfighters.

This is obviously a controversial assertion, and I have often stated that the discussion my research engenders on this topic is as important as the research itself. We must continue to examine the development of lethal autonomous systems in forums such as the United Nations and the International Committee of the Red Cross (Geneva Conventions) to ensure that the internationally agreed-upon standards governing the way in which war is waged are adhered to as this technology moves forward. If we ignore this, we do so at our own peril.

Roboticists should lead this discussion and fully acknowledge their complicity in the creation of these systems, whether or not they accept funding directly from the military. If you create ideas or technology of intellectual value, someone, someday, somewhere will put them to use in military systems. We as a community must be proactive in tackling these problems, raising our heads above the day-to-day concerns of the profession and understanding that there are bigger things at stake here than our next patent, publication, or promotion.

Working through professional societies is one avenue for engagement; the IEEE Robotics and Automation Society’s Technical Committee on Roboethics and the IEEE Society for Social Implications of Technology are but two examples. Join the debate now, so that we as a community of researchers can assume the responsibility we bear as we move forward with this new RMA.

Ronald Arkin
Regents’ Professor at the School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia, USA
