
The Morality of Autonomous Robots

https://doi.org/10.1080/15027570.2013.818399

Abstract

While lethal autonomous robotic weapons raise many issues, we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions about sensor capability or operational questions about the chain of command. We further argue that the answer must be "no," and we offer three reasons for banning autonomous robots. (1) Such a robot treats a human as an object rather than as a person with inherent dignity. (2) A machine run by a program has no human emotions and no feeling for the seriousness of killing a human. (3) Using such a robot would violate military honor. We therefore conclude that the use of an autonomous robot (as opposed to a remotely operated robot) in lethal operations should be banned for a first strike, while leaving open the possibility of retaliatory or defensive use.

Key takeaways

  1. Autonomous robots should not make lethal decisions; human dignity must be preserved.
  2. Killing decisions must remain a human responsibility to uphold moral integrity.
  3. The paper advocates banning autonomous robots in first strike scenarios.
  4. Military honor and ethical considerations necessitate human involvement in warfare.
  5. Existing laws of war need amendments to address autonomous weaponry effectively.