The Morality of Autonomous Robots
https://doi.org/10.1080/15027570.2013.818399
Abstract
While there are many issues raised by the use of lethal autonomous robotic weapons, we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability or operational questions of chain of command. We further argue that the answer must be "no" and offer three reasons for banning autonomous robots. (1) Such a robot treats a human as an object rather than as a person with inherent dignity. (2) A machine run by a program has no human emotions, and hence no feeling for the seriousness of killing a human. (3) Using such a robot would violate military honor. We therefore conclude that the use of an autonomous robot (as distinct from a remotely operated robot) in lethal operations should be banned for a first strike, while leaving open the possibility of retaliatory or defensive use.
Key takeaways
- Autonomous robots should not make lethal decisions; human dignity must be preserved.
- Killing decisions must remain a human responsibility to uphold moral integrity.
- The paper advocates banning autonomous robots in first strike scenarios.
- Military honor and ethical considerations necessitate human involvement in warfare.
- Existing laws of war need amendments to address autonomous weaponry effectively.
Related papers
Ethics and Information Technology, 2010
Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps the most interesting assertion is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper focuses on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers. Throughout the paper we explore the philosophical critique of this claim and also look at how the robots of today are affecting our ability to fight wars in a just manner.
Ethics and Robotics, 2009
Killing with robots is no longer a future scenario; it became a reality in the first decade of the 21st century. The U.S. and Israeli forces are using uninhabited combat aerial vehicles (UCAVs) in their so-called wars on terror, especially for targeted killing missions in Iraq, Pakistan, and Afghanistan, as well as in Lebanon and the occupied Palestinian territories (for example, in Israel's recent war on Gaza). In recent years, the number of UCAV air attacks has risen significantly, as has the number of civilians killed. Nevertheless, the automation of warfare is envisioned by the US government and military for 2032 at the latest, and military robots are increasingly used in civilian contexts. In the face of these developments, discussions of robotic warfare and security technology from a science and technology studies and technoethical perspective are urgently needed. Important questions are how robotic warfare and security applications may find their way into society on a broad scale, and whether this might lead to a new global arms race, violations of the international law of warfare, an increasing endangerment of civilians, the transmission of racist and sexist implications, and the blurring of boundaries between military, police, and civil society.
Mechanisms and Machine Science, 2025
This article investigates the growing use of robots and automation in military operations, emphasizing the ethical challenges posed to international humanitarian law. The Iraq War marked a key shift, transforming robots from tools viewed skeptically to vital military assets. By 2006, robots had executed over 30,000 missions, and demand for unmanned aerial vehicles (UAVs) surged. These technologies span military branches, including the navy's use of unmanned submarines. The focus is on Lethal Autonomous Weapon Systems (LAWS), which can independently make combat decisions. Nations like the U.S., China, and Russia are advancing LAWS, raising ethical concerns about autonomous warfare. The study aims to clarify issues surrounding LAWS, examine international arms control discourse, and propose regulatory strategies. Key areas of discussion include defining LAWS, reviewing debates under the Convention on Certain Conventional Weapons (CCW), addressing regulatory challenges, and suggesting regulation methods for dual-use technology weapons. The article stresses the need for preemptive arms control to limit LAWS development and anticipates future ethical and military landscapes shaped by these technologies. It calls for aligning future LAWS regulations with existing frameworks to manage their impact effectively.
Pak. Journal of Int’L Affairs, Vol 5, Issue 1 , 2022
Artificial intelligence and technological advancements have led to the development of robots capable of performing various functions. One of the purposes of robots is to replace human soldiers on battlefields. Killer robots, referred to as "autonomous weapon systems," pose a threat to the principles of human accountability that underpin the international criminal justice system and the current law of war that has arisen to support and enforce it. They pose a challenge to the Law of War's conceptual framework. In the worst-case scenario, they might encourage the development of weapons systems specifically designed to avoid liability for the conduct of war by both governments and individuals. Furthermore, killer robots cannot comply with fundamental law of war principles such as the principle of responsibility. The accountability of autonomous …
Journal of Military Ethics, 2015
Philosophy & Technology, 2011
Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by putting robotics into context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military ...
The debate on and around "killer robots" has been firmly established at the crossroads of ethical, legal, political, strategic, and scientific discourses. Flourishing at the two opposite poles, with a few contributors caught in the middle, the polemic still falls short of a detailed, balanced, and systematic analysis. It is for these reasons that we focus on the nitty-gritties, multiple pros and cons, and implications of autonomous weapon systems (AWS) for the prospects of the international order. Moreover, a nuanced discussion needs to feature considerations of their technological continuity vs. novelty. The analysis begins by properly delimiting the AWS category as fully autonomous (lethal) weapon systems, capable of operating without human control or supervision, including in dynamic and unstructured environments, and capable of engaging in independent (lethal) decision-making, targeting, and firing, including in an offensive manner. As its primary goal, the article aims to move the existing debate to the level of a first-order structure and offers its comprehensive operationalisation. We propose an original framework based on a thorough analysis of six specific dilemmas, detailing the pro/con argument for each: (1) (un)predictability of AWS performance; (2) dehumanization of lethal decision-making; (3) depersonalisation of the enemy (non-)combatant; (4) the human-machine nexus in coordinated operations; (5) strategic considerations; (6) AWS operation in law(less) zones. Concluding remarks follow. Keywords: autonomous weapon systems, killer robots, lethal decision-making, military ethics, artificial intelligence, security regulation, humanitarian law, revolution in military affairs, military strategy
This paper explores and presents a novel scientific paradigm for the ethics, methodologies, and dichotomies of autonomous military robot systems used to advance and dynamically change how warfare in the twenty-first century is conducted and judged from an ethical, moral, and legal perspective, with the aim of creating a new concept through a scientific survey.
2013
While the use of telerobotic and semi-autonomous weapons systems has been enthusiastically embraced by politicians and militaries around the world, their deployment has not gone without criticism. Strong critics such as Asaro (
Current Robotics Reports, 2020
Abstract Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention. Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control of weapons systems. Summary: The review highlights the crucial role played by the robotics research community in starting ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect for the laws of war, responsibility-ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. It is pointed out that these various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Finally, it is emphasized that the MHC idea looms large over shared control policies to be adopted in other ethically and legally sensitive application domains for robotics and artificial intelligence. Keywords: Autonomous weapons systems, Roboethics, International humanitarian law, Human-robot shared control, Meaningful human control