Judging an AI-programmed genocide
2024, Mail & Guardian
Abstract
In the case of the genocide of Palestinians, AI-programmed systems are killing in an automated fashion, revealing the human-generated intention behind the horror. Such a human intention demonstrates the long-gone ideal of the rule of international law and the much-needed update of its jurisdiction.
Related papers
War and Algorithm, 2019
How international law might relate to new technologies and regulate their practices has been a pressing question since long before the use of armed drones challenged conventional conceptions of warfare. Emerging algorithmic and machine learning technologies present further challenges, not only to the political dream of their regulation by law but also to the juridical form itself and its humanist presumptions. Even if international humanitarian law could grasp the phenomenon of algorithmic warfare, this law would replicate and perpetuate the asymmetries that have accompanied its historical development. Taking the US Department of Defense 'Project Maven' as an example, this chapter argues that resisting the foreclosure of human judgment on the battlefield calls for a collective political response to interrogate and unsettle the processes that contribute to the emergence of autonomous weapons.
Bloomsbury Academic, 2023
In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political implications and takes into account how epistemic agency differs from moral agency. Through his insightful comparisons between human and animal intelligence, Montemayor makes it clear why adopting a need-based attention approach justifies a humanitarian framework. This is an urgent, timely argument for developing AI technologies based on international human rights agreements.
Journal of International Humanitarian Action
In the debate on how to improve efficiencies in the humanitarian sector and better meet people's needs, the argument for the use of artificial intelligence (AI) and automated decision-making (ADM) systems has gained significant traction and ignited controversy for its ethical and human rights-related implications. Setting aside the implications of introducing unmanned and automated systems in warfare, we focus instead on the impact of the adoption of AI-based ADMs in humanitarian response. In order to maintain the status and protection conferred by the humanitarian mandate, aid organizations are called to abide by a broad set of rules condensed in the humanitarian principles and notably the principles of humanity, neutrality, impartiality, and independence. But how do these principles operate when decision-making is automated? This article opens with an overview of AI and ADMs in the humanitarian sector, with special attention to the concept of algorithmic opacity. It then explores t...
Paper, 2019
Against the backdrop of the strategic impact of artificial intelligence (AI) in global politics and international relations, the paper analyzes the question of lethal autonomous weapons systems, which in the future, if actually developed, could be able to perform missions autonomously, select targets, and use force without human intervention. Arguments for and against the development of such systems are discussed from the perspective of International Humanitarian Law (IHL), in particular its basic principles and the Geneva Protocol I. The issue is considered in the context of current debates on autonomous weapons under the Convention on Certain Conventional Weapons (CCW), which established a Group of Governmental Experts to deal with the matter in Geneva. Possible diplomatic scenarios that could result from such discussions are also examined.
Acta Globalis Humanitatis et Linguarum ISSN: 3030-1718, 2025
The right to benefit from scientific advances, including new technologies, has always been a fundamental human right. One such new technology is artificial intelligence, a form of intelligence that emerged in the 1950s and is an inseparable part of the digital revolution. The development of artificial intelligence and its application in many areas of human life, especially in the field of human rights, has transformed the way people live. In the present research, the impact of the use of artificial intelligence on the international human rights system is examined using an analytical-descriptive research method and library sources. The results of the study indicate that the use of artificial intelligence in relation to the rights recognized in the several generations of human rights (first, second, and third) has both positive and negative effects, that it has the potential to be considered an instance of a fourth generation of human rights (the doctrine of technology), and that addressing its possible negative effects calls for legal measures at the national and international levels aimed at rule-making and at strengthening processes of partnership.
Palgrave MacMillan, 2023
This book explores the rapidly evolving landscape of artificial intelligence and its impact on human society. From our daily interactions with AI-powered technologies to the emergence of superintelligent machines, the book delves into the potential risks and benefits of this groundbreaking technology. Drawing on real-world examples of AI's pervasiveness in various aspects of our lives, the book highlights the urgent need to protect both human and machine rights. Through an in-depth analysis of two zones of conflict - machines violating human rights and humans violating "machine rights" - the author argues for establishing an "AI Convention" to regulate the claim rights and duties of superintelligent machines. While some experts believe that superintelligent machines will solve all of humanity's problems, the book acknowledges the potential for disaster if such entities are not aligned with human moral values and norms. The AI Convention could be a crucial safeguard against the unforeseen consequences of unchecked technological advancements. The AI Convention is a thought-provoking and timely exploration of the complex ethical and legal considerations surrounding artificial intelligence. It provides a roadmap for policymakers, technologists, and concerned citizens to navigate the challenges and opportunities of the age of advanced intelligence.
2024
This chapter aims to present the evolution of AI and robotic technologies, with emphasis on those developed for military use, and the main strategic agendas of superpowers such as the USA, China, and Russia, as well as of peripheral powers. The authors also refer to the uses of such technologies on the battlefield. The chapter further reveals the ethical dimensions of current military AI technologies. It starts with Mark Coeckelbergh's paper, emphasizing his call for a new approach to technoethics. The authors then turn to the ethical theory of Neil C. Rowe and his propositions for the ethical improvement of algorithms. Finally, the authors present the notion of electronic personhood proposed by Avila Negri, also touching upon the fact that the legal debate tends to fall into an anthropomorphic fallacy. To conclude, "Thou Shalt Not Kill", the highest "Levinasian Imperative", closes the gap of the anthropomorphic fallacy, so that our relationship with the killer machines may be viewed as asymmetric, non-anthropomorphic, and non-zoomorphic.
Zenodo (CERN European Organization for Nuclear Research), 2023
Artificial Intelligence (AI) has the potential to revolutionize various aspects of our lives, but it also raises significant ethical concerns. This paper examines the impact of AI on selected human rights, such as the right to privacy and freedom from discrimination, and discusses the issues related to the codification and regulation of AI from global and regional perspectives. AI has the potential to enhance human capabilities and improve decision-making processes, but it also poses threats to privacy and raises concerns about bias and accountability. AI algorithms can perpetuate existing societal stereotypes and discrimination, leading to significant violations of human rights, including the right to equality and non-discrimination. Furthermore, the use of autonomous weapons and drones has raised significant ethical concerns related to human rights. These weapons can potentially cause harm to innocent civilians and violate the right to life. There are ongoing debates about the development and use of these technologies and the need for international regulations to ensure their ethical use. Additionally, with the increasing use of automation and AI in various industries, there are concerns that many jobs may become obsolete, leading to significant job losses and threatening the right to work and a dignified livelihood. The paper also highlights the need for future work in AI ethics, including the development of AI systems that are transparent, explainable, and fair. The paper concludes that while AI has the potential to significantly benefit society, its development and deployment must be guided by ethical principles to prevent its negative impact on human rights.
