Algorithmic Warfare Applying Artificial
Abstract
Algorithmic warfare represents a transformative shift in military operations driven by disruptive technologies such as artificial intelligence, autonomous systems, and big data analysis. This paper explores the implications of integrating intelligent machines into military strategies, examining both the potential for enhanced effectiveness in decision-making and the ethical, moral, and legal complexities arising from their employment. By analyzing different perspectives on the role of intelligent machines, it highlights the significant changes in warfare dynamics and the necessity for a new understanding of human-machine collaboration on the battlefield.
Related papers
Defense & Security Analysis, 2019
Recent developments in artificial intelligence (AI) suggest that this emerging technology will have a deterministic and potentially transformative influence on military power, strategic competition, and world politics more broadly. After the initial surge of broad speculation in the literature related to AI, this article provides some much-needed specificity to the debate. It argues that, left unchecked, the uncertainties and vulnerabilities created by the rapid proliferation and diffusion of AI could become a major potential source of instability and great-power strategic rivalry. The article identifies several AI-related innovations and technological developments that will likely have genuine consequences for military applications, from the tactical battlefield level to the strategic level.
Stanford Law and Policy Review, 25, 2014
In this Article, I review the military and security uses of robotics and "unmanned" or "uninhabited" (and sometimes "remotely piloted") vehicles in a number of relevant conflict environments that, in turn, raise issues of law and ethics bearing significantly on both foreign and domestic policy initiatives. My treatment applies to the use of autonomous unmanned platforms in combat and low-intensity international conflict, but also offers guidance for the increased domestic use of both remotely controlled and fully autonomous unmanned aerial, maritime, and ground systems for immigration control, border surveillance, drug interdiction, and domestic law enforcement. I outline the emerging debate concerning "robot morality" and computational models of moral cognition, and examine the implications of this debate for the future reliability, safety, and effectiveness of autonomous systems (whether weaponized or unarmed) that might come to be deployed in both domestic and international conflict situations. Likewise, I discuss attempts by the International Committee for Robot Arms Control (ICRAC) to outlaw or ban the use of lethally armed autonomous systems, as well as an alternative proposal by the eminent Yale University ethicist Wendell Wallach to have lethally armed autonomous systems capable of making targeting decisions independent of any human oversight specifically designated "mala in se" under international law. Following the approach of Marchant et al., however, I summarize the lessons learned and the areas of provisional consensus reached thus far in this debate in the form of "soft-law" precepts that reflect emergent norms and a growing international consensus regarding the proper use and governance of such weapons.
The revolution and evolution of computers in both the military and non-military communities have been remarkable. The difficulty lies in deciding how best to use the available hardware and software. This study looks at the evolution of computers from their inception, through their first military uses and their present-day military uses, and finally at possible future uses, narrowing its scope to the evaluation of artificial intelligence and expert computer systems only. It approaches the field of computers from a user-oriented perspective: the information provided will not break new ground on the technical aspects of hardware or software engineering, but will instead review the beginnings of computers, with specific attention paid to those areas where the military has played a role. A major goal of this study is to identify current applications of artificial intelligence and expert computer systems that are under research and development. "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it" was the proposal put forward for the Dartmouth Conference held in 1956, organized by Marvin Minsky and John McCarthy. At the conference, McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.
2024
The integration of Artificial Intelligence (AI) into Military Combat Decision-Making Processes (MCDMP) has been capturing the attention of numerous nations and international organisations. This thesis explores the complex realm of military decision-making, often marked by high-stakes situations and time constraints, which can lead to cognitive biases and heuristic-driven errors. Adding new technologies to processes in which critical decisions must be made will require certain adjustments and approaches by the human operator. Given the humanitarian impact of the decisions taken, AI integration must be done carefully, addressing potentially hindering factors to ensure the responsible use of these technologies. Some of these factors surround human-AI collaboration, specifically the acceptance of the technology, which can affect its usage and development, as suggested by the literature. Our research employs a multifaceted qualitative approach, combining a review of academic literature, interviews with experts in military science with AI knowledge, and interviews with military personnel, to provide a comprehensive understanding of the impressions held by specialists and military personnel regarding AI as a decision-support system (DSS). This study raises awareness of the importance of cognitive constructs in fostering human-AI collaboration and uncovers the current perspectives military combat decision-makers hold on using AI technology to aid decision-making. We aim to contribute to the ongoing discussion regarding the challenges and opportunities of integrating AI as a DSS in military operations, offering insights that can facilitate a more informed and effective adoption of AI technology in high-stakes contexts. Through the Technology Acceptance Model (TAM) and Technological Frames theory, we identify perception, assumptions, expectations, and trust as factors that affect the acceptance of AI as a DSS, thus enabling the responsible use of AI tools to enhance the effectiveness of military combat decision-making.
Indonesian Journal of Interdisciplinary Research in Science and Technology (MARCOPOLO), 2024
This article aims to analyze the urgency of implementing artificial intelligence (AI) in supporting contemporary military operations. This study employs a qualitative descriptive method by analyzing related literature, government policies, and case studies of AI applications in the military domain. The research findings indicate that AI has the potential to significantly enhance the efficiency, accuracy, and effectiveness of military operations. The application of AI can encompass various fields such as intelligence analysis, mission planning, weapons system control, logistics, and training simulation. However, the development and implementation of AI must also consider ethical, security, and regulatory aspects.
Land Forces Academy Review
The paper is a non-technical approach to artificial intelligence (AI), the author claiming no competence in the technology field. It expresses the view of a person interested in and concerned about the role of artificial intelligence as a lever of international power. It reviews how the United States of America officially recognizes the benefit of artificial intelligence in the military domain and its usefulness for ensuring national security. Artificial intelligence, a booming technological field, influences our existence as individuals, societies, and states. It affects more and more areas of activity, draws both the state (as the center of decision and political action) and society (the large technology companies and academia) into a continuous mechanism of development and innovation, requires the state to develop strategies in the field, and places on world leaders' agendas a new issue of hope and fear at the same time.
Preprint R.G., 2025
Artificial intelligence (AI) is playing an increasingly prominent role in the military sphere, bringing both numerous benefits and key challenges and risks. AI in the military enables increased operational efficiency through autonomous drones and vehicles that can perform complex missions with minimal human intervention, and it improves the precision of military operations. Artificial intelligence also supports real-time data analysis for faster and more accurate decision-making. The determinants of AI applications in the military sphere are the growing demands of national security and the need to keep up with technological advances, allowing countries to gain a strategic advantage and better protect their interests. However, AI applications also come with serious challenges and risks, such as issues of ethics and accountability, as autonomous combat systems can make decisions to use force without direct human oversight. In addition, information systems may be vulnerable to cyber attacks in which AI technology may itself be employed, posing additional risks to national security. The development of AI technology, including generative AI, often outpaces the necessary changes in legal norms. An international legal framework must be developed to regulate the use of AI in armed conflict, to prevent escalation and the uncontrolled development of military technologies. Accordingly, the question of the scope of decision-making granted to combat systems based on generative artificial intelligence is rapidly growing in importance as a key element in managing the risk of the potential threats arising from it.
IGI Global, 2025
Modern AI methodologies are changing the face of warfare by enabling militaries to enhance their operational capabilities and, more importantly, their decision-making in executing operations. Advanced technologies built on machine learning, natural language processing, and autonomous systems enable real-time data analysis, predictive modeling, and increased situational awareness on the battlefield. Given the difficult challenges facing military forces, AI integration provides strategic advantages to military organizations, enabling rapid responses to emerging threats and better resource distribution. However, this evolution raises issues of accountability, bias, and the implications of autonomous decision-making in combat scenarios. Overall, AI methodologies are set to redefine the nature of warfare, and it therefore becomes highly important for military leadership to navigate these opportunities along with the risks arising from such technologies.
Artificial intelligence (AI for short) is on everybody’s minds these days. Most of the world’s leading companies are making massive investments in it. Governments are scrambling to catch up. Every one of us who uses Google Search or any of the new digital assistants on our smartphones has witnessed first-hand how quickly these developments now move. Many analysts foresee truly disruptive changes in education, employment, health, knowledge generation, mobility, and more. But what will AI mean for defense and security? In a new study, HCSS offers a unique perspective on this question. Most studies to date jump quickly from AI to autonomous (mostly weapon) systems. They anticipate future armed forces that mostly resemble today’s, engaging in fairly similar types of activities with a still primarily industrial-kinetic capability bundle that would increasingly be AI-augmented. The authors of this study argue that AI may have a far more transformational impact on defense and security, whereby new incarnations of ‘armed force’ start doing different things in novel ways. The report sketches a much broader option space within which defense and security organizations (DSOs) may wish to invest in successive generations of AI technologies. It suggests that some of the most promising investment opportunities for generating the sustainable security effects that our polities, societies, and economies expect may lie in the realms of prevention and resilience. In those areas, too, any large-scale application of AI will have to result from a preliminary, open-minded (on all sides) public debate on its legal, ethical, and privacy implications. The authors submit, however, that such a debate would be more fruitful than the current heated discussions about ‘killer drones’ or robots. Finally, the study suggests that the advent of artificial super-intelligence (i.e., AI that is superior across the board to human intelligence), which many experts now put firmly within the longer-term planning horizons of our DSOs, presents us with unprecedented risks but also opportunities that we have to start exploring. The report contains an overview of the role that ‘intelligence’ - the computational part of the ability to achieve goals in the world - has played in defense and security throughout human history; a primer on AI (what it is, where it comes from, and where it stands today - in both civilian and military contexts); a discussion of the broad option space it opens up for DSOs; 12 illustrative use cases across that option space; and a set of recommendations aimed especially at small- and medium-sized defense and security organizations.
ASPI: The Strategist, 2024
The Gaza and Ukraine wars are now giving us some insights into the impact emerging technology might have in future wars. In that regard, China envisages fighting future wars using artificial intelligence (AI) and involving increased information-processing capabilities, rapid decision-making, robot swarms and cognitive warfare.
