In an era defined by rapid technological advancements and escalating global conflicts, the integration of artificial intelligence (AI) into warfare has emerged as both a revolutionary opportunity and a profound ethical challenge. As AI and machine learning technologies evolve, they are reshaping numerous aspects of armed conflict—from the development of autonomous weapon systems to their application in strategic decision-making. These advancements promise to transform the conduct of warfare, offering benefits such as increased precision in targeting and enhanced decision-making capabilities. However, they also raise critical ethical and legal questions, particularly concerning the preservation of human control and accountability, and the potential violation of international humanitarian principles such as distinction and proportionality.
AI's role in warfare spans a range of applications, including the automation of military hardware, cyber warfare capabilities, and information warfare strategies. Autonomous weapon systems (AWS), capable of selecting and engaging targets without human intervention, are at the forefront of these concerns. The potential for these systems to operate with minimal human oversight poses significant risks, including unintended consequences and violations of international humanitarian law (IHL). Moreover, AI-driven systems in cyber operations and information warfare introduce new dimensions of conflict, potentially leading to escalations that could severely impact civilian populations.
This blog aims to explore the ethical and legal challenges posed by AI in warfare by examining its practical applications in three key domains: autonomous weapon systems, AI in cyber and information warfare, and the changing nature of decision-making in armed conflict. Following this analysis, we will evaluate the compliance of these technologies with international humanitarian law and propose regulatory measures at the international level.
Examining AI-Enabled Weapons and Autonomous Decision-Making
Lethal Autonomous Weapon Systems (LAWS)
The definition and use of lethal autonomous weapon systems (LAWS) are among the most contentious issues in modern warfare. Despite the lack of a universally accepted definition at the state and international levels, it is clear that many nations are developing and deploying these systems, which could reshape the future of warfare. The International Committee of the Red Cross (ICRC) offers a comprehensive definition: "After initial activation or launch by a person, an autonomous weapon system self-initiates or triggers a strike in response to information from the environment received through sensors and based on a generalized target profile."
The autonomy of weapon systems can be categorized into three levels: semi-autonomous, supervised autonomous, and fully autonomous operations. In semi-autonomous operations, machines perform tasks but require human approval to proceed. Supervised autonomous operations allow machines to continue tasks until stopped by a human operator. Fully autonomous operations, however, are beyond human control once initiated. While autonomous systems are used in navigation, intelligence, surveillance, and reconnaissance, this discussion focuses on weapons that autonomously select and engage targets.
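To make these levels concrete, the sketch below models them as a simple gating rule around an engagement decision. It is a conceptual illustration only: the class and function names are invented here and do not correspond to any real system.

```python
from enum import Enum, auto


class AutonomyLevel(Enum):
    SEMI_AUTONOMOUS = auto()        # human approval required before acting
    SUPERVISED_AUTONOMOUS = auto()  # acts on its own, but a human can abort
    FULLY_AUTONOMOUS = auto()       # no human intervention once initiated


def may_proceed(level: AutonomyLevel, human_approved: bool, human_abort: bool) -> bool:
    """Illustrative gate: may the system carry out an engagement?"""
    if level is AutonomyLevel.SEMI_AUTONOMOUS:
        # Human-in-the-loop: nothing happens without explicit approval.
        return human_approved
    if level is AutonomyLevel.SUPERVISED_AUTONOMOUS:
        # Human-on-the-loop: proceeds unless an operator intervenes.
        return not human_abort
    # Human-out-of-the-loop: proceeds regardless of any operator input,
    # which is exactly the property that drives the concerns discussed below.
    return True
```

The important point is the last branch: once a system is fully autonomous, no input at the human interface can stop it, which is why this blog treats human control as the central regulatory question.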
According to Autonomous Weapons Watch, 17 weapon systems can operate autonomously, with China, Germany, Israel, South Korea, Russia, Turkey, Ukraine, and the USA leading their development. Thirteen of these systems are unmanned aerial systems, one is an unmanned surface system, and three are unmanned ground systems. Some systems, like Anduril's Altius 600, explicitly advertise their autonomous capabilities, while others, such as Turkey's Kargu-2, have downplayed their autonomy in response to international concerns.
Despite the lack of consensus on regulation, the UN General Assembly took a historic step on December 22, 2023, by adopting its first resolution on LAWS, with 152 countries in favour. The resolution affirms that international humanitarian law applies to LAWS and supports the efforts of the UN Group of Governmental Experts to introduce restrictions or prohibitions on such systems. However, more comprehensive regulations are needed.
A two-pronged approach to LAWS regulation, as proposed by several countries, is necessary: prohibiting systems without human control over target selection and engagement, and regulating other systems in compliance with IHL. The Human Rights Council’s resolution of October 7, 2022, highlights the risks arising from reliance on data sets, algorithm-based programming, and machine learning, which may perpetuate structural discrimination and marginalization and introduce unpredictability. These practices conflict with IHL principles and pose challenges to state responsibility and human accountability.
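To illustrate the mechanism behind that warning, the toy calculation below compares false-positive rates of a hypothetical identification model across two notional population groups. All figures are invented for illustration and do not come from any real system or data set.

```python
# Toy illustration: error rates of a hypothetical identification model,
# evaluated separately on two population groups. All figures are invented.
evaluations = {
    "well_represented_group": {"false_positives": 12, "true_negatives": 988},
    "under_represented_group": {"false_positives": 90, "true_negatives": 910},
}

for group, counts in evaluations.items():
    fp, tn = counts["false_positives"], counts["true_negatives"]
    false_positive_rate = fp / (fp + tn)  # share of harmless cases wrongly flagged
    print(f"{group}: false-positive rate = {false_positive_rate:.1%}")

# Prints 1.2% versus 9.0%: the group the training data under-represents is
# wrongly flagged several times more often. In a targeting or detention
# context, that gap becomes a systematically unequal risk of harm.
```

The numbers are deliberately simple, but the pattern they show is how an unrepresentative data set can turn into the structural discrimination the resolution warns about.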
AI in Cyber Warfare
Autonomy in the cyber domain is not new; automation has long been central to cyber defence, from anti-malware programs to bots conducting Distributed Denial of Service (DDoS) attacks. Cyberweapons often operate autonomously, as exemplified by the Stuxnet virus. AI and machine learning are expected to further transform cyber strategies, enabling capabilities that can autonomously identify and exploit vulnerabilities or counteract threats. Cloudflare's 2024 report warns of the growing prevalence of AI-driven cyberattacks, with AI-assisted hackers capable of exploiting vulnerabilities within 22 minutes of a proof of concept being published.
These advancements could expand the scale and intensity of cyberattacks, with some systems potentially qualifying as "digital autonomous weapons," raising concerns similar to those associated with physical autonomous weapons. AI and machine learning are also increasingly applied in information warfare, enhancing the creation and dissemination of disinformation and misinformation. AI-driven systems can generate convincing fake content, influencing public opinion and decision-making with serious implications for civilians. The proliferation of digital disinformation could lead to wrongful arrests, ill-treatment, discrimination, and even attacks on civilians.
AI in Decision-Making Processes
The integration of AI and machine learning into decision-making processes in armed conflict represents one of the most transformative developments in modern warfare. These technologies enable the extensive collection and analysis of diverse data sources to identify individuals or objects, assess patterns of behaviour, recommend military strategies, and predict future actions or situations. Such systems extend traditional intelligence, surveillance, and reconnaissance capabilities by automating the processing of large data sets and either providing recommendations to human operators or autonomously initiating actions based on their analyses.
AI-driven decision-making systems can influence critical decisions, such as targeting, detention, military strategies, and even the potential use of nuclear weapons. However, these systems also introduce significant risks, particularly given the limitations of current AI technologies, such as unpredictability, lack of transparency, and inherent biases. From a humanitarian perspective, the deployment of AI in conflict must be scrutinized to ensure compliance with IHL. Decisions impacted by AI—especially those affecting human life or property—must adhere to IHL rules governing hostilities.
The ethical and legal concerns extend to detention decisions, mirroring debates in the civilian sector about human oversight and the accuracy of risk assessment algorithms. Moreover, AI tools could personalize warfare by integrating personally identifiable information from various sources to form algorithmic assessments of individuals, leading to targeted violence, wrongful detention, identity theft, and other humanitarian risks.
Evaluating Ethical Concerns and International Norms
While inter-state efforts to limit AI use, particularly within the framework of LAWS, have yet to conclude, international law remains relevant in this area. Article 36 of Additional Protocol I requires states to assess the legality of new weapons, means, or methods of warfare before deployment. This obligation ensures that new technologies, including LAWS, comply with IHL principles. The Martens Clause further emphasizes the need to adhere to humanity and public conscience in warfare, complementing Article 36 by embedding a moral dimension into weapon assessments.
The fundamental principles of IHL—distinction, proportionality, and precautions in attack—are directed toward human actors, who bear responsibility for adhering to and implementing these norms. Consequently, accountability for violations rests with humans, not machines or algorithms. Combatants must make nuanced, context-specific judgments related to these principles, ensuring that AI systems used in warfare do not undermine human responsibility.
Discussions under the Convention on Certain Conventional Weapons have affirmed the need for "human responsibility" in deploying weapon systems and using force. This consensus among states, international organizations, and civil society groups underscores the importance of maintaining human control to ensure compliance with IHL and ethical standards.
Recommendations for Regulation
To address the challenges posed by AI in warfare, we propose the following recommendations:
Prohibit Autonomous Weapon Systems Without Human Control: Weapon systems that do not allow for sufficient human oversight in target selection and engagement should be prohibited.
Establish Positive Obligations for Human Control: For systems that are not prohibited, establish clear obligations for human control over weapon parameters (e.g., type of target), the environment of use, and human-machine interaction during use (a conceptual sketch of such parameters follows this list).
Ensure Human Command and Control: Any use of weapon systems with autonomous functionalities must be guided and overseen by a responsible chain of human command and control.
Preserve Human Judgment in the Use of Force: Actions that may result in the loss of human life through the use of force should remain under human intent and judgment. Once a human initiates a sequence of actions intended to end with lethal force, autonomous systems may complete the sequence only with ongoing human oversight.
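As a purely hypothetical sketch of the positive obligations described above, the snippet below encodes human-set use parameters (target type, area, and duration) and ongoing oversight as preconditions that must all hold before an autonomous function may act. The names and fields are invented for illustration and do not reflect any actual system or standard.

```python
from dataclasses import dataclass


@dataclass
class UseParameters:
    """Hypothetical operational constraints fixed in advance by a human commander."""
    permitted_target_types: set  # e.g. {"materiel"}
    permitted_area: str          # geographic boundary of use
    max_duration_minutes: int    # time window for autonomous operation


def may_engage(params: UseParameters, target_type: str, current_area: str,
               minutes_elapsed: int, human_oversight_active: bool) -> bool:
    """Every human-set constraint, plus ongoing oversight, must hold."""
    return (target_type in params.permitted_target_types
            and current_area == params.permitted_area
            and minutes_elapsed <= params.max_duration_minutes
            and human_oversight_active)
```

The point of the sketch is that human control is expressed as explicit, auditable parameters set in advance, combined with a live oversight condition, rather than left implicit in the system's behaviour.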
Finally, we must assess whether existing IHL and UN General Assembly Resolution 78/241 are sufficient, or whether further development of international law is required to address these evolving challenges.
Conclusion
The integration of AI into modern warfare presents both opportunities and challenges. While AI has the potential to revolutionize warfare by enhancing precision and decision-making, it also raises significant ethical and legal concerns. Ensuring that these technologies comply with international humanitarian law and ethical standards is crucial to preserving human dignity and accountability in armed conflict. As AI continues to evolve, the international community must work together to develop robust regulations that balance innovation with the protection of human rights.