Cyber Operations, Sovereignty, and AI: A Legal Order Under Pressure
- Isabel Rodenas
- Jan 13
- 6 min read
In November 2023, a cyberattack shut down emergency rooms at numerous hospitals across three U.S. states. No missiles were fired. No bombs fell. Yet ambulances were diverted to other cities, lives were put at risk, and for a long time professionals did not know what sensitive patient data had been compromised. This scenario illustrates the growing risk facing today's policymakers, lawyers, and IT professionals: operations in cyberspace can cause real-world harm. One could argue that such an attack amounts to warfare, yet existing legal frameworks do not yet offer a clear way to classify it. At the heart of modern strategic and legal debates is the question: when does a cyber operation constitute a “use of force” under international law, and who decides what that means?
Although international law applies to state conduct in cyberspace, key concepts such as the use of force, sovereignty, attribution, and even self-defence remain contested in the context of cyberwarfare. The accelerating role of artificial intelligence compounds these ambiguities, making cyberattacks more sophisticated and their legal analysis and consequences more difficult to assess.
How International Law Governs State Conduct in Cyberspace
The United Nations Charter is the cornerstone of the legal framework governing cyberspace, especially Article 2(4), which prohibits states from threatening or using force unless it is justified by self-defence under Article 51 or authorised by the UN Security Council under Chapter VII. These prohibitions apply regardless of the means used (physical or cyber) because the Charter does not distinguish by weapon type. Beyond the Charter, the broader body of international law also applies to cyberspace: in November 2024, the European Council approved a common understanding among all member states that international law applies in cyberspace.
The law relevant to cyber operations can be grouped into three core domains. The first is jus ad bellum, the law governing the resort to force: when may a state lawfully use force or act in self-defence? The second is state responsibility, which concerns the legal consequences arising from a state's wrongful cyber acts. The third is jus in bello, the application of International Humanitarian Law, that is, the standards of conduct during armed conflict.
The most significant legal challenge in cyberwarfare is determining when a digital operation constitutes a use of force under Article 2(4). This threshold is critical because it determines whether a state can respond with force or invoke self-defence. Under the prevailing effects-based approach, a cyber operation constitutes a use of force if its scale and effects are comparable to those of a traditional kinetic attack, resulting in death, physical destruction, or significant damage. Many states align with this approach, judging each incident on its particular circumstances.
Unlike wars fought with tanks and artillery, however, cyber operations can cause serious harm without leaving any visible physical destruction. This characteristic challenges conventional interpretations and leaves room for differing legal views: some states emphasise effects comparable to kinetic force, while others are reluctant to expand the concept. The uncertainty matters because a state that cannot confidently establish it has been subjected to a use of force or an armed attack may be unable to lawfully exercise self-defence under Article 51. The result is that adversaries can exploit legal ambiguity, conducting harmful operations while remaining below established legal thresholds.
The Cyber Grey Zone, State Responsibility, and IHL
Even if a cyber operation does not reach the threshold of a use of force, it may still violate another state’s sovereignty or constitute an unlawful intervention in its affairs. In cyberspace, sovereignty means that states are generally expected to respect the territorial integrity and political independence of other states, including in their digital conduct. However, there is no universally accepted definition of what constitutes a sovereignty violation in cyber contexts. Some argue that any unauthorised cross-border cyber intrusion violates sovereignty, while others suggest that only operations with severe consequences do so. These divergent interpretations create a grey zone for state behaviour, ranging from routine espionage and influence operations to highly disruptive campaigns that affect election results. This legal ambiguity creates both risks and incentives: states and non-state actors may calibrate their operations precisely to stay below thresholds that trigger clear legal consequences, even if the effects on the target society are significant.
Any legal analysis of cyberwarfare must first establish the identity of the actor, yet in cyber settings, attribution is notoriously difficult. Attackers can route through multiple countries, use compromised infrastructure, or leverage sophisticated obfuscation techniques. However, legal attribution is a more demanding question than technical attribution, as it involves determining whether conduct can be legally ascribed to a state for the purposes of establishing responsibility and potential countermeasures. This distinction is vital because the legality of responses, including countermeasures that could be harmful, depends on whether the offending action can be legally attributed to a state under international law. Without this link, responses risk violating the same legal norms they claim to enforce.
If a cyber operation occurs during an armed conflict, International Humanitarian Law (IHL) applies. IHL governs conduct during hostilities and requires compliance with principles such as distinction (military targets versus civilians), proportionality (avoiding excessive civilian harm), and taking precautions when planning attacks. However, applying IHL to cyber operations presents unique challenges because digital tools often interact with dual-use infrastructure—networks that serve both civilian and military functions. Furthermore, the effects of an attack may cascade unpredictably, making assessments of proportionality and precaution more difficult. For instance, a cyber operation intended to disrupt military command systems could affect civilian communication networks.
The AI Factor
Artificial intelligence amplifies both the power of cyber operations and the legal complexity surrounding them. AI can not only accelerate military decision-making but also enable cyberattacks at unprecedented speed, shrinking the time available for human legal and policy analysis. AI-driven tools can produce deceptive content that is nearly indistinguishable from authentic material, complicating post-incident attribution and evidence evaluation. Furthermore, AI-enabled autonomous defensive systems can act without real-time human oversight, raising questions about responsibility, intent, and control. For example, if a defensive algorithm strikes back (or adapts autonomously), what level of human direction is required to meet the legal standards of necessity and proportionality in self-defence?
These challenges highlight a broader issue: legal frameworks governing state conduct are not flexible enough to account for rapid technological evolution. As AI becomes embedded in cyber offence and defence, states and international bodies must clarify how traditional legal principles apply to autonomous or semi-autonomous decision-making in conflict environments.
Toward Better Governance: Practical Policy Pathways
Fully updating international law for cyberspace will likely take years. In the meantime, several short-term measures can significantly strengthen responsible state behaviour and reduce strategic ambiguity. One practical starting point is for governments to publish clearer national positions on how core legal concepts, especially the use of force, sovereignty, and the armed attack threshold, apply to cyber operations. While these statements do not create new law, they help consolidate expectations, enable more predictable signalling in crises, and narrow interpretive gaps that adversaries can exploit.
A second step could be to professionalise shared attribution methods. In cyber conflict, the distinction between technical, legal, and political attribution is often where strategy and legitimacy are won or lost. Developing common evidentiary standards, coordinated public messaging, and shared legal frameworks can improve credibility and coordination when responding to malicious cyber operations. These measures can also reduce the likelihood of wrongful countermeasures based on uncertain facts.
A third priority is building norms to protect critical infrastructure, even when formal treaties are politically unattainable. UN-endorsed norms of responsible state behaviour articulate the expectation that states should protect vital infrastructure from ICT threats and refrain from damaging it. Translating these expectations into operational commitments, especially for essential services such as healthcare, energy, and emergency response, can establish clearer boundaries and facilitate collective responses in the event of violations.
Finally, the AI dimension increasingly requires governance touchpoints that make cyber capabilities more auditable and legally accountable. Requirements for human oversight, technical record-keeping, and logging are directly relevant when AI is used in defensive or response systems, as they improve traceability and support after-action reviews when decisions carry legal consequences. Emerging governance frameworks and regulations, such as the EU AI Act's human oversight and logging obligations for high-risk systems and the NIST AI Risk Management Framework's emphasis on accountability and transparency, offer concrete design and compliance guidelines that can be adapted to cybersecurity.
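To make the oversight and logging requirement concrete, the sketch below illustrates, in simplified form, one way an automated defensive system could be made auditable: every proposed action is written to a tamper-evident (hash-chained) log and executed only after an explicit, recorded human decision. The names used here (ProposedAction, AuditLog, require_human_approval) are hypothetical and are not drawn from the EU AI Act, the NIST framework, or any real product; this is a minimal illustration of the design principle, not a compliance implementation.

```python
"""Illustrative sketch (hypothetical): hash-chained audit logging and a
human-approval gate for an automated cyber-defence action."""

import hashlib
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class ProposedAction:
    """A defensive action suggested by an automated system."""
    action_type: str      # e.g. "block_ip" or "isolate_host"
    target: str           # affected address or asset
    rationale: str        # system's explanation for proposing the action
    model_version: str    # which model or ruleset produced the proposal


@dataclass
class AuditLog:
    """Append-only log in which each entry is chained to the previous one,
    so later tampering or deletion is detectable during review."""
    entries: list = field(default_factory=list)
    _last_hash: str = "0" * 64

    def record(self, event: str, payload: dict) -> None:
        entry = {
            "timestamp": time.time(),
            "event": event,
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        # Hash the entry together with the previous hash to form the chain.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)


def require_human_approval(action: ProposedAction, log: AuditLog,
                           approver: str, approved: bool) -> bool:
    """Record both the proposal and the human decision; only an explicit
    'approved' decision allows execution to proceed."""
    log.record("action_proposed", asdict(action))
    log.record("human_decision", {"approver": approver, "approved": approved})
    return approved


if __name__ == "__main__":
    log = AuditLog()
    proposal = ProposedAction("block_ip", "203.0.113.7",
                              "anomalous outbound traffic", "detector-v2")
    # The duty officer's decision is captured alongside the system's proposal.
    if require_human_approval(proposal, log, approver="duty_officer", approved=True):
        log.record("action_executed", {"target": proposal.target})
    print(f"{len(log.entries)} audit entries recorded")
```

Even a simple pattern like this supports after-action review: the chained hashes make gaps or alterations in the record detectable, and the explicit approval step documents the human involvement that legal assessments of necessity, proportionality, and responsibility would rely on.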
These measures involve real trade-offs. For example, transparency can improve legal clarity and coalition alignment, but it may also expose sensitive capabilities and methods. Restraint can reduce the risk of escalation but may also encourage adversaries to act more aggressively below contested thresholds. Navigating these tensions requires sustained diplomatic engagement, credible technical cooperation, and an iterative approach to legal interpretation that keeps pace with the evolution of cyber and AI capabilities in practice. Therefore, the strategic challenge for states and international institutions is not only to force cyber operations into existing legal categories but also to develop interpretive practices and governance mechanisms that can withstand rapid technological change. Those that invest now in clear legal positions, credible attribution, crisis management channels, protections for critical civilian infrastructure, and accountable AI deployment practices will disproportionately influence the emerging norms that determine whether cyber conflict remains contained or escalates into wider confrontation.
