The Dark Side of Smart Warfare: The Rising Threat of AI Misuse, Weaponisation, and Misinformation


A New Frontier in Conflict


Artificial intelligence already diagnoses cancers, trades stocks, and writes poetry, but its most consequential arena may be war. In 2022, a video appeared online in which Ukrainian President Volodymyr Zelenskyy appeared to urge his troops to surrender: a deepfake, quickly debunked, yet proof that AI-generated fabrications can spread across the globe faster than fact-checkers can respond. As AI becomes more sophisticated, so does the risk of it being used to spread misinformation or wage autonomous warfare. Deepfakes now threaten to destabilise elections, and autonomous weapons, once the realm of science fiction, are being deployed in real-world conflicts.


Washington’s spending plans demonstrate how seriously governments are taking this new front. Just a few weeks ago, on July 4th, U.S. President Donald Trump signed the One Big Beautiful Bill Act, which pushes military spending above $1 trillion for fiscal year 2026. This investment sets a new global record for military spending and raises an obvious question: what does the U.S. government plan to do with such a large sum of money?


As military powers compete to dominate this new technological frontier, it is becoming increasingly important for the public to understand not only the capabilities of AI but also the ethical, strategic, and societal risks it poses. Could autonomous drones make mistakes that lead to civilian casualties? Are governments using AI systems to target specific population groups? These are no longer hypothetical future scenarios; they are real-world concerns that demand immediate attention.


Real-World Applications: From Algorithms to Ammunition 


Across today’s battlefields, AI is no longer an add-on but a decisive element in targeting and logistics. The Pentagon’s long-running Project Maven remains the best-known example: computer-vision models scan hours of drone video, satellite imagery, and radar returns to identify targets, reducing a task that once took analysts days to mere seconds. Since its introduction in 2017, Maven has reportedly been used in Iraq, Yemen, Syria, and Ukraine. Supporters argue that the software improves accuracy and, ultimately, reduces casualties; however, severe limitations have been reported. Results in the Russia-Ukraine war have been mixed, particularly where snow or foliage obscures targets, and in such conditions human analysts continue to outperform the models.
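
Maven’s actual models and thresholds are not public, so the sketch below only illustrates the general pattern it is reported to follow: an object detector flags candidate objects in each frame, and anything below a confidence threshold is routed back to a human analyst. The detector, the threshold, and the file name here are stand-ins chosen purely for illustration.

```python
# Illustrative sketch only: Maven's real models are not public. An off-the-shelf
# torchvision detector stands in for a bespoke military model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pre-trained, general-purpose object detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

CONFIDENCE_FLOOR = 0.5  # hypothetical threshold below which a human must review

def review_frame(path: str):
    """Split detections in one frame into 'auto-flagged' and 'needs human review'."""
    frame = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'
    auto, human = [], []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        record = {"box": box.tolist(), "label": int(label), "score": float(score)}
        (auto if score >= CONFIDENCE_FLOOR else human).append(record)
    return auto, human

auto_flagged, needs_review = review_frame("frame_0001.jpg")  # hypothetical frame
print(f"{len(auto_flagged)} detections auto-flagged, {len(needs_review)} sent to an analyst")
```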


Yet even one seemingly small mistake can be fatal. In 2021, a UN Panel of Experts on Libya reported that a Turkish-made Kargu-2 drone had likely carried out the first fully autonomous attack, engaging retreating troops without explicit human authorisation. Similarly, since the beginning of the Israeli attack on Gaza, the AI-assisted targeting systems used by the IDF have reportedly prioritised the speed and volume of strikes over accurate identification of targets, contributing to higher civilian tolls.


AI also powers less visible but equally important tasks. The U.S. Air Force’s PANDA toolkit analyses vast amounts of aircraft data to predict part failures, making maintenance and supply management simpler, faster, and cheaper.
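
PANDA itself is proprietary, so the sketch below illustrates only the underlying technique, predictive maintenance: a classifier trained on historical, labelled sensor data ranks in-service parts by predicted failure risk so that the riskiest are inspected first. The features and data here are entirely synthetic and invented for illustration.

```python
# Minimal predictive-maintenance sketch with synthetic data; PANDA's real inputs
# and models are not public, so the features below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-part features: flight hours since overhaul, vibration, peak temperature.
X = np.column_stack([
    rng.uniform(0, 2000, n),   # flight hours
    rng.normal(1.0, 0.3, n),   # vibration (arbitrary units)
    rng.normal(80, 10, n),     # peak temperature (°C)
])
# Synthetic ground truth: failure risk rises with hours and vibration.
risk = 0.0004 * X[:, 0] + 0.5 * X[:, 1]
y = (risk + rng.normal(0, 0.2, n) > 1.2).astype(int)  # 1 = part later failed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank parts by predicted failure probability so the riskiest are inspected first.
failure_prob = model.predict_proba(X_test)[:, 1]
for i in np.argsort(failure_prob)[::-1][:5]:
    print(f"part {i}: predicted failure probability {failure_prob[i]:.2f}")
```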


AI in the Military: The Benefits


While AI offers several benefits to militaries, three stand out. First is speed: AI systems can analyse vast amounts of data in seconds, shortening the OODA loop (observe, orient, decide, act) and giving commanders time to think and act before the enemy can respond. Second is efficiency and endurance: unlike people, machines never tire, so they can stand 24-hour watch, fly unmanned patrols, or schedule convoy routes without coffee or sleep, freeing soldiers for higher-level tasks. Finally, there is cost-effectiveness and force protection: a single inexpensive autonomous drone can destroy far costlier equipment, and unmanned systems can enter minefields where sending troops would risk casualties.


AI Misuse: The Downsides


Even with these advantages, AI poses severe risks that require immediate regulation. If the training data is narrow or biased, an AI system could not only misidentify a civilian as a combatant (a problem the International Committee of the Red Cross now lists among the foremost humanitarian risks of military AI) but also disproportionately target specific groups of people on the basis of race or gender. Greater autonomy can also erode human control: a misinterpreted radar return might trigger a pre-programmed response before humans can intervene, considerably raising the risk of unintentional escalation. AI systems are also vulnerable to cyberattacks in a way traditional warfare never was: an opponent could compromise the software to steal, manipulate, or disrupt military information and plans.
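
The bias problem is easy to reproduce outside any military setting. The toy example below, built on purely synthetic data, trains a classifier on a dataset in which one group is barely represented and follows a slightly different pattern; the model then misses far more true cases in that group, the same failure mode the ICRC warns about.

```python
# Toy demonstration of how an under-represented group suffers more errors.
# Synthetic data only; no real-world dataset is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Two-feature synthetic data; 'shift' moves the class boundary for this group."""
    X = rng.normal(0, 1, (n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate each group separately on fresh data: recall drops sharply for group B.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: recall {recall_score(y_test, model.predict(X_test)):.2f}")
```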


Glossy recruitment videos that portray AI as a friendly 'guardian angel' can mask the fact that the same tools intensify surveillance, reinforce power imbalances, and make controversial decisions appear inevitable because 'the system decided'. While the Geneva Conventions describe civilians as “protected persons”, accountability becomes murky when a potentially fatal decision emerges from software.


Silicon Valley Meets the Pentagon: Tech Executives as National Security Advisors 


Many of these capabilities are arriving via a tight circle of tech elites: serving and former Silicon Valley executives now embedded within the U.S. Department of Defense. Meta’s Chief Technology Officer (CTO), Andrew Bosworth; Palantir’s CTO, Shyam Sankar; and OpenAI’s Kevin Weil, Chief Product Officer, and Bob McGrew, former Chief Research Officer, have been sworn in as Army Reserve lieutenant colonels. They are now part of the “Detachment 201” programme, an effort to recruit tech executives who can advise on AI procurement and deployment.

Supporters argue that the programme accelerates innovation, but critics contend that it blurs the boundary between public security and private profit, particularly given that many of these companies have previously been embroiled in controversy over selling user data.


Future Perspectives


AI is already transforming the nature of conflict in ways that previous generations could only have imagined. However, these same algorithms can quickly amplify errors, embed hidden biases, and hand adversaries new forms of leverage. As AI becomes more deeply embedded in the kill chain, four imperatives will shape both the information sphere and the future of warfare.


First, codify red lines now—before machines write them for us. Whether through a binding treaty or coordinated national policies, governments must establish clear rules regarding fully autonomous weapons, acceptable levels of human involvement, and the protection of civilians and critical infrastructure. Once AI-enabled systems have been deployed on a large scale, it will be far harder to roll them back than it would be to set limits beforehand. 


Second, make transparency the default, not the exception. While militaries will never publish their source code, they can release aggregate data on system accuracy, assessments of civilian harm, and corrective measures. Independent audits performed by trusted third parties with security clearances could help verify that algorithms meet agreed safety thresholds without exposing sensitive details.


Third, broaden the coalition shaping AI norms. Defence ministries and tech giants dominate the current debate. Including smaller states, humanitarian organisations, and affected communities would bring blind spots to light and lend legitimacy to whatever guardrails emerge. Robust conflict-of-interest rules and public scrutiny must be balanced against the drive for innovation in warfare.


Finally, invest as much in resilience as in raw capability. This means hardening AI systems against spoofing and hacking, designing modes that default to human control in uncertain situations, and training operators to recognise and resist automation bias.


If these principles guide policy, the next generation of smart weapons could save lives and shorten wars. However, if they are ignored, the same technology could drag humanity into conflicts where accountability is diffuse, escalation is automatic, and truth itself becomes just another casualty. The window of opportunity is still open, but it is closing fast.
