
From drones to algorithms: the quiet rise of autonomous warfare

Why AI-enabled weapons, from drone swarms to algorithmic kill lists, should worry us far more than science-fiction killer robots.

I write about issues of interest that I want to bring to the reader’s attention. While my main work is in Artificial Intelligence and technology, I also cover politics, education, and the future of our children.


When most people hear “autonomous weapons”, they still picture a metal monster from a Hollywood film. The reality emerging on today’s battlefields is quieter and far more unsettling. In Ukraine, analysts now describe the conflict as the first high-intensity drone war, with AI-assisted drones, loitering munitions and interceptor systems used every day. Recent reporting shows Ukrainian and Russian forces experimenting with AI-guided drones that can track targets visually and continue flying after losing contact with human operators, and major newspapers already talk about “AI-powered drone swarms” entering service.


At the same time, human rights organisations, security think tanks and legal scholars are warning that autonomy is creeping into targeting, decision-support and even nuclear risk. The future of warfare will not arrive as a single killer robot; it will seep in through code.


CONTEXT AND BACKGROUND

Over the past year, there has been an explosion of research on autonomous weapons, swarms and AI-driven warfare. Human Rights Watch’s 2025 report “A Hazard to Human Rights” argues that as more functions are delegated to machines, “meaningful human control” over life-and-death decisions is eroded and accountability gaps widen. The Future of Life Institute’s autonomous weapons project documents real systems already close to lethal autonomy: loitering munitions that can search for radar signals and attack on their own, armed ground robots with autonomous navigation, and increasingly intelligent drones in the air.


On the policy side, think tanks and diplomats are scrambling to keep up. The Arms Control Association has warned that great-power rivalry is slowing progress towards binding limits on lethal autonomy, even as deployment accelerates. A 2025 SIPRI study outlines concrete options for a multilateral regime: bans on fully autonomous targeting of humans, strict requirements for human control over critical functions, and transparency obligations for states developing military AI. Trends Research & Advisory describes a “new era of military AI” and stresses the need to govern these systems before they become entrenched doctrines rather than experimental tools. The window for strict rules is still open, but narrowing.


INSIGHT AND ANALYSIS

Alongside autonomous drones, several other AI-enabled systems are reshaping how war works. The first is algorithmic targeting and “kill lists”. AI is increasingly used to sift satellite imagery, intercepted communications and social media data to generate targeting recommendations and prioritise strikes. In theory, this promises speed and precision. In practice, lethal decisions can end up leaning on opaque models trained on biased or incomplete data, with very little time for humans to interrogate errors. Legal analysts at West Point’s Lieber Institute have argued that when an autonomous or semi-autonomous system misidentifies a target, the law of armed conflict struggles to assign responsibility clearly.


Then there are loitering munitions and swarms. So-called “kamikaze drones” already exist that can patrol an area, identify signatures such as radar emissions or vehicle types, and dive onto a target without fresh authorisation each time. Add swarming algorithms, and you move from one smart munition to dozens or hundreds of cheap systems coordinating attacks, overwhelming air defences or shielding armoured units. Recent work from the Atlantic Council and reporting in outlets like the Wall Street Journal describe tests of AI-powered swarms and highlight a shift from a few exquisite platforms to intelligent mass.


Autonomy is also advancing in defensive and support systems. AI-driven air and missile defence platforms use machine vision to detect, classify and engage incoming threats, including other drones, in fractions of a second. Humans might set high-level parameters, but in the heat of battle, machines effectively decide what to shoot down. Armed unmanned ground vehicles with autonomous navigation and target recognition are being trialled for patrols and urban combat roles. Uncrewed surface and underwater vessels are acquiring more autonomy in navigation and mine-hunting, and in some prototypes, offensive roles.


In parallel, AI is increasingly used in cyber operations, from offensive intrusion and large-scale phishing to anomaly detection, and in command decision-support systems that fuse sensor data, rank threats and suggest courses of action. Nuclear policy scholars warn that such tools can compress crisis timelines and quietly bias leaders towards escalation.


IMPLICATIONS

For South Africa, Africa and the wider Global South, this is not a theoretical debate. As costs fall and arms exports grow, AI-enabled weapons will inevitably spill into regional conflicts and domestic security contexts. An African-focused study from ISS Futures Africa has already asked whether the continent will become the next frontier for autonomous weapons, warning of weak oversight and high risks for civilians. Yet many of our states have limited voice in the Geneva and New York forums where global norms are being discussed. We risk becoming buyers, testing grounds and victims of other people’s military AI, while remaining rule-takers.


This raises simple but uncomfortable questions. Where, in this emerging ecosystem, does meaningful human control truly sit? If an algorithmically generated kill list turns out to be wrong, who should stand in the dock: the commander who clicked “confirm”, the programmer, the vendor, or no one at all? What red lines do we want around autonomous functions in land, sea, air and cyber operations, and how can we write those into international and domestic law? Reports from Human Rights Watch, SIPRI, Trends Research and others converge on a clear message: we need prohibitions on certain uses, strict limits and transparency for everything else, and a serious conversation about responsibility.


CLOSING TAKEAWAY

Over the next decade, warfare will be quietly but profoundly reshaped by algorithms: in the drones we see on social media, and in the targeting software, defensive systems and command centres we never see. The danger is not only that machines might one day decide to kill entirely on their own, but that we gradually accept thinner human judgment in life-and-death decisions, until accountability dissolves into code and complexity.


For countries in the Global South, the choice is stark. We can remain passive markets for weapons designed elsewhere, or we can insist on red lines, demand a seat at the negotiating table and help shape the rules governing military AI. If we fail to do so, others will define the future of war in our name – and our children may live with the consequences.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
