
BusinessDay: War machines will decide who lives and who dies


By Johan Steyn, 2 November 2021


In the past 10 years, digital and artificial intelligence (AI) technologies have made significant progress, affecting a variety of industries, including healthcare, finance, travel and employment. Military and law enforcement organisations are stepping up their efforts in AI research and development.


Once activated and deployed, fully autonomous weapons systems are capable of detecting and attacking human targets. Gunpowder and nuclear weapons are regarded as the first two revolutions in warfare; lethal autonomous weapons systems (LAWS) are often considered the third.


There is still no universally agreed definition of LAWS. In 2017, the first group of governmental experts on LAWS was established under the UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.


The rush to develop military weapons has long fuelled technological innovation. AI is quickly establishing itself as the arsenal of choice worldwide and a significant factor on the battlefield of the near future, and autonomous weapons systems have grown in prominence in recent years.


Semi-autonomous weapons, which rely on automation for some components of their systems but require a human to approve strike decisions, are already in extensive use. A fully autonomous weapons system, however, can be deployed anywhere, respond to changes in its environment and execute attacks without the involvement of military personnel.


An intelligent machine that is capable of performing any projected warfare task without the involvement or intervention of humans — using only the interaction of embedded sensors, computer programming, and algorithms with the surrounding environment — is fast becoming a reality that cannot be ignored.


Because such systems rely on machine learning, what happens if malware is introduced or the data they learn from is manipulated? When it comes to security, it is difficult to overlook the reality that connected devices increase the likelihood of cyberattacks by foreign governments or terrorist groups. If AI goes to war with itself, the cybersecurity implications will pose significant threats to humanity's long-term survival.


The militarisation of AI is unavoidable as governments bolster their efforts to gain a competitive advantage in science and technology. Building autonomous weapons systems is one thing; using them in algorithmic combat against other nations or individual targets is quite another. It is important to think about what the future of war will look like.


There is little doubt that autonomous weapons systems are here to stay. The question is whether AI will become the driving force behind our strategy for human survival and security.

This creates challenging security considerations for decision-makers in every country, and for the rest of humanity. The ethical, political and legal debate will centre on the decision to use force and take a human life autonomously.


Some may argue that human soldiers make mistakes due to fatigue, confusion or unclear instructions. We do not always hit what we aim for. AI weapons will almost always hit their targets, but they are vulnerable to bias and software quality issues.


Robots will undoubtedly form part of the future armed forces with increasing involvement in deciding who to kill or who to spare. We cannot assume that algorithms will execute their decisions with altruism in mind.


• Steyn is the chair of the special interest group on artificial intelligence and robotics with the Institute of Information Technology Professionals of SA. He writes in his personal capacity.
