
BusinessDay: The computer says ‘no’

Across sectors, we are giving more power to AI algorithms to make decisions — but what happens if we lose control over these ‘smart’ technologies?


By Johan Steyn: 12 July 2022


Little Britain is a hilarious television comedy that many readers may be familiar with. Written and performed by David Walliams and Matt Lucas, it consists of a series of satirical sketches involving British people from various walks of life.


Walliams plays Carol Beer, a bank clerk who ferociously types clients’ requests into her computer, only to say without emotion, “The computer says no.” You have to see it to appreciate how funny it is.


We are in an era where many business leaders are working on implementing the new breed of smart technologies such as artificial intelligence (AI), digital assistants and cognitive automation. One of the vast array of benefits that these technologies offer is the automation of decision-making.


For example, a client sends an email requesting a quote on a product or service. A computer vision program recognises the text and extracts the data. An automated workflow program verifies the identity of the client and the products requested. The client’s standing is checked against account status and discount agreements, while internal stock levels are confirmed. An automated reply containing the quote is then sent to the client.


Almost all back-office processes could be automated, with algorithms making decisions according to preset parameters. It is one thing to task technology with simple, low-impact decisions. Unleashing an algorithm to make more consequential decisions in finance, law enforcement, recruitment, healthcare or even warfare is another thing altogether.


Automated trading systems have been used in the stock market to generate buy and sell orders. Banks use the technology to create personalised offers tailored to each customer’s preferred communication channels. Predictive behavioural analytics can even determine the ideal time of day to send a client a message.


In several countries automated systems are being used to assist human judges in the legal arena. In pretrial detention and sentencing decisions, automated risk-assessment instruments forecast the likelihood of reoffending. Even whether an offender is granted parole may be determined by a computer program.


Sensors, cameras, online transaction records and social media have driven a considerable expansion of surveillance in both government and business. Recently there has been a huge shift in the ability to monitor entire populations rather than just a few individuals. Algorithms can be trained to restrict entry to a building, or to identify a suspect for arrest, on the basis of facial recognition.


In armed conflict, automated decision-making is already a reality. Typically, a drone provides data to a field commander, who then decides whether to launch a missile. But what if the drone could make the kill decision without human approval? The UN held a symposium in Geneva in December 2021 to discuss the relationship between humans and technology in modern warfare. After the reported first autonomous drone strike in Libya, the gathering addressed the growing reality of AI-enabled combat and autonomous weapons systems.


As we give algorithms more and more power to make decisions that humans used to make, what happens if we can no longer remember how to make those decisions ourselves, or lose control of the AI altogether?


We can laugh at the hysterical Carol Beer character, but what if the computer says “no” when we desperately need finance or life-saving healthcare?
