By Johan Steyn, 27 October 2021
When I speak on artificial intelligence (AI) at conferences, I usually start by showing the audience three pictures. The first is of computer code, the second is the evil robot from the movie Terminator, and lastly, a picture of a cute-looking, friendly robot.
I ask the audience to vote for the picture that best matches their idea of AI. The overwhelming response, every time, is the evil robot. It is then my task to explain to the delegates that the image of computer code is, in fact, the most accurate answer.
Most people tend to fear new forms of technology because of the hype they read in the media and how it is portrayed in films. We tend to fear things we do not understand and we think threaten our existence. The most common fear around AI is that robots will take our jobs away.
Human evolutionary history shows how we tend to make sense of things we do not understand by attributing human traits to them. Ancient humans looked at the stars and named them after the gods. They feared approaching thunderstorms or earthquakes and named them after their deities too.
Archaeological discoveries show that, for thousands of years, humans fashioned small human-like figures from clay and other materials, depicting fertility or war gods. We tend to remake things we do not understand in our own image. This attribution of human characteristics, emotions, or intentions to nonhuman entities is a natural tendency of human psychology, known as anthropomorphism. The term is derived from two Greek words: anthropos, meaning human, and morphe, meaning form or shape.
The tendency to anthropomorphize AI's capabilities and inventions is well known. Much has been written about the public's anthropomorphic attitudes, including some of the ethical implications, particularly in the context of social robots and their interaction with humans. Yet although anthropomorphism permeates AI research (it shapes the very terminology used by computer scientists, designers, and programmers), its consequences for epistemology and ethics have received less attention.
Recent studies have found that anthropomorphism can be triggered by situational social cues, and that psychiatric conditions or brain injury can influence whether someone displays it. A study of subjects with amygdala damage showed that, although their ability to anthropomorphize remained intact in the absence of explicit social cues, their impaired capacity to notice and process socially salient information reduced their spontaneous anthropomorphism of non-human stimuli, such as inanimate objects and technology. Amygdala activity has indeed been linked to people's predisposition to anthropomorphize nonsocial phenomena, such as objects and animals.
It is worth noting that although we tend to humanize technology, and AI in particular, and though it was invented by people, its inner workings remain fundamentally opaque to the public. And while the inclination to attribute human-like features and motivations to technological devices is common, AI anthropomorphism comes in numerous flavors.
As a society, and especially as leaders in business, our task is to ensure that people understand that AI technology works to our benefit. Job displacement is a real and present danger, but with the right approach and planning we can take people on the journey with us. AI can serve humanity and benefit workers. We need to manage the narrative.
• Steyn is the chair of the special interest group on artificial intelligence and robotics with the Institute of Information Technology Professionals of SA. He writes in his personal capacity.