By Johan Steyn, 17 June 2021
In a busy coffee shop, a woman sits alone, headphones in her ears, her eyes fixed on her laptop screen. A stranger walks up to her and greets her by name. He seems charismatic and talkative. She is really not in the mood to be disturbed, much less flirted with, but this man seems to know so much about her. The stranger’s familiarity makes her feel both intrigued and a little terrified. “How does he know so much about me?”
What she does not know is that the stranger used technology to find out a great deal about her life in moments. He noticed her sitting two tables away, took out his smartphone and captured a picture of her face. Next, he opened an application called FindFace and uploaded the picture. Within seconds he was looking at a multitude of pictures matching her face, along with links to her LinkedIn, Facebook and Twitter profiles. He also read her blog and gained insight into her dreams, fears, hobbies and friends.
The human brain is a facial recognition machine. Neurons in the brain’s temporal lobe respond to the distinct features of faces. Babies can recognise faces when they are very young, even when the rest of their visual capabilities are still developing. We have all experienced the embarrassment of recognising a stranger’s face — we know we have seen them somewhere before, but we cannot remember the context.
Did you know we are barely able to recognise faces when the pictures are upside down? Some people are not able to recognise faces at all: it is thought that about 2% of the population struggles to do so, a condition known as prosopagnosia, or face blindness.
The development of artificial intelligence (AI) technology is largely inspired by the workings of the human brain. Information processing in biological intelligence inspired what we call artificial neural networks. In the human brain, signals are communicated between millions of neurons in various layers. The way we recognise images and human faces would be impossible without the sheer speed of signals travelling the superhighways in our brains.
We can teach computer algorithms, functioning much like this superhighway, to recognise images. The “computer brain” needs a multitude of images fed to it in order to learn. Think of a self-driving car that can recognise the road, pedestrians and traffic signs. It is only by learning from millions of similar images that the algorithms achieve high levels of accuracy. There are smartphone applications that can recognise music, transcribe our voices, and identify animals or flowers. The applications of this technology are virtually endless.
How do facial recognition systems work? We need to provide millions of face pictures for the system to “learn” from. The algorithms look for unique facial features such as the shape of the chin or the distance between the eyes. But here is the problem: the training data we feed the system is usually riddled with bias. We have already seen many cases where the technology struggles to recognise the faces of women or of people with dark skin.
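The matching step described above can be illustrated with a deliberately simplified sketch. Real systems learn rich numerical “embeddings” of faces from millions of images; here the tiny hand-made feature vectors (standing in for measurements like chin shape or eye distance), the names and the threshold are all hypothetical, chosen purely for demonstration:

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors: smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy "gallery" of known faces: name -> made-up feature vector.
gallery = {
    "alice": [0.31, 0.72, 0.55],
    "bob":   [0.62, 0.41, 0.88],
}

def identify(probe, gallery, threshold=0.25):
    """Return the closest gallery name, or None if no match is near enough."""
    name, dist = min(
        ((n, euclidean(probe, v)) for n, v in gallery.items()),
        key=lambda t: t[1],
    )
    return name if dist <= threshold else None

print(identify([0.30, 0.70, 0.56], gallery))  # a face very close to "alice"
print(identify([0.10, 0.10, 0.10], gallery))  # no confident match -> None
```

The threshold is where bias creeps in: if the learned features are less discriminative for some groups because they were underrepresented in the training images, distances for those faces become unreliable and misidentifications follow.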
We have seen cases of misidentification and false arrests. We run the danger of automating inequality. Many of the large technology firms decided to hold back on this technology. We read about algorithmic obedience training in China based on surveillance and scoring systems. We need to be serious about the threat posed by this technology. Our biometric future may be that of dragnet surveillance, big data policing and technological police states.
• Johan Steyn is a smart automation and artificial intelligence thought leader and management consultant. He is the chair of the special interest group on artificial intelligence and robotics with the IITPSA (Institute of Information Technology Professionals of SA). He writes in his personal capacity.