By Johan Steyn, 12 August 2021
Published by ITWeb: https://www.itweb.co.za/content/dgp45va6a437X9l8
Smart technology will greatly impact our future privacy, as our faces, the way we walk, our mannerisms and body language will be catalogued as identifying data.
Since the 1960s, a major effort has been made to train computers to “see” the human face − to build automated systems for recognising and differentiating faces − commonly referred to as facial recognition technology (FRT).
While computer engineers are working on FRT in order to create more intelligent and interactive machines, businesses and government agencies see the technology as particularly well-suited to “smart” surveillance − systems that automate the labour of monitoring in order to improve their efficacy and reach.
Corporations, law enforcement and state security agencies, all confident of the technology's usefulness and unconcerned about its intricate and potentially disastrous societal repercussions, are driving the adoption of FRT.
Over the last months, we have witnessed a backlash against FRT policing, especially in the United States, leading to large-scale upheaval and protests. If there is one kind of smart technology that has revealed the danger of bias, it is facial recognition. Research has repeatedly shown that these algorithms struggle to accurately identify the faces of women and of people with darker skin tones.
Following protests, IBM stopped selling facial recognition products, and Amazon imposed a one-year moratorium on police use of its facial recognition technology. Amazon hoped its moratorium would provide the government with enough time to implement appropriate rules governing the technology. One might wonder if Amazon's optimism is misguided. IBM also called for a national conversation on whether and how face recognition technology should be used by domestic law enforcement agencies.
New kinds of human-machine integration promise to improve the effectiveness of surveillance systems and expand their reach across time and place. However, whether these experimental technologies can or should be used to achieve these aims is a point of contention, one that frequently manifests itself in the news and policy debates as a trade-off between “security” and “privacy”.
Automated face recognition and automated facial expression analysis are two separate technological endeavours. Face recognition technology, strictly speaking, considers the face as an index of identification, ignoring its expressive ability and communicative significance in social interaction. This is never more evident than in laws forbidding drivers from smiling in their driver's licence photographs in order to increase computer matching accuracy.
The goal is to leverage the iconicity of face pictures to demonstrate their indexicality, or definite links to actual, embodied people.
Automated facial expression analysis, on the other hand, focuses on precisely what facial recognition technology tries to control for − the various meanings that a single face can convey − and this promises to accomplish what facial recognition technology fails to do: see inside the person by using the surface of the face.
Technologies such as face recognition, automatic licence plate scanners, drones, prediction algorithms and encryption influence us directly. How should we approach concerns of privacy, civil liberties and public safety? How do these technologies affect the way police officers function in our society? How should antiquated privacy rules established for a bygone era be updated?
Compared to other security methods now available, biometric technology ranks extremely high in terms of public interest, comprehension and societal implications. Physiologically-based biometric technology might collect an image of our fingerprint, vein pattern, face, eyes, hand shape, or even our voice. Behaviourally-based biometric technology, in turn, might record mannerisms such as the way we sign our names or even type on a keyboard.
The societal problem with biometric technology is that many nations and their governments are very receptive, if not ecstatic, when it comes to installing and implementing a biometrics-based infrastructure for their country, whether for e-passports, e-voting, border security, or a national ID card system.
Many businesses do not reveal how biometric templates are stored, or what security measures they have in place to ensure the templates are not the target of a cyber attack. To counter these worries and anxieties among the public, biometric suppliers must make significant efforts to ensure the different biometric modalities available today offer a strong level of simplicity for the end-user.
Another aspect contributing to the negative impression of biometric technology is that each biometric vendor's procedures for developing biometric devices are proprietary, particularly in terms of the mathematical formulas utilised. In the biometrics business, this lack of openness has resulted in a lack of standards and best practices. Perhaps biometric technology would not, in the end, be seen as sinister if such standards and best practices existed and were freely accessible to the public.
We have to accept that our biometric future, our freedom − and perhaps democracy itself − will be greatly influenced by smart technology. Not only our faces, but the way we walk, our mannerisms, the way we gesture with our hands, and our body language will be catalogued as identifying data.
We have to debate and think; we have to influence public policy. We have to protect the future privacy of our children.
Johan Steyn is a smart automation and artificial intelligence thought leader and management consultant. He is chairman of the Special Interest Group on Artificial Intelligence and Robotics with the IITPSA (Institute of Information Technology Professionals of South Africa). He writes in his personal capacity.