
BusinessDay: Don’t believe your eyes

Distinguishing fake video and audio from the real thing is becoming increasingly difficult as more data is fed into AI neural networks.

By Johan Steyn, 7 June 2022


Ukrainian President Volodymyr Zelensky, a former actor and an expert at using digital channels to communicate with his citizens and the world amid the terrible war in his country, recently “appeared” in a video instructing his troops to lay down their arms and surrender to Russia.


Though the image was convincingly lip-synced, viewers immediately recognised that the accent was inauthentic and that the head movements did not appear genuine. “Deep fakes” are the latest form of fake news, built on intelligent technological platforms.


Machine-learning algorithms and artificial intelligence (AI) are used to create a video from previously recorded footage to fool viewers into thinking it is real. By analysing the voice, gestures and other traits of the individual in the source material, the algorithms learn to duplicate facial expressions and demeanour.


Fictitious audio and video will become increasingly accurate as more data is fed into an AI neural network. Feed such a network a data set containing every public comment a person has made, and it is already possible to produce audio or video that is virtually indistinguishable from the real thing.
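
To make the mechanics concrete, here is a minimal sketch in Python (using the open-source PyTorch library) of the shared-encoder, two-decoder autoencoder design behind many face-swap deep fakes. It is an illustration under stated assumptions, not any real tool's code: the tiny networks, the 64x64 image size and the random stand-in "faces" are all placeholders.

```python
# Minimal sketch of the autoencoder idea behind many face-swap deep fakes.
# Assumptions: PyTorch, 64x64 RGB face crops, random stand-in data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns a generic 'face' code for both people."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder: rebuilds one person's face from the shared code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # reconstructs person A
decoder_b = Decoder()  # reconstructs person B

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-ins for aligned face crops of each person: (batch, 3, 64, 64).
# In practice this is where the "more data, better fakes" effect enters.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to rebuild its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's expression, decode with person B's decoder,
# yielding B's face wearing A's expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it captures pose and expression common to both people, while each decoder carries one person's identity; that split is what lets the final line map one face onto the other.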


Most deep fakes are clearly labelled as such. Bill Hader, formerly of Saturday Night Live, famously morphed into Al Pacino and Arnold Schwarzenegger in a widely circulated and hilarious video. Jordan Peele, the American actor and director, used deep-fake technology to mimic the facial movements of former US president Barack Obama in a clip warning of the perils of fake news and misinformation.


It is one thing to use these technologies for a good laugh, but what about the legal implications? What if fabricated “evidence” is presented in a court of law, purporting to show that someone planned the very crime they are accused of committing?


In 2019 the World Intellectual Property Organization published its “Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence”, which discusses deep fakes in the light of privacy, personal data protection, copyright infringement and the violation of human rights.


The issue of inventorship and ownership, which applies to all forms of intellectual property, is one of the most contentious in the AI community. Should a person or an AI program be credited with an invention? The answer could affect related issues, including infringement, legal responsibility and dispute settlement. And why shouldn’t there be some kind of compensation system for people whose images and “performances” are used in deep fakes?


Publishers and platforms are being challenged by the rise of fake news and the proliferation of doctored narratives propagated online by humans and bots. Technical and human methods to identify and remove false material are being developed in an effort to limit the impact of bots on the spread of falsehoods and misinformation.


Will there be reliable means, within the next 10 years, to stop false narratives from taking hold and to allow the most accurate information to dominate the information ecosystem? Or will the quality and accuracy of online information deteriorate as unreliable, often hazardous and socially disruptive ideas spread across the internet?
