The public is increasingly finding it difficult to differentiate between what is real and what is artificially generated.
By Johan Steyn, 17 January 2024
This year is poised to be a landmark year in the history of global democracy, with an unprecedented number of citizens — more than a billion — set to cast their votes in various national elections, including a critical presidential election in the US and here in SA.
This momentous occasion, however, is shadowed by a burgeoning threat: the potential misuse of artificial intelligence (AI) to unleash a “tsunami of misinformation”, as aptly described by AI expert Oren Etzioni.
The landscape of election campaigns is undergoing a radical transformation, driven by recent advancements in AI-generated content. Deepfakes, AI-generated voices, and other sophisticated tools are no longer futuristic concepts but present-day realities. This technological evolution enables politicians to rapidly disseminate political messages, but it also significantly increases the capacity to fabricate and spread fake images and videos.
The recent emergence of deepfakes in experimental presidential campaign ads is just the tip of the iceberg. More malicious versions could rapidly spread unlabelled on social media platforms, potentially deceiving the public days before the election.
This phenomenon is not confined to any single nation — it is a global issue. Countries around the world are grappling with the implications of AI in their electoral processes. In the US, the presidential election is particularly vulnerable given the widespread use of social media and the prevalence of AI technologies. Similarly, nations such as India, Brazil and several European countries, each with its own unique political landscape, are facing the challenges posed by AI-generated content in their elections.
Social media platforms, where much of the AI-generated content gains traction, are at the heart of this issue. Their policies on moderating AI-created content are crucial in fighting the spread of falsehoods. There is a growing call for these platforms to take greater responsibility in combating the politicisation of truth, requiring transparency in their content moderation practices and a commitment to safeguarding democratic values.
The use of AI platforms in politics has ushered in an era characterised by what can be termed “weaponised mistrust”. This phenomenon represents a significant shift in the political landscape, where the power of AI is harnessed not just for efficiency and personalisation but also for the creation and dissemination of fabricated content that is strikingly convincing.
As a result, the public is increasingly finding it difficult to differentiate between what is real and what is artificially generated. The danger lies not just in the consumption of false information but in the erosion of trust in legitimate sources of information.
When people are constantly bombarded with AI-generated false content, scepticism grows, and the belief in factual, verified information diminishes. This erosion of trust is not a by-product but a deliberate aim of certain political actors and groups seeking to undermine public confidence in the media, institutions, and even the democratic process itself.
The 2024 elections will be a watershed moment in understanding the impact of AI on democratic processes. The spread of falsehoods, the resulting mistrust and the politicisation of truth pose significant challenges that need to be addressed with urgency and diligence. The decisions made today in managing AI’s role in politics will set the course for how future elections are conducted and how the democratic discourse is shaped in the age of digital technology.