
Engineered outrage: inside the algorithms of unrest

AI systems are designed to optimise attention, but in doing so they can privilege polarisation, conspiracy and conflict over nuance and common ground.

I write about issues I believe deserve the reader’s attention. While my main work is in artificial intelligence and technology, I also cover politics, education, and the future of our children.


A few years ago, I read a book that has stayed with me: Barbara F. Walter’s How Civil Wars Start: And How to Stop Them. Walter’s argument is simple and deeply unsettling: civil wars begin long before the first shot is fired, in the stories we tell about ourselves and our enemies. Narratives of grievance, betrayal and fear slowly harden into identities. Today, those narratives are no longer carried only by radio, television or word of mouth.

They are curated, amplified and sometimes manufactured by artificial intelligence systems that decide what we see on our screens. The question is no longer just which stories are told, but which stories our algorithms learn to feed us, and what that does to already fragile societies.


CONTEXT AND BACKGROUND

Walter’s research shows that civil wars are most likely in “anocracies”: countries that are neither fully democratic nor fully authoritarian. Institutions are weak, elites behave like predatory factions, and politics becomes a struggle over identity rather than policy. In such settings, stories of humiliation and replacement are not harmless; they prepare people to see violence as justified. She wrote about hate radio, partisan broadcasters and early social media as tools that turned resentment into mobilisation.


Since then, the information environment has grown more complex and more automated. Social media platforms are now powered by recommendation engines trained to maximise attention and engagement. These algorithms learn, very quickly, that outrage, fear and moral shock are highly effective at keeping us online. At the same time, generative AI tools can produce convincing text, images, audio and video at near-zero cost. Deepfakes can manufacture synthetic “evidence” of insults or atrocities. Our personal data, harvested at scale, lets political actors target messages at very specific groups. All of this sits on top of the structural risks Walter describes.


INSIGHT AND ANALYSIS

Civil wars start with stories, and today’s stories are filtered by machines. When we open a social media app, we do not see a neutral reflection of reality. We see a curated stream that an algorithm has predicted will keep us clicking. If you tend to engage with posts that confirm your fears about another group, the system will dutifully supply more of them. The result is a personalised echo chamber in which exaggerated or fabricated grievances feel constantly confirmed.
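
To make that feedback loop concrete, here is a deliberately minimal sketch in Python. Everything in it is illustrative rather than any platform’s real system: the class name, topic labels and update weights are hypothetical assumptions. Production recommenders use learned models over millions of signals, but the objective has the same shape: score content by predicted engagement and show the highest scorers first.

```python
from collections import defaultdict

class ToyFeedRanker:
    """A deliberately simplified engagement-optimising ranker.

    Real recommendation systems learn from millions of signals; this toy
    keeps one number per topic to show the feedback loop in miniature.
    """

    def __init__(self):
        # Average observed engagement per topic, starting neutral.
        self.topic_affinity = defaultdict(lambda: 0.5)

    def record_engagement(self, topic, engaged):
        # Exponential moving average: recent behaviour dominates.
        prev = self.topic_affinity[topic]
        self.topic_affinity[topic] = 0.8 * prev + 0.2 * (1.0 if engaged else 0.0)

    def rank(self, posts):
        # Order posts purely by predicted engagement. Nothing about
        # accuracy, balance or social cost enters the objective.
        return sorted(posts, key=lambda p: self.topic_affinity[p["topic"]], reverse=True)


ranker = ToyFeedRanker()
posts = [
    {"id": 1, "topic": "outgroup_threat"},
    {"id": 2, "topic": "local_news"},
    {"id": 3, "topic": "outgroup_threat"},
    {"id": 4, "topic": "science"},
]

# A user who clicks on fear-themed posts a few times...
for _ in range(5):
    ranker.record_engagement("outgroup_threat", engaged=True)
ranker.record_engagement("science", engaged=False)

# ...now sees those posts pushed to the top of the feed.
print([p["id"] for p in ranker.rank(posts)])  # prints [1, 3, 2, 4]
```

Note what is absent from the scoring function: nothing penalises falsehood, fear-mongering or social harm. Engagement is the only signal, which is precisely why outrage wins.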


Generative AI supercharges this process. Instead of crafting every message by hand, political entrepreneurs can generate thousands of variations of the same narrative: tailored memes, fake screenshots, synthetic videos and convincing voice notes. Deepfake technology can fabricate a politician’s speech, a community leader’s call to arms, or apparent evidence of police brutality. In a tense environment, the emotional impact arrives long before any fact-check can catch up. For people already primed by their feeds to distrust “the other side”, these synthetic stories slot neatly into an existing worldview.


Privacy erosion adds a darker layer. AI systems can analyse who likes what, who belongs to which group, who lives where, and who is connected to whom. This allows micro-targeting of narratives to particular communities, sects or regions, each receiving slightly different versions of the same divisive story. Walter writes about “ethnic entrepreneurs” who use identity to mobilise. In the AI era, those entrepreneurs have finely tuned tools that can reach precisely the people most likely to respond, while remaining largely invisible to everyone else.


IMPLICATIONS

For established democracies, the risk is not necessarily immediate, large-scale civil war, but a drift towards chronic instability: harassment of officials, normalised political intimidation, sporadic violent incidents and deepening mistrust. The algorithms of unrest do not fire guns; they create a climate in which more people see opponents as enemies, institutions as illegitimate and compromise as betrayal. For fragile democracies and partially free states, especially in parts of Africa, Latin America and Asia, the stakes are even higher. There, the structural conditions Walter describes already exist; AI simply accelerates the journey along that dangerous path.


Responding to this is not just a question of better content moderation. It requires a shift in how we think about AI itself. Recommendation systems, generative models and profiling tools should be treated as part of a country’s information infrastructure, with real consequences for security and social cohesion. Regulators will have to set clearer rules about political advertising, deepfakes and data use.


Platforms must be pushed towards designs that do not reward outrage above all else. And journalists, educators and civil society will need to build new habits of verification and public explanation in a world where images and audio cannot be taken at face value.
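
One small but concrete verification habit already works today: when a newsroom or agency publishes a cryptographic hash alongside a piece of footage, anyone can check whether the copy circulating in their feed is byte-for-byte identical to the original. A minimal sketch using only Python’s standard library; the file name and reference hash below are hypothetical placeholders, not real published values:

```python
import hashlib
import os

def sha256_of_file(path, chunk_size=1 << 20):
    """Digest a file in chunks so large video files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the hash a newsroom published with its footage,
# and the local copy of the clip received via a messaging app.
published_hash = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
received_copy = "downloaded_clip.mp4"

if os.path.exists(received_copy):
    if sha256_of_file(received_copy) == published_hash:
        print("Copy is byte-for-byte identical to the published original.")
    else:
        print("Copy differs: edited, re-encoded, or not the original at all.")
```

A matching hash only proves exact-copy integrity; it says nothing about content whose origin was never attested, which is why provenance standards and slower editorial verification still matter alongside it.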


CLOSING TAKEAWAY

Walter’s work reminds us that civil wars start with stories. Artificial intelligence does not change that truth; it changes the scale, speed and precision with which those stories can be engineered. The algorithms that curate our feeds and generate our content are not neutral bystanders.


They are active participants in shaping which grievances feel urgent, which rumours feel credible, and which people we come to fear. If we want to protect democracy and spare our children from a future defined by engineered outrage, we have to look beyond the guns and the barricades, and confront the quiet code that decides what we see and believe every day.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
