
Yuval Noah Harari’s Davos warning in 2020 still matters today

The historian’s warning about hackable humans, data colonies and digital dictatorships has become more relevant in the age of generative AI and algorithmic power.





I write about issues I believe deserve the reader's attention. While my main work is in artificial intelligence and technology, I also cover politics, education, and the future of our children.


Yuval Noah Harari is probably my favourite author and thinker when it comes to artificial intelligence. What I appreciate most is that he is not an AI engineer or data scientist. He is a historian and a philosopher, and that perspective matters. He talks less about model architectures and more about power, inequality and meaning. I have read his books Sapiens, Homo Deus and 21 Lessons for the 21st Century many times, and I regularly recommend them to clients, friends and students.


His Davos address, “How to survive the 21st century”, delivered at the World Economic Forum in 2020, has become a touchstone in my own work. I often reference it when I work with customers or speak at conferences. You can watch the talk on YouTube here: https://www.youtube.com/watch?v=gG6WnMb9Fho


Read the full text on the World Economic Forum’s website (https://www.weforum.org/stories/2020/01/yuval-hararis-warning-davos-speech-future-predications/). Five years later, his warning feels even more relevant.


CONTEXT AND BACKGROUND

At Davos, Harari framed the 21st century around three existential threats: nuclear war, ecological collapse and technological disruption. The first two are sadly familiar; the third was less visible to the public at the time. He was not simply talking about robots taking jobs. His concern was that the combination of biological insight, massive computing power and oceans of data could create new forms of social upheaval and inequality. Some people would be pushed into what he called a “useless class” – not because they are worthless as human beings, but because the economy no longer needs their labour.


He also warned that artificial intelligence could divide the world into a small number of technology superpowers and a long tail of “data colonies”. A few countries and corporations would own the infrastructure, the algorithms and the talent. The rest of us would supply raw data, attention and digital labour while depending on imported platforms.


For those of us in South Africa and the broader Global South, this should sound uncomfortably familiar. We have lived through resource extraction and economic dependency; Harari’s point was that similar patterns could now emerge around data and digital systems.


INSIGHT AND ANALYSIS

The centrepiece of his talk is a simple equation: biology times computing power times data equals the ability to “hack humans”. In plain language, if you know enough about how bodies and brains work, if you have enough computing power, and if you collect enough data, you can predict and shape human behaviour at scale. You do not need to implant chips in people’s heads; you just need to track what they click, where they go, what they buy and how they react. Algorithms then decide what news they see, which adverts follow them, how much they pay for insurance, and who gets a loan or an interview.


When Harari said this in 2020, it sounded like a warning about a possible future. In 2025, it feels like a description of the present. Social media platforms, digital advertising networks, recommendation engines and scoring algorithms already nudge us in ways we barely notice. Generative AI systems now make it possible to create personalised synthetic media at scale. Deepfakes, AI-written misinformation and perfectly targeted propaganda bring his idea of “hackable humans” very close to home. In societies with high inequality, low trust and weak institutions, these tools are particularly dangerous.


What I find powerful about Harari is that he refuses to leave these questions to the engineers. As a historian, he reminds us that technologies always arrive in a political and moral context. AI is not just a productivity tool; it is an architecture of power. It determines who is heard and who is ignored, whose data is harvested without consent and whose interests are built into the system from the start. In that sense, his Davos talk is less about gadgets and more about the future of democracy and human dignity.


IMPLICATIONS

So why does a five-year-old speech still matter today? First, because it gives us a simple lens for assessing what we are building. When I work with clients, I often use Harari’s framework to ask basic questions: who owns the data in your organisation? Who controls the algorithms you rely on? Are you using AI to empower people, or merely to monitor and manipulate them more efficiently? These questions cut through the hype and force leaders to confront the ethical and political stakes of their technology choices.


Second, we need to connect his global warnings to local realities. South Africa already faces extreme inequality, chronic unemployment and an education system under pressure. If automation and AI roll out without serious investment in skills and social protection, Harari’s “useless class” will not be a distant possibility; it will describe millions of our fellow citizens. If we adopt foreign platforms without thinking about data rights and digital sovereignty, we will indeed become data colonies, even as we talk about innovation and the so-called Fourth Industrial Revolution.


Finally, Harari’s closing point about nationalism and global cooperation deserves renewed attention. He insists that there is no way to manage nuclear risk, climate change or AI within national borders alone. Loving your country today means fighting for global rules that protect everyone, not hiding behind slogans about sovereignty while technologies race ahead unchecked. For African countries, that means insisting on a seat at the table where AI and data rules are written, rather than passively inheriting standards designed elsewhere.


CLOSING TAKEAWAY

Yuval Noah Harari’s Davos address has stayed with me because it does what good thinking should do: it clarifies, it warns, and it calls us to responsibility. It reminds us that AI and related technologies are not just clever tools, but forces that can reshape economies, politics and the inner lives of human beings. Five years on, his fears about data colonies, digital dictatorships and hackable humans feel less like speculative philosophy and more like the evening news.


As a parent, a citizen and someone who works in AI, I find that deeply unsettling – but also motivating. We cannot undo the technological revolution, but we can still decide how it is governed and whom it serves. Harari gives us a language for that conversation. The question is whether we will use it.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
