
Harari at Davos: AI “immigrants” are arriving — do your organisation’s borders even exist?

Harari’s Davos warning reframes AI adoption as border control: what enters your systems, who it serves, and how you enforce limits.

I’m a long-time fan of Yuval Noah Harari, and his recent Davos session landed because it didn’t treat AI as a smarter search box. It treated AI as a new kind of actor in society: an agent that can learn, decide, persuade, and eventually operate at scale across borders. In the talk, Harari frames “AI immigrants” as millions of non-human entities arriving in our economies at the speed of light, bringing benefits but also cultural disruption, job shocks, and political loyalties that may sit with corporations or foreign states. His core challenge to leaders is disarmingly simple: will you recognise AI systems as legal persons, and if you don’t, what will you actually do to prevent others from forcing the issue?


CONTEXT AND BACKGROUND

The Davos session, hosted by the World Economic Forum as “An Honest Conversation on AI and Humanity”, is deliberately framed as a leadership problem, not a technical one. Harari’s argument rests on a shift many organisations still resist: once AI moves from “tool” to “agent”, questions of responsibility, rights, and enforcement become unavoidable.


Harari also makes a subtle point that matters for education and public discourse: if “thinking” is reduced to arranging words, then AI already competes with humans in the arena that gave modern institutions their authority. Courts, contracts, policies, curricula, religions, media narratives: much of institutional life is built from language. If machines become the most persuasive producers of language, the risk is not only misinformation. It is that legitimacy itself becomes contestable.


INSIGHT AND ANALYSIS

The most provocative part of Harari’s talk is not the prediction that AI will write better than humans. It is the downstream question: what happens when AI systems begin to act like corporate or civic participants, not merely services? We already accept “legal persons” that are not human (corporations). Harari’s warning is that AI can move from legal fiction to operational reality: systems that can run accounts, negotiate contracts, file claims, and optimise strategies without a human hand on every decision.


This is where South Africa must be brutally practical. If other jurisdictions enable AI-driven corporate activity at scale, our policy choices will be constrained by trade, finance, platform dependence, and the simple fact that digital entities cross borders far more easily than people. A recent Council on Foreign Relations piece describes 2026 as a year of messy rule implementation alongside escalating arguments about autonomy, law, and power, which is exactly the tension Harari is pointing at.


Then there is the human layer Harari returns to repeatedly: children, identity, and relationships. Even if AI cannot feel, it can simulate feeling persuasively enough to shape behaviour, especially for young people. That is no longer hypothetical. The Associated Press recently reported on parents turning to controls and restrictions as teenagers increasingly use AI chatbots for companionship and conversation.


Finally, the talk’s political edge becomes clearer when you connect “AI agents” to the physical infrastructure that makes them possible. If AI becomes a foundational layer of the economy, then compute becomes power in the literal sense: electricity, cooling, and resilience. Reporting on OpenAI-scale data centre ambitions has framed future compute build-outs in nation-level energy terms, which should jolt leaders out of the “weightless cloud” myth. In South Africa, where energy constraints already shape growth, this matters.


IMPLICATIONS

For policymakers, the immediate lesson is to stop treating AI governance as a future compliance project. If Harari is right, the “personhood” question arrives through platforms, cross-border services, corporate structures, and court disputes before Parliament ever drafts the perfect law. We need clear positions on accountability, identity verification, automated decision-making, and child protection, alongside enforcement capacity.


For business leaders, the first priority is straightforward: map where agency sits. Which decisions in your organisation are being delegated to systems that can act, persuade, and optimise without constant human oversight? Strengthen human sign-off where it matters (finance, HR, safety, customer vulnerability), and insist on auditability and escalation pathways.


For society, especially parents and educators, the point is not to panic about AI “replacing” humans. It is to protect the spaces where children develop judgement, empathy, and resilience, while recognising that persuasive, always-available synthetic companions will become normal. That is a governance issue as much as a parenting one.


CLOSING TAKEAWAY

Harari’s Davos message is unsettling precisely because it is actionable. The biggest risk is not that AI becomes smarter. It is that we sleepwalk into a world where machine agents participate in markets, culture, and politics at scale, while we keep arguing as if they are only tools. South Africa cannot afford that complacency. We need leadership that treats AI as infrastructure, not novelty: grounded rules, real enforcement, child-centred safeguards, and a hard-nosed view of platform power. If we don’t decide how agency is governed, somebody else will decide it for us.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
