
The AI economy is starting to look like feudalism

If productive power is owned by a tiny elite, stipends won’t fix the deeper problem: control.





We talk about AI as if it were a tool anyone could pick up and use, like electricity or the internet. In reality, the most powerful AI is not sitting on your laptop. It lives inside expensive stacks of data, chips, cloud infrastructure, and proprietary models controlled by a small number of companies. That matters because ownership shapes outcomes. 


When a few players own the productive machinery of the age, everyone else becomes dependent on access, not opportunity. This is where the uncomfortable analogy of feudalism starts to make sense: lords owned the land, and everyone else worked it. In an AI world, the “land” is intelligence at scale, and the “rent” is paid in money, data, attention, and compliance.


CONTEXT AND BACKGROUND

Feudalism was not only an economic system. It was a structure of power. Lords controlled the means of production, the land, and most people had limited mobility and bargaining power. Modern economies replaced that with wages, broader ownership, and institutions that, at least in principle, offered social mobility.


The digital era brought a new twist: platforms. Instead of owning the whole economy, companies owned the rails that the economy ran on. They did not need to own your business to benefit from your activity; they needed to sit between you and your customers.


AI intensifies that pattern because it moves platforms from distribution to capability. A search engine directs attention. An AI system can write, design, code, negotiate, and decide. If those capabilities are concentrated, then the owners not only mediate markets; they start to mediate work itself.


INSIGHT AND ANALYSIS

The first feature of AI feudalism is the shift from product to permission. You are no longer buying a tool once. You are subscribing to an evolving capability that can be changed, priced, limited, or withdrawn. The rules are often set unilaterally. The relationship is not ownership; it is tenancy.


The second feature is the rise of rent-seeking. When AI becomes the core engine of productivity, access fees and usage charges are not just software costs. They become a tax on doing business. The more indispensable the capability, the less choice you have, and the more value flows upward to those who control the stack.


The third feature is the seduction of “compensation” as a substitute for agency. As job disruption becomes visible, proposals like universal basic income or AI dividends start to sound inevitable. They may well become necessary in some form. But there is a trap here: income support can stabilise consumption while leaving ownership untouched. You can be paid not to starve and still have no meaningful say in how the system is designed, governed, or distributed. A stipend can reduce immediate pain while cementing dependency.


And then there is a cultural question we keep avoiding. Work is not only a pay cheque. It is also identity, contribution, social belonging, and a sense of being needed. A society that replaces work with allowances, without rebuilding purpose, will not become calm and creative by default. It may become resentful, fragmented, and easy to manipulate.


IMPLICATIONS

If the AI economy is drifting towards feudal dynamics, the correct response is not to romanticise the past or demonise innovation. It is to insist on modern economic principles that prevent permanent dependency.


That means competition policy that treats AI infrastructure and data advantages as strategic, not incidental. It means transparency about how systems make decisions, and clear accountability when they cause harm. It means data rights that limit extraction and prevent users from becoming unpaid fuel for private empires.


It also means a sharper public debate about what “fair distribution” actually means. If the future is framed as “you’ll get an allowance while a handful of firms own the productive engine”, we should not be surprised if the public eventually rejects the deal. The healthier goal is participation: more people able to build, adapt, and own value in an AI economy, not merely consume outputs.


CLOSING TAKEAWAY

The danger of AI is not only that it could replace jobs. The greater danger is that it could replace bargaining power with dependency, turning much of society into renters of intelligence rather than owners of opportunity. If we want a stable, innovative future, we need to fight for agency as much as income: competition, accountability, and pathways to meaningful participation. The question is not whether AI will create wealth. It will. The question is who will own the engine, who will set the rules, and whether ordinary people will still feel they have a stake in the world they are helping to build.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
