Personalised AI learning: the promise is huge, the inequality risk is bigger
- Johan Steyn

Without deliberate policy and funding, AI could benefit the already-advantaged first and most.

Audio summary: https://youtu.be/pVuMnfEvqow
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
Personalised AI tutoring is one of the most seductive promises of this decade: every child gets patient, one-to-one support, tailored practice, and instant feedback. In a recent Fortune piece, José Manuel Barroso and Stephen Hodges argue that AI could unlock a “new age of learning”, especially in strained education systems, but only if governments, tech firms, and educators move together. The part we often underplay is the risk side: AI can scale learning support, but it can also scale inequality. If access is uneven, we won’t get a world where everyone learns better. We’ll get a world where the already-advantaged pull even further ahead, faster.
CONTEXT AND BACKGROUND
The inequality question is not theoretical. AI is already amplifying advantage in wealthier places that have better infrastructure, stronger institutions, and greater digital literacy. Reuters recently reported on a UNDP warning that AI could widen the gap between richer and poorer states, potentially reversing decades of convergence in areas including education.
This pattern shows up inside countries, too. Where connectivity is stable, devices are available, and teachers are trained, AI tools become “learning accelerators”. Where those basics are missing, AI becomes another reminder of what you don’t have. The Guardian captured this fear as a looming “social divide”, warning that children without AI and computing literacy may lose agency in a future shaped by automated decisions.
Governments are starting to respond, but unevenly. The UK, for example, has announced a plan to trial safe AI tutoring tools aimed at disadvantaged pupils, explicitly framing it as a way to level the playing field for those who cannot afford private tutors. The intent is right. The scale and execution are the real tests.
INSIGHT AND ANALYSIS
The big mistake is to treat AI personalisation as a “software rollout” rather than an equity programme. If AI tutoring becomes something affluent families buy privately, it will function like paid tutoring has always functioned: a multiplier of advantage. The novelty is that it could become cheaper, always available, and more effective over time. That is precisely why inequality could widen quickly.
There are three quiet gatekeepers who will decide outcomes:
First, access. Devices, connectivity, and power reliability are not optional extras. If a learner cannot reliably get online, “personalised learning” becomes occasional learning.
Second, readiness. AI tools only help when teachers and parents know how to use them well, and when schools can integrate them into learning design rather than treating them as shortcuts.
Third, trust. If families and educators fear surveillance, data misuse, or opaque decisions, adoption will be patchy and politicised.
This is where a broader point from the Financial Times matters. Anthropic has warned that richer countries’ greater use of AI risks deepening inequality, because productivity gains accrue to those with the skills and resources to use these systems effectively. In education, the same principle applies: the learners with the best scaffolding will get the greatest benefit from AI, and they will improve faster, compounding the advantage.
There is also a design risk. If AI tutors are optimised only for performance metrics, they may narrow learning into “what is tested” rather than “what is understood”. For disadvantaged learners, that can become a cruel trade: short-term score improvements without the deeper confidence and capability that education is meant to build.
IMPLICATIONS
For policymakers, the priority is to treat AI personalisation as public infrastructure. If the goal is equity, then access, teacher training, and safe deployment frameworks need funding and accountability. It is not enough to say “AI is available”. Availability is not access.
For education leaders, the question is not “which tool?” but “which learning behaviours?” AI should be used to strengthen foundational skills, give more practice, and offer feedback loops, while teachers focus more on motivation, relationships, and critical thinking. If AI becomes a private advantage, schools will be left trying to compete with the best-resourced homes.
For parents, especially those with fewer resources, the message is hopeful but realistic. Many tools are free or low-cost, but they still require guidance, structure, and boundaries. The families who benefit most will be those who use AI to support learning routines, not to outsource thinking.
CLOSING TAKEAWAY
Personalised AI learning could be one of the most powerful equalising forces we’ve ever had, but only if we build it that way. If we do nothing, it will behave like every other advantage: it will go first to those with money, connectivity, and confidence. Fortune is right to frame this as a collective task, not a tech upgrade. The real question for the next few years is simple: will AI become the world’s most scalable tutor, or the world’s most scalable inequality machine? The answer will come down to policy, funding, and deliberate design.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net


