Universities say they are AI-ready. Many are not.

Global evidence suggests that confidence in higher education is often running ahead of real institutional capability.



Sign up for my Substack daily AI newsletter here.


See my AI Training course portfolio for corporate Business Leaders here.




I spend time with many universities and tertiary institutions, and one pattern keeps repeating itself. A number of them speak with enormous confidence about artificial intelligence. They talk about AI agents, transformation, innovation and readiness. But when you press a little deeper, the substance is often thin. The policy may be vague, the staff capability uneven, the governance immature, and the understanding of what meaningful AI adoption actually requires surprisingly limited.


This is not true of every institution, and some are making thoughtful progress. But globally, the bigger problem is no longer whether universities are aware of AI. It is whether they are honest about how far along they really are. The evidence increasingly suggests that many are mistaking activity for maturity.


CONTEXT AND BACKGROUND

The global higher education sector has moved quickly to respond to AI, at least on the surface. UNESCO reported in September 2025 that around two-thirds of higher education institutions either already had, or were developing, guidance on AI use, which sounds encouraging at first glance. But guidance is not the same as deep readiness. A document can be written far more quickly than an institution can build staff confidence, redesign assessments, align governance, or embed responsible operational practice.


At the same time, students have not waited for universities to catch up. The 2025 HEPI Student Generative AI Survey found that 92% of UK students were already using AI tools in some form, up sharply from the previous year, with 88% saying they had used generative AI in assessments. That statistic alone should force a more sober conversation. If student behaviour has shifted this dramatically, then a slow committee process and a few defensive policy statements are nowhere near enough.


A wider Asia-Pacific view tells a similar story. The 2025 APRU report, Future Universities in a Generative AI World, found that universities were introducing AI in limited areas under their control, especially student services and back-office functions, but still faced major implementation gaps and lacked a shared model of what a genuinely AI-enabled university should look like.


INSIGHT AND ANALYSIS

This is where the arrogance problem comes in. Some institutions are no longer simply experimenting with AI. They are speaking as though they have already mastered it. Yet the global evidence suggests that many faculty members themselves do not feel that level of confidence. The Digital Education Council’s 2025 global faculty survey found that 40% of faculty members were only beginning their AI literacy journey, just 17% saw themselves as advanced or expert, and 80% said there was a lack of clarity on how AI should be applied in teaching within their institutions.


That matters because real AI maturity is not measured by whether a university has a chatbot, a pilot project, or a strategy slide with the word “agentic” on it. Real maturity means leaders understand the technology well enough to make good decisions. It means lecturers know how to redesign teaching and assessment. It means procurement, data governance, ethics, staff development and student support are aligned. It also means being honest about what you do not yet know.


There is also a practical back-office dimension. AI absolutely can improve administrative efficiency, student support, analytics and internal operations. But even here, maturity is often overstated. BCG argued in March 2026 that universities can create value across student success, workforce alignment, research, operations and strategy, while stressing the need for accountable leadership, central governance and a multi-year roadmap to move beyond pilots.


IMPLICATIONS

For university leaders, the lesson is uncomfortable but necessary. Confidence is not capability. An AI policy is not the same as an AI culture. A few pilots are not transformation. Institutions need a more grounded maturity model, one that looks at leadership understanding, staff capability, assessment redesign, operational use cases, governance and student literacy together.


For South Africa and the rest of Africa, this matters even more. Our institutions do not have endless resources to waste on fashionable jargon and shallow implementation. We need practical honesty, not institutional ego. The universities that will do best in the next few years will not be the ones making the boldest claims. They will be the ones asking the hardest questions, learning quickly, and building real competence step by step.


CLOSING TAKEAWAY

Higher education no longer has an AI awareness problem. It has an AI maturity problem, and in some cases an AI honesty problem. The danger is not that universities know nothing. The danger is that some think they know far more than they do. In a sector responsible for shaping the future workforce, guiding research, and preparing our children for a very different world of work, that kind of overconfidence is risky. The institutions that matter most in the AI era will not be those that sound the smartest in meetings. They will be those that are humble enough to learn, disciplined enough to build, and honest enough to admit how far they still have to go.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
