The quiet divide LinkedIn’s Skills on the Rise 2026 reveals: those who can work with AI, and those who can’t

LinkedIn’s Skills on the Rise 2026 shows AI literacy rising; without access and support, the gap will widen across South Africa.

LinkedIn’s Skills on the Rise 2026 report caught my attention because it makes a quiet but powerful point: AI literacy is no longer a “nice-to-have” for specialists; it is becoming a mainstream workplace capability. That sounds encouraging, but it also exposes a fault line that many leaders are not ready to name. There is an emerging divide between people who can confidently use AI tools to think, write, analyse, and decide, and people who cannot.


In South Africa, where inequality already shapes who gets opportunities, that divide will not be evenly distributed. If we treat AI literacy as an optional extra, we will quietly bake a new form of exclusion into hiring, promotion, education, and future life chances.


CONTEXT AND BACKGROUND

LinkedIn’s Skills on the Rise 2026 is positioned as a practical signal to professionals and employers: a list of the fastest-growing skills, based on year-on-year growth in skills added to profiles and skills associated with hiring success. It explicitly points to AI moving beyond coding into strategic and business use, including skills like prompt engineering and working with large language models.


At the same time, South Africans are engaging with AI at a faster rate than many might assume. An ITWeb article on Google and Ipsos research reported that 70% of adults in South Africa have used an AI chatbot, and 90% are interested in learning more about AI. That’s significant appetite, but appetite is not the same as capability, especially in the workplace, where outcomes, risk, and accountability matter.


Globally, education leaders are already warning that unequal access to AI and computing education will deepen social divides. The Guardian recently framed this as “Generation AI”, arguing that children who do not learn these skills risk being disempowered in a world where automated decision-making increasingly shapes outcomes.


INSIGHT AND ANALYSIS

When we say “AI literacy”, many people still hear “learning a tool”. That is too narrow. AI literacy is closer to a modern form of judgement. It is knowing how to ask good questions, how to check outputs, how to spot confident nonsense, and how to use AI to improve the quality and speed of work without outsourcing responsibility.


This is why the divide becomes so real. People who have access to good devices, stable connectivity, time to experiment, and managers who encourage learning will become stronger, faster. They will write better proposals, produce clearer reports, summarise meetings more effectively, and move from blank page to first draft in minutes. People who do not have those conditions will not stand still. They will fall behind, not because they are less talented, but because they have fewer opportunities to practise a skill that is increasingly rewarded.


There is also a second divide: the difference between those who are trained to use AI safely and those who are left to learn by trial and error. In regulated environments, fear often kills adoption. People worry about confidentiality, about making mistakes, about being judged for using “shortcuts”. Without clear norms and leadership support, only the most confident users push through.


We are starting to see organisations respond by formalising AI skills. ITPro reported that Lloyds Banking Group aims to train all employees in AI through an internal academy, tailoring learning to different levels of responsibility and emphasising responsible use. That approach matters because it treats AI literacy as a workforce capability, not a personal hobby.


IMPLICATIONS

For business leaders, the message is uncomfortable but simple: AI literacy will shape performance, and performance will shape opportunity. If you do nothing, you may unintentionally reward the already-advantaged. The fix is not just “more training”. It is ensuring equitable access to tools, time, and support, and making AI literacy a normal part of work rather than a secret advantage.


For educators and parents, the stakes go beyond jobs. AI is becoming part of how young people learn, write, research, and think. The question is whether we teach them to use it wisely. That means critical thinking, verification, and basic understanding of how these systems can mislead.


Finally, policymakers should view AI literacy as national competitiveness infrastructure. When Microsoft’s earnings narrative highlights adoption momentum, it is also signalling where global work is heading. Reuters recently reported Microsoft’s claim of growing Copilot usage, reinforcing that AI capability is being positioned as a standard layer of productivity in organisations.


CLOSING TAKEAWAY

LinkedIn’s Skills on the Rise 2026 is not just a list. It is a warning flare. AI literacy is becoming a baseline capability, and baseline capabilities have a habit of turning into gatekeepers. South Africa cannot afford a future where AI becomes one more quiet filter that sorts people into “opportunity” and “no opportunity”. The response is not panic, and it is not hype. It is practical action: access, training that builds judgement, clear workplace norms, and education that treats AI as something to be understood, not merely used. The future of work will reward those who can work with AI, but our duty is to make sure that future is fair.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
