The new literacy is telling machines what you mean
- Johan Steyn

In a world of AI assistants, clear instructions and critical evaluation are becoming everyday professional skills.

Audio summary: https://youtu.be/hvFeXNW_8VE
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
One of the most interesting things I’m seeing in boardrooms, training rooms, and casual conversations is that the barrier to using large language models is rarely “tech knowledge”. It’s expression. Many smart, experienced professionals struggle because they don’t know how to explain what they want, how to provide the right context, or how to judge whether the answer is actually fit for purpose.
We talk about an “AI skills gap”, but in practice, it often looks like an articulation gap. This article is about that new literacy: the ability to communicate intent clearly to machines, and then apply human judgement to verify, refine, and take responsibility for what comes back.
CONTEXT AND BACKGROUND
For decades, we treated software as something you had to learn in its own language: buttons, menus, workflows, and then, at the far end of the spectrum, code. Large language models change that. You can ask for outcomes in everyday language, and the system attempts to translate intent into drafts, plans, summaries, code, slides, or actions. This is why “language as interface” matters: it lowers the entry barrier, but it raises the requirement for clarity.
The workplace is already responding. A July 2025 Forbes piece on hiring trends framed AI literacy as a top skill employers increasingly value, precisely because the tools are moving into daily work, not just IT teams.
At the same time, education is being pulled into the debate. When students use AI as a shortcut rather than a thinking partner, they risk weakening the very skills they’ll need to supervise AI well. An October 2025 Guardian report captured learners’ concerns about AI eroding study habits and independent problem-solving.
INSIGHT AND ANALYSIS
We should stop calling this “prompting” as if it’s a party trick. The best prompts are simply good briefs. They state the goal, the audience, the constraints, the format, and what “good” looks like. In other words, they are structured thinking made visible. This is why people struggle: many jobs have never required staff to write clear requirements, define success criteria, or iterate on feedback in a disciplined way.
Microsoft’s own guidance for Copilot makes this explicit, breaking prompts down into practical parts such as goal, context, expectations, and source material. That is not “engineering”. That is basic communication, with higher stakes, because the system will confidently fill in the gaps you didn’t notice you had left.
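To make the idea concrete, here is a minimal sketch in Python of what such a brief could look like when written down as a reusable structure. The field names echo the goal / context / expectations / source breakdown described above, but the class, the function, and the churn-report scenario are purely illustrative, not any vendor’s tool or API.

from dataclasses import dataclass

@dataclass
class PromptBrief:
    """One prompt 'brief', mirroring the goal / context / expectations / source split."""
    goal: str          # the outcome you want, stated plainly
    context: str       # background the model cannot guess on its own
    expectations: str  # audience, tone, format, and what "good" looks like
    source: str        # the material the answer must be grounded in

    def render(self) -> str:
        """Assemble the parts into one explicit instruction."""
        return (
            f"Goal: {self.goal}\n"
            f"Context: {self.context}\n"
            f"Expectations: {self.expectations}\n"
            f"Source material:\n{self.source}"
        )

# A clear brief, versus a vague one-liner such as "write something about churn":
brief = PromptBrief(
    goal="Draft a one-page summary of the attached customer-churn analysis for the executive team.",
    context="Churn rose after a recent price change; the executives have not seen the underlying data.",
    expectations="Plain language, no jargon, three recommendations as bullet points, under 400 words.",
    source="(paste the churn report or its key figures here)",
)
print(brief.render())

The value is not in the code itself but in the discipline it forces: the four questions have to be answered before the model is asked, which is exactly what a good email brief or project scope already demands.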
A November 2025 article in PR Daily put it bluntly: prompt engineering is largely good communication, not mystical technical fluency. I agree. If you can write a strong email brief, a project scope, or a meeting agenda, you can usually learn to get value from these models quickly. If you cannot, the model will expose the gaps by producing generic output, incorrect assumptions, or overly confident nonsense.
This becomes even more important in multilingual environments like South Africa and the broader continent. Yes, modern models can work across many languages. But clarity is still clarity. The issue isn’t whether you prompt in English, isiZulu, Afrikaans, French, or Arabic. The issue is whether you can frame the task precisely, and whether your organisation has the discipline to verify outputs before they touch customers, contracts, hiring, or public services.
IMPLICATIONS
For business leaders, prompt literacy is not a “nice-to-have”. It is a productivity unlock and a risk control. Vague instructions produce wasted time, inconsistent outputs, and accidental policy breaches. Clear instructions, by contrast, make work faster and more reliable, and they create a shared language for quality across teams.
For educators and parents, the implication is uncomfortable but important: writing, reasoning, and critical thinking are becoming core technical skills again. If our children learn to outsource thinking rather than strengthen it, they will be less prepared for a world where the human job is judgement.
For organisations, the practical fix is surprisingly straightforward: treat prompt literacy like Excel literacy. Train it, standardise it, and make it role-based. A September 2025 AI literacy white paper emphasised durable skills like evaluating outputs, framing problems, and balancing human judgement with machine output. That is the skillset that will last beyond any single tool.
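By way of illustration only, “standardise it and make it role-based” can be as simple as a shared library of vetted briefs with an explicit verification step attached to each one. The sketch below is hypothetical throughout: the roles, the wording, and the review rules are examples, not drawn from any organisation or white paper mentioned here.

# Hypothetical, role-based prompt standards: each role gets a vetted template
# plus a stated verification step before the output is used anywhere real.
ROLE_BRIEFS = {
    "recruiter": (
        "Goal: summarise this CV against the attached job specification.\n"
        "Expectations: list evidence for and against fit; flag anything you could not verify.\n"
        "Verification: a human checks every factual claim before it reaches the hiring panel."
    ),
    "support_agent": (
        "Goal: draft a reply to this customer complaint.\n"
        "Expectations: empathetic tone, cite the relevant policy clause, promise nothing beyond policy.\n"
        "Verification: the agent confirms the policy reference before the reply is sent."
    ),
}

def standard_brief(role: str) -> str:
    """Return the approved brief for a role, and fail loudly if no standard exists yet."""
    if role not in ROLE_BRIEFS:
        raise KeyError(f"No approved prompt standard for role '{role}'")
    return ROLE_BRIEFS[role]

print(standard_brief("recruiter"))

Again, the point is the discipline rather than the tooling: the brief and the check on the output are written down once and reused, instead of being reinvented by each person at their desk.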
CLOSING TAKEAWAY
We are moving into a world where language is increasingly the interface to work. That doesn’t make everyone a programmer, but it does make communication a technical competency with real consequences. The people who thrive will not be those with the cleverest “prompt hacks”, but those who can explain what they mean, provide context, define what good looks like, and then verify the output with calm, professional judgement. If we want AI to lift productivity rather than degrade thinking, we should invest less in hype and more in literacy: the simple, powerful ability to say what we mean.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net


