If AI finishes your sentences, who is really speaking?
- Johan Steyn

The more we rely on artificial intelligence to shape our language, the more urgent the question of what remains uniquely ours becomes.

Audio summary: https://youtu.be/X2KR--iHlws
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
One of the most interesting questions in the AI debate is no longer whether machines can write, but what happens to us when they do more and more of the writing for us. If a tool keeps suggesting the next phrase, the smoother sentence, or the safer paragraph, we may become more efficient, but also less distinct. That matters far beyond authors and journalists. It matters in schools, in universities, in business, and in the everyday emails and reports that shape modern work. Writing is not only a way of communicating finished thoughts; it is often the process through which thought is formed. If AI begins to take over that process too easily, then the real risk is not simply bland content. It is that people slowly lose confidence in their own voice.
CONTEXT AND BACKGROUND
Recent commentary has sharpened this concern. An article in The Conversation raises the issue directly: if AI keeps finishing our sentences, our writing may start drifting towards a shared, machine-shaped average rather than reflecting the quirks and depth of individual expression.
This is not happening in isolation. Nature recently highlighted growing concern that AI can “same-ify” human expression, nudging people towards standardised language and even influencing how they think. That phrase is striking because it captures something many of us already sense. Much AI-assisted writing is polished, but often strangely interchangeable.
The education sector is becoming one of the first places where this tension is impossible to ignore. A letter in The Guardian argued this month that AI has exposed long-standing weaknesses in university coursework, especially where polished output is rewarded more than visible thinking. In South Africa and across Africa, where educational systems are already under strain, this question matters deeply.
INSIGHT AND ANALYSIS
The real danger is not that everyone will suddenly become lazy or dishonest. It is subtler than that. AI can make average writing look competent. It can remove friction, uncertainty and struggle. But that struggle is often where genuine thinking happens. A sentence rewritten three times by a human may reveal hesitation, personality and care. A sentence completed instantly by AI may be cleaner, but not necessarily wiser.
That is why this issue reaches beyond authorship into human development. Axios recently reported on the return of “blue-book” exams on some campuses as institutions try to force more original thinking in the AI era. Whether or not that is the right response, it shows that educators are beginning to recognise that writing is not just a product. It is evidence of mental effort, synthesis and judgement.
There is also a workplace implication. In business, AI-generated fluency can create the illusion of competence. A report may sound strong while hiding weak reasoning. A proposal may appear polished while lacking real insight. That matters in leadership, governance and decision-making. If we reward presentation over thought, AI will make it easier to confuse the two.
A useful local signal comes from UCT, where a survey on AI and assessment pointed to how rapidly teaching and evaluation are being reshaped by these tools. The challenge is not simply to police AI, but to rethink what we value.
IMPLICATIONS
For schools and universities, this means designing assessments that reward process, reflection and originality, not only a polished final answer. For parents, it means helping children understand that tools can assist expression, but should not replace the hard work of forming ideas. For business leaders, it means not mistaking AI polish for strategic judgement.
Writers, professionals and students will need new habits. Draft first, then use AI. Question suggestions instead of accepting them automatically. Keep some room for roughness, because roughness is often where authenticity lives. Technology should support human voice, not quietly standardise it.
CLOSING TAKEAWAY
If AI finishes your sentences, the deeper question is not whether the sentence is good. It is whether it is still yours. We should absolutely use better tools, but we should be careful not to outsource the very struggle through which clarity, originality and conviction are built. In a world rushing towards seamless machine assistance, human voice may become more valuable, not less. The task now is to protect it deliberately, especially for our children, because the future will need more than fluent text. It will need people who still know how to think, question and sound like themselves.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net