AI is just a tool – but you will sit in the dock
- Johan Steyn
Even as AI becomes more powerful and autonomous, the law still sees it as a mere instrument – leaving human users and business owners holding all the liability.

Audio summary: https://youtu.be/xkwkXuxmmt8
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
I write about issues I believe deserve readers’ attention. While my main work is in Artificial Intelligence and technology, I also cover politics, education, and the future of our children.
Public debate often treats Artificial Intelligence as a new life form, poised to slip beyond human control. It is easy to imagine a future where we simply blame an algorithm when things go wrong. The law sees it differently. Courts do not interrogate code; they look for people and companies with duties. As entrepreneurs and professionals in South Africa rush to use AI for marketing, documents, advice and automation, a simple question should guide every experiment: if this goes wrong, who is accountable?
CONTEXT AND BACKGROUND
Behind the hype, the basic legal rule is unchanged: responsibility follows the person or organisation that chooses and controls the tool. Our legal system still treats AI like any other instrument, whether a vehicle, spreadsheet or drone. If you use it carelessly and cause harm, you cannot hide behind the technology. Existing rules on negligence, consumer protection and intellectual property already give regulators enough leverage to hold people to account.
INSIGHT AND ANALYSIS
The mythology of the “rogue AI” is convenient because it lets us dodge hard choices. When people say “the algorithm decided”, they are really saying “I delegated my judgment”. Delegation does not remove accountability. If a chatbot gives reckless financial guidance that you pass to a client, it is your professional duty that will be tested. If an AI-generated image copies a protected style or logo and you use it to market your business, you are the one stepping into an intellectual property dispute.
Trust also suffers when AI work is presented as entirely human. Clients notice when reports arrive wrapped in grandiose, over-formal language that no one would use in conversation. Beyond style, AI tools can misquote the law, invent sources and produce confident nonsense. When that nonsense appears under your name, the damage is yours to manage. At the same time, AI-fuelled fraud – from cloned executive voices to convincing deepfakes – is turning ordinary employees into targets. The systems enable the scam, but controls, education and vigilance inside organisations decide whether the money actually moves.
IMPLICATIONS
For policymakers, the task is less about inventing new legal worlds and more about clarifying how existing duties apply in an AI-saturated economy. Clear guidance for small and medium enterprises, concrete examples of common risks and proportionate penalties for abuse would help demystify the landscape without suffocating innovation. The worst outcome would be to create the illusion that the machine, not the human, now carries the blame.
For business leaders, the priority should be governance, not gadgets. Before adopting an AI tool, they should ask: what problem are we solving, what could realistically go wrong, and who is accountable if it does? That means reading terms of use, setting basic rules about prompts and outputs, checking for intellectual property and privacy pitfalls, and training staff to question rather than blindly trust the screen. AI should buy back time by automating drudgery, not smuggle unexamined risks into the heart of the operation.
CLOSING TAKEAWAY
AI will not stand up in court to explain itself. For the foreseeable future, it will remain a remarkably capable but utterly unaccountable tool. The real question is not whether the machines are becoming more intelligent, but whether we are becoming more responsible. If we want the benefits of automation in South Africa and across Africa – from better services to new jobs and opportunities – we must pair it with legal awareness, ethical habits and institutional courage. Our children will inherit these systems; we owe it to them to model a culture where technology is embraced, but responsibility is never outsourced.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net