
Stop annoying your customers with "AI"

How lazy chatbots, bulk WhatsApps and misdirected messages are quietly destroying customer trust faster than they save money.







We have reached a strange moment in marketing. Brands talk about “personalisation at scale” while sending messages that feel less personal than ever. My phone regularly lights up with WhatsApps that start with “Hi Johan” and then continue with the same generic paragraph sent to thousands of people. Banks manage to get my name right, but address me as if I were a woman. Chatbots pretend to have a conversation, but clearly have no idea who I am or what I actually need. None of this is harmless. In a country where trust in institutions is already fragile, careless use of AI does not just fail to impress customers; it actively irritates them and pushes them away.


CONTEXT AND BACKGROUND

For years, marketers have chased efficiency: more reach, more impressions, more messages, all at a lower cost per click. Automation promised to remove drudgery and free people up to think creatively. Now AI is being sold as the next step: smarter targeting, better timing, natural language at scale. On paper, this sounds ideal. In practice, many organisations are using AI to do exactly what they did before, just faster and cheaper. The same old spray-and-pray thinking now has a shinier interface.


South Africa’s context makes this more serious. We live with deep inequality, data breaches, and a long history of people feeling exploited rather than served. When a faceless system gets your basic details wrong, or hounds you with irrelevant offers, it reinforces a sense that you are just another data point to be squeezed. For families worrying about safety, jobs and the future of their children, relentless AI-driven marketing noise feels less like innovation and more like harassment.


INSIGHT AND ANALYSIS

The problem is not the technology itself. AI can be a powerful tool for understanding customers and improving service. The real issue is that many brands are trying to automate their way out of bad strategy and broken data. If your customer records are messy, your segments are vague, and your prompts are superficial, your AI will simply amplify those weaknesses. You end up with plastic messages that sound “clever” but land badly.


Respectful AI marketing starts long before the chatbot script is written. It requires a clear view of who your customers are, what they have actually done with you, and what consent they have given you to contact them. It also demands thoughtful design of the journey before and after a purchase: how you follow up, when you make an offer, how you ask for feedback, and when you simply leave people alone. AI should support this design, not replace it. Used well, it can help you craft messages that are timely, relevant and honest. Used lazily, it becomes spam at industrial scale.


There is also a deeper psychological dimension. Good marketing makes people feel understood and respected. Bad AI marketing does the opposite. When a system gets your gender wrong, or pushes products that clearly do not fit your life, it signals that the brand has not bothered to get to know you. Over time, this erodes loyalty far more than any single bad campaign. For young people growing up in a world of constant digital noise, it also shapes their expectations of how technology treats them: as whole human beings, or as targets.


IMPLICATIONS

For business leaders, the first step is to slow down before you automate. Fix your foundations. Make sure you have accurate names, contact details, basic preferences and a record of past interactions. Review your consent practices: do people know what they are signing up for, and can they easily opt out? Then, design a small number of clear, respectful journeys: a welcome sequence, a follow-up after purchase, a gentle re-engagement path. Only once this is in place should you ask AI to help you generate and refine messages.
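To make this concrete, here is a rough sketch, in Python, of the kind of "fit to contact" check a team might run before any automated message goes out. The field names, quiet period and sample records are purely illustrative and not tied to any particular CRM or messaging platform; the point is simply that clean data and consent are verified before the AI is allowed near the customer.

```python
# Illustrative sketch only: a minimal "fit to contact" gate run before
# any automated campaign touches a record. Field names (first_name, phone,
# consented, opted_out, last_purchase) are hypothetical, not from a real CRM.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Contact:
    first_name: str
    phone: str
    consented: bool                      # explicit opt-in recorded
    opted_out: bool                      # has asked to be left alone
    last_purchase: Optional[date] = None

def fit_to_contact(c: Contact, quiet_days: int = 30) -> bool:
    """Return True only if the record is clean, consented and not owed a quiet period."""
    if not c.first_name.strip() or not c.phone.strip():
        return False                     # broken data: fix it, do not message it
    if not c.consented or c.opted_out:
        return False                     # no consent means no message, full stop
    if c.last_purchase and (date.today() - c.last_purchase) < timedelta(days=quiet_days):
        return False                     # recent purchase: leave the customer alone for a while
    return True

contacts = [
    Contact("Johan", "+27 82 000 0000", consented=True, opted_out=False),
    Contact("", "+27 83 000 0000", consented=True, opted_out=False),        # missing name
    Contact("Thandi", "+27 84 000 0000", consented=False, opted_out=False), # no opt-in
]
eligible = [c for c in contacts if fit_to_contact(c)]
print(f"{len(eligible)} of {len(contacts)} records are fit to contact")
```

The details matter far less than the order of operations: data quality and consent are settled first, and only then is automation allowed to generate or send anything.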


Marketers need to measure more than clicks and open rates. Unsubscribes, complaints and silent disengagement are signals that something is wrong. If people feel hunted rather than helped, your brand is spending its social capital for short-term gains. In South Africa, where trust is a scarce resource and many households are under pressure, brands that show restraint and genuine care will stand out. They will be the ones parents recommend to their children, not the ones they warn them about.
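For the same reason, a campaign report should put trust signals next to the click numbers. The sketch below uses made-up thresholds and figures to show the idea: a campaign can "perform" on clicks while quietly burning goodwill, and that should be visible on the same page.

```python
# Illustrative sketch only: thresholds and figures are hypothetical.
def campaign_health(sent: int, clicks: int, unsubscribes: int, complaints: int) -> dict:
    rate = lambda n: n / sent if sent else 0.0
    report = {
        "click_rate": rate(clicks),
        "unsubscribe_rate": rate(unsubscribes),
        "complaint_rate": rate(complaints),
    }
    # A campaign can look successful on clicks while eroding trust.
    report["trust_warning"] = (
        report["unsubscribe_rate"] > 0.005 or report["complaint_rate"] > 0.001
    )
    return report

print(campaign_health(sent=50_000, clicks=1_200, unsubscribes=400, complaints=75))
# click rate 2.4%, unsubscribe rate 0.8%, complaint rate 0.15%: trust_warning is True
```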


CLOSING TAKEAWAY

AI should enable better conversations, not louder noise. If your use of chatbots, bulk messages and automated campaigns irritates customers, it is not an AI problem; it is a respect problem. The future of marketing is not about flooding people with perfectly optimised prompts, but about building relationships in which technology quietly supports human insight and empathy. In an age where our children will grow up surrounded by intelligent systems, we have a responsibility to show that these tools can be used with dignity and care. If we stop annoying our customers with AI and start serving them thoughtfully, we may yet rebuild the trust that careless automation has squandered.

Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net