
When coffee machines start making decisions

The real difference between automation and AI is not about speed or efficiency, but about when the machine starts making decisions and creating novelty on our behalf.







One of the clearest ways I have found to explain the difference between automation and artificial intelligence comes from the work of the historian Yuval Noah Harari. He often uses everyday examples to show why AI is not just “better automation”, but something qualitatively different. I have adapted one of his ideas into a story about coffee machines, and I now use it regularly in my talks and training sessions.


I ask people to imagine a large office building with 15 000 employees and 300 networked coffee machines. At first, it sounds like a simple facilities-management example. But as we push it further, it becomes a way to understand a profound shift: from machines that simply follow instructions to systems that learn, decide and even surprise us. That, in essence, is the jump from automation to AI.


CONTEXT AND BACKGROUND

For most of the industrial age, technology has been about automation. Machines extended our muscles and our routines. A traditional automated coffee machine is already impressive: you press the cappuccino button and it dispenses a cappuccino with perfect consistency, because a technician has set the water temperature, the grind and the timing. It might keep count of how many cups it has made and flash a warning light when it needs servicing. But it never wakes up with a new idea. It remains a powerful but obedient tool.


AI changes that picture. Modern systems ingest vast amounts of data, learn patterns and make predictions without a human specifying every rule. In our imaginary building, the 300 coffee machines are all connected. They know when they are running low on coffee beans or milk and, together, automatically place orders with the approved suppliers. They can predict when a particular unit is likely to fail and book a service call before it breaks. All of this already goes beyond basic automation. Yet the real difference appears when the machines begin to experiment and make suggestions we did not explicitly request.
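For readers who like to see the distinction concretely, the contrast can be sketched in a few lines of code. This is a toy illustration only: the thresholds, usage figures and function names are my own assumptions, not a real facilities system. The "automation" function applies a fixed rule a technician hard-coded; the "AI-flavoured" version estimates demand from observed behaviour and decides for itself when to reorder.

```python
def automated_reorder(bean_level_g: float) -> bool:
    """Automation: a fixed rule a technician set once. The machine
    never looks at behaviour; it only compares against a threshold."""
    return bean_level_g < 500  # hard-coded: reorder below 500 g


def learned_daily_usage(history_g: list[float]) -> float:
    """Learning, in its simplest form: estimate tomorrow's usage
    from the pattern observed so far (here, a plain average)."""
    return sum(history_g) / len(history_g)


def predictive_reorder(bean_level_g: float,
                       history_g: list[float],
                       lead_time_days: int = 2) -> bool:
    """AI-flavoured: reorder if the learned usage pattern predicts
    the stock will run out before a new delivery can arrive."""
    predicted_usage = learned_daily_usage(history_g) * lead_time_days
    return bean_level_g < predicted_usage


# Observed grams of beans used per day on one machine (illustrative).
usage_history = [420, 380, 450, 400, 390]

print(automated_reorder(600))                  # False: above the fixed threshold
print(predictive_reorder(600, usage_history))  # True: the learned pattern
                                               # predicts a shortfall
```

The two functions reach opposite conclusions from the same stock level, which is exactly the point: the decision has moved from a rule a human wrote to a pattern the machine inferred.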


INSIGHT AND ANALYSIS

Over time, the AI-enabled coffee machines are not just tracking stock levels and maintenance cycles; they are also learning about people. They notice that on Monday mornings, a particular employee always chooses a stronger coffee and adds extra sugar. They see that on hot days, more iced drinks are sold on certain floors. Now imagine that, based on these patterns, the machine creates a new flavour profile it believes you will enjoy and simply serves it to you without asking. There is a good chance you will love it, because it has quietly learned your behaviour.


That simple move – generating novelty and acting on your behalf – captures the essence of AI in a way that Harari describes so well. Automation waits for you to press a button from a fixed menu. AI quietly redraws the menu, tests options and nudges you towards them. It still operates within the goals we gave it (dispense coffee, keep people happy, minimise waste), but the middle layer of decision-making has moved from humans to machines. We see the same pattern in recommendation systems, navigation apps and credit scoring. No human editor chooses each video on your feed or each route on your GPS; an algorithm does, based on learned patterns.


In a South African context, this matters because those algorithmic decisions increasingly influence real opportunities: which adverts and loan offers you see, how call-centre scripts adapt to your mood, which CVs rise to the top of a digital shortlist. As AI systems make thousands of micro-decisions every second, our role shifts from direct decision-maker to supervisor – assuming we retain any real oversight at all.


IMPLICATIONS

For business leaders, the coffee machine story is both a promise and a warning. On the positive side, AI systems that can anticipate needs, optimise stock, schedule maintenance and personalise services can unlock enormous value. They reduce waste, improve customer experience and free people from repetitive work. On the negative side, if we do not understand how those systems are learning and deciding, we risk handing them too much authority. A coffee experiment is harmless. An automated decision about a customer’s creditworthiness, a medical scheme claim or a worker’s performance is not.


Policymakers and regulators need to recognise that we are no longer dealing only with dumb automation. When AI systems can create and decide, we must ask new questions about transparency, accountability and consent. Did the user agree to be experimented on? Can a person challenge an automated decision that affects their life? Do we know which data was used and how the model arrived at its conclusion?


In a country like South Africa, where trust in institutions is fragile and inequality is deep, invisible systems making consequential choices without clear responsibility lines are a serious concern.


Parents and educators, finally, have a particular responsibility. Our children will grow up surrounded by AI “coffee machines” of every kind: apps, games, tutoring systems and social platforms that learn their behaviour and make suggestions without asking. We must teach them not only how to use these tools, but how to question them. Why is this being recommended to me? Who benefits from this choice? Is this convenient or is it slowly shaping what I like, believe and become?


CLOSING TAKEAWAY

A network of smart coffee machines might sound like a playful thought experiment, but it captures the heart of the AI transition in a way that Yuval Noah Harari has helped many of us to see. Automation was about machines doing what we told them to do, faster and at scale. AI is about machines beginning to choose, to create and to experiment on our behalf, inside the goals we set. That shift is subtle in daily life, but profound in its implications for business, politics and the future of our children. The question is no longer whether we will use AI – that decision has already been made.


The real question is whether we are willing to remain active authors of the goals, limits and values that guide these decision-making machines, rather than passive drinkers of whatever they decide to serve us next.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
