From flint to code: when our tools start making tools
- Johan Steyn

- Dec 18, 2025
- 4 min read
For the first time in history, our tools can design and deploy new tools with minimal human input.

Audio summary: https://youtu.be/zKBdiw4QZ2E
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
I write about issues I believe deserve the reader’s attention. While my main work is in Artificial Intelligence and technology, I also cover politics, education, and the future of our children.
From the first stone tools and bronze blades to steam engines, factories and smartphones, our tools have always extended our muscles, our memory and our reach. But they were still clearly separate from us: we designed them, we operated them, and they remained, in the end, dumb objects. The new generation of AI systems feels fundamentally different. Large language models and so-called “agentic” tools can now write code, chain software together, test and refine their own outputs, and in some cases deploy working systems with limited human supervision.
Predictions that these tools will become “a thousand times more powerful” in the coming years may be speculative, but the direction of travel is obvious. For the first time, the tools we have built are beginning to participate in the design and deployment of future tools.
CONTEXT AND BACKGROUND
If you look back across history, each technological leap changed who had power and what kind of work humans did. Fire and flint altered survival. The plough reshaped societies around agriculture. The first Industrial Revolution moved labour from fields to factories. In the twentieth century, automation and computing changed white-collar work just as radically as machines had changed manual labour. Yet in each case, humans still authored the blueprints and rules.
Machines built things, but they did not decide what should be built.
Software began to blur that line. Factories full of robots already build other machines, and code has long generated other code: compilers, templating engines and build scripts. But the logic was still explicitly programmed: carefully specified if-then rules, written by engineers, following human designs. What has shifted with modern AI is that we now have systems that can ingest natural-language instructions, interpret messy real-world requirements, and then generate their own plans, code and documentation. They are, in a limited but very real sense, “tool-making tools”.
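To make the contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical: route_invoice, plan_with_model and fake_model_call are made-up names, not any vendor’s API, and the placeholder model call exists only so the example runs.

# The old pattern: behaviour is fixed in advance by a human engineer.
def route_invoice(amount: float) -> str:
    # Explicit if-then rules, written and fully understood by a person.
    if amount > 10_000:
        return "escalate to finance manager"
    return "auto-approve"

# The new pattern: the system is handed a goal in natural language and
# produces its own plan (and often its own code) to meet it.
def plan_with_model(goal: str) -> list[str]:
    prompt = f"Break this business goal into executable steps: {goal}"
    return fake_model_call(prompt)

def fake_model_call(prompt: str) -> list[str]:
    # Placeholder so the sketch runs; a real model's answer is not
    # predictable in advance, which is exactly the governance problem.
    return ["read invoice data", "draft approval rules",
            "generate code", "test and deploy"]

if __name__ == "__main__":
    print(route_invoice(12_500))
    print(plan_with_model("automate invoice approvals"))

The first function is fully auditable; the second delegates the design itself to a model, and that delegation is the shift this piece is about.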
INSIGHT AND ANALYSIS
This matters because design has traditionally been where humans retained the upper hand. Even as machines took over physical labour and routine office tasks, we reassured ourselves that humans would still be needed for problem framing, process design and creative synthesis. But if AI systems can propose architectures, write entire software modules, integrate them into existing systems and troubleshoot errors, some of that design work is now in play. When AI agents can call other tools, trigger workflows, send emails, generate reports and update dashboards, they are no longer just answering questions; they are acting in the world.
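The pattern behind this “acting in the world” is a simple loop, sketched below with invented names. In a real agent the model chooses each action; here choose_next_action is stubbed so the sketch is self-contained, but the shape is the point: a goal goes in, and the system decides which tools to call and in what order.

# A minimal, hypothetical agent loop; TOOLS and choose_next_action
# are stand-ins, not any real framework's interface.
TOOLS = {
    "send_email": lambda args: f"email sent to {args['to']}",
    "update_dashboard": lambda args: f"dashboard '{args['name']}' updated",
}

def choose_next_action(history):
    # Placeholder for a model call returning (tool_name, arguments),
    # or None when the agent decides the goal is met.
    if len(history) == 1:
        return ("send_email", {"to": "ops@example.com"})
    if len(history) == 2:
        return ("update_dashboard", {"name": "weekly report"})
    return None

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = choose_next_action(history)
        if action is None:
            break
        name, args = action
        result = TOOLS[name](args)  # the agent acts in the world
        history.append(f"{name} -> {result}")
    print("\n".join(history))

run_agent("publish the weekly operations report")

Nothing in the loop constrains what the model asks for, which is why oversight of the tool list and of each step matters more than oversight of any single answer.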
The headline claim that models like ChatGPT will soon be “a thousand times more powerful” is less important than how they will be used. We are already seeing early versions embedded in office suites, customer service platforms, coding environments and low-code tools. In a South African business context, that could mean AI systems that automatically design internal processes, draft compliance frameworks or reconfigure call-centre scripts without a human writing every line. Used well, this could reduce drudgery and free people for higher-value work. Used carelessly, it could lock organisations into opaque systems that nobody fully understands.
For children and students, the shift is even more profound. We are moving from tools that simply provide answers to tools that can build personalised learning aids, quizzes, simulations and even small apps on demand. A learner in Johannesburg might soon ask for “a maths game about fractions in Zulu” and receive a custom tool created on the fly by an AI system. That is exciting, but it also raises questions: whose values shape the tools generated, how is the data used, and what happens when the system quietly designs learning experiences that maximise engagement rather than understanding?
IMPLICATIONS
Policymakers need to recognise that we are entering an era where “governing the tools” also means governing the tools that make other tools. Regulations focused only on individual applications will be too slow and too narrow. We need clear expectations around transparency, auditability and human oversight when AI systems are allowed to design or deploy new processes, software or policies. Critical sectors like finance, healthcare and education should not be able to outsource responsibility to an amorphous “AI assistant”.
Business leaders will have to rethink what skills they value. It is no longer enough to say that employees must “learn to use AI”. They will need people who can specify problems clearly, supervise automated systems, question outputs, and understand when a tool-making tool is drifting into dangerous territory. In South Africa, where skills and employment are already under pressure, this is both a threat and an opportunity: we can leapfrog some legacy systems, but only if we invest in real digital literacy, not just in licences for the latest platform.
Parents and educators, finally, must help children navigate a world in which the tools they use are constantly changing themselves. The key skill will not be memorising commands, but learning to ask good questions, understand limitations, and notice when a system is nudging them in subtle ways. We should be honest with young people that these tools are powerful, fallible and shaped by commercial incentives.
CLOSING TAKEAWAY
We have reached a strange moment in our long relationship with technology. For the first time, the tools we have built are beginning to design, assemble and deploy new tools on our behalf. The real risk is not that they suddenly “turn on us” in a science-fiction sense, but that we slip quietly into dependence on systems whose inner workings and emerging behaviours we do not fully understand.
Whether these tool-making tools become partners in human flourishing or engines of deeper inequality will depend on the choices we make now: how we regulate them, how we teach with and about them, and how seriously we take our responsibility to remain, as far as possible, the authors of our own future.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net