Finding the balance: Why the EU’s AI Act matters – and why innovators are worried
- Johan Steyn

- Dec 13, 2025
- 4 min read
The world’s first comprehensive AI law is a necessary step towards responsible technology, but early signs suggest it may also be slowing European innovation.

Audio summary: https://youtu.be/hoXSViTqnYM
I write about various issues of interest to me that I want to bring to the reader’s attention. While my main work is in Artificial Intelligence and technology, I also cover areas around politics, education, and the future of our children.
The European Union’s Artificial Intelligence Act has been hailed as a historic achievement: the first serious attempt by any major bloc to put guardrails around a powerful, fast-moving technology. It offers a model that many other countries, including South Africa, are now studying closely. I believe it is, in many ways, a fantastic piece of legislation: risk-based, focused on fundamental rights, and ambitious about transparency and accountability.
Yet there is growing unease among European start-ups and investors, who warn that, in practice, the law is already chilling innovation. Surveys show that many young AI companies fear slower growth, higher costs and a competitive disadvantage against the United States and China. This tension between necessary regulation and dynamic innovation is not just a European debate; it is a question every country will have to confront.
CONTEXT AND BACKGROUND
The AI Act is built on a tiered approach. Low-risk applications, such as spam filters, face minimal requirements. High-risk uses, such as AI in medical devices, hiring or credit scoring, must comply with strict obligations on data quality, documentation, human oversight and robustness. Some uses, like social scoring by governments, are banned outright. The law came into force in 2024, with key obligations for general-purpose models and high-risk systems being phased in from 2025 onwards.
Crucially, the EU has never claimed that this is an “anti-innovation” project. On the contrary, Brussels insists that clear rules will create trust, legal certainty and a level playing field across the single market. Alongside the law, European leaders have announced very large funding commitments – running into hundreds of billions of euros – to support AI infrastructure, research and start-ups. The message is clear: Europe wants to be both the world’s regulator and a serious competitor in AI.
From South Africa’s perspective, the AI Act arrives at a moment when our own regulatory framework is still taking shape. AI remains largely unregulated here beyond existing laws like the Protection of Personal Information Act and sector-specific rules. Policy papers and discussion documents point towards dedicated AI legislation in future. It is almost inevitable that our lawmakers, and those elsewhere in Africa, will look to the EU for inspiration. The question is how to adapt those principles to our realities of inequality, capacity constraints and very different market dynamics.
INSIGHT AND ANALYSIS
The concern from innovators is not imaginary. Surveys of European AI start-ups suggest that around half expect the AI Act to slow innovation in Europe, and a significant minority have considered relocating parts of their development outside the EU or abandoning some AI projects entirely. Industry groups and leading start-ups have publicly called for delays or a pause in implementation, warning that complex rules, high compliance costs and the threat of heavy fines disproportionately hurt smaller companies.
At the same time, policymakers and independent experts caution against accepting the narrative that “regulation kills innovation” at face value. Europe was already lagging behind the US and China in AI investment and platform power long before the AI Act existed. Supporters argue that a human-centric, rights-based framework is essential for long-term, sustainable innovation, especially in sensitive areas like health, education and public services. They also point out that the law is being rolled out gradually, with codes of practice and guidance intended to help firms comply.
The real issue, I believe, is less about whether we regulate and more about how. Poorly designed rules can entrench Big Tech, because only the largest players can afford armies of lawyers and compliance teams. Smart regulation, on the other hand, combines clear obligations with regulatory sandboxes, guidance tailored for smaller firms, and support for experimentation. The EU is trying to move in that direction, but the jury is still out on whether practice will match the rhetoric.
IMPLICATIONS
For European policymakers, the lesson is to stay humble and responsive. The AI Act should not become a monument that cannot be touched; it must be treated as a living framework, adjusted as evidence emerges about its economic and social impact. That means listening carefully to start-ups, SMEs, researchers and civil society, not only to Big Tech lobbyists. It also means ensuring that enforcement is proportionate and predictable, so that innovators are not paralysed by uncertainty.
For countries like South Africa, the temptation will be either to copy-and-paste the EU model or to reject it as “over-regulation”. Both would be a mistake. We should learn from the EU’s principles – risk-based classification, transparency, human oversight, protection of fundamental rights – but design our own agile, sector-specific approaches. We also need to be realistic about state capacity: complex rules mean nothing if regulators lack the skills, funding or independence to enforce them fairly.
Business leaders, educators and public institutions cannot afford to sit back and wait for the perfect law. They should already be setting their own standards for responsible AI: impact assessments, clear governance structures, meaningful human oversight and honest communication with users. The organisations that thrive will be those that see governance as part of innovation, not an obstacle to it.
CLOSING TAKEAWAY
The EU’s AI Act is both a lighthouse and a warning. It shows that democratic societies can act to protect citizens in the face of powerful new technologies, but it also exposes how difficult it is to balance safety, rights and competitiveness.
For South Africa and many other countries, Europe’s experiment offers a valuable starting point rather than a finished blueprint. The real conversation we must have is not “regulation versus innovation”, but how to build smart, context-aware rules that allow responsible AI to flourish. If we get that balance right, we can harness AI for the benefit of our economies and, more importantly, for the future of our children.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net