Geopolitics just entered your AI stack

A wave of low-cost Chinese models will force business leaders to confront data sovereignty, compliance, and reputational risk.

A year after DeepSeek rattled assumptions about the cost of building capable AI, Reuters is reporting that a fresh wave of low-cost Chinese models is on the way. For business leaders, the headline is not “cheaper AI”. The real story is that geopolitics has become a design constraint in your technology stack. Model choice is no longer just a technical benchmark exercise or a procurement negotiation. It is an exposure decision that touches data residency, regulatory compliance, security posture, and public trust. The more these models improve and spread, the more leaders will need a sober framework for when and how they can be used without creating avoidable risk.


CONTEXT AND BACKGROUND

Low-cost models change the adoption curve. When capability is affordable, experimentation becomes widespread, and it becomes easier for teams to introduce tools without central oversight. That is exactly why this moment is so sensitive: the speed of adoption can outpace governance.


Across Europe, the debate about “digital sovereignty” is heating up, with policymakers and industry arguing about dependence on foreign infrastructure and platforms. The Financial Times recently covered Google warning the EU against “erecting walls” as Brussels prepares further sovereignty initiatives. This is not an abstract policy conversation. It is the context in which many organisations will decide where their data is processed, which providers are acceptable, and what cross-border controls are required.


There is also a growing acceptance that Chinese models pose a dual challenge: they are a potential risk, but avoiding them entirely may also be a strategic mistake in a world where costs and supply chains are shifting. The Economist captured this tension directly, arguing that Chinese AI presents risk for Europe, but that shunning it carries risks too.


INSIGHT AND ANALYSIS

The first leadership lesson is simple: model origin now matters. Not because every foreign model is automatically unsafe, but because regulated organisations must be able to answer basic questions: Where does our data go? Who can access it? Under which laws can it be compelled? How are updates governed, and can behaviour change without warning?


The second lesson is that “cheap” can become expensive fast. Lower inference costs tempt teams to push more sensitive workflows into AI: customer interactions, HR screening, contract review, procurement decisions. The problem is that those workflows are precisely where compliance, confidentiality, and explainability matter most. If you cannot prove where the data was processed, how the outputs were generated, and what controls were applied, the problem is not technical at all. It is a governance problem.


The third lesson is that this is not just a US-versus-China story. Many countries are now pushing “sovereign AI” approaches, building infrastructure and rules to keep sensitive data within their borders or trusted jurisdictions. Axios recently reported on Pakistan’s move towards sovereign AI as part of a wider global trend. That trend signals what businesses should expect next: more localisation requirements, more sector-specific restrictions, and more scrutiny of providers’ ownership, hosting, and update practices.


IMPLICATIONS

For CEOs and boards, treat AI procurement as a strategic risk decision. The right question is not “Which model is best?” but “Which model is appropriate for which data and which process?” You need a tiering approach: low-risk use cases (drafting internal summaries), medium-risk (marketing content), high-risk (finance, HR, health, regulated client decisions). Not every workflow deserves the same model, the same hosting, or the same controls.
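

For technical teams asked to operationalise this, a tiering policy can be as simple as a lookup table that every AI workload must pass through before a model is chosen. The short Python sketch below is purely illustrative: the tier names, hosting labels, and controls are assumptions for the sake of the example, not a standard or any vendor’s API.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"        # e.g. drafting internal summaries
        MEDIUM = "medium"  # e.g. marketing content
        HIGH = "high"      # e.g. finance, HR, health, regulated client decisions

    @dataclass(frozen=True)
    class Controls:
        allowed_hosting: tuple[str, ...]  # where inference may run
        audit_log_required: bool          # must prompts and outputs be logged?
        human_review_required: bool       # must a person sign off on outputs?

    # Hypothetical policy table: hosting labels and controls are illustrative.
    POLICY = {
        RiskTier.LOW:    Controls(("any-cloud",), False, False),
        RiskTier.MEDIUM: Controls(("eu-region", "on-prem"), True, False),
        RiskTier.HIGH:   Controls(("on-prem",), True, True),
    }

    def controls_for(tier: RiskTier) -> Controls:
        """Return the minimum controls a workload must meet for its risk tier."""
        return POLICY[tier]

    print(controls_for(RiskTier.HIGH))

The point is not the code but the discipline it encodes: every use case gets a tier, and every tier carries non-negotiable controls on hosting, logging, and human oversight.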


For Chief Information Security Officers and compliance leaders, insist on auditability. If you cannot get reliable logs, clear data handling commitments, and enforceable contractual protections, then the model is not enterprise-ready for sensitive work, regardless of its benchmark scores. This is where reputational risk lives: the public backlash rarely comes from the model being “foreign”. It comes from secrecy, poor disclosure, and avoidable harm.


For policymakers and industry bodies, the pragmatic path is to raise the floor: shared standards for documentation, provenance, evaluation, and incident response. Project Syndicate recently argued that Europe cannot avoid an AI reckoning, pointing to the strategic pressures created by both US and Chinese advances.


CLOSING TAKEAWAY

Low-cost Chinese models will widen access to AI capability, and that is a genuine opportunity. But it also cements a new reality: geopolitics is now embedded in everyday technology decisions. Leaders who treat model selection as a simple cost-performance trade-off will be surprised by compliance friction, data sovereignty demands, and reputational blowback. The better path is calm and structured: classify use cases, protect sensitive data, demand audit trails, and build a multi-model strategy that balances capability, cost, and trust. In 2026, “Which AI are we running?” has become a leadership question, not a technical one.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
