
AI in orbit: whose laws apply when the server is in space?

As compute leaves Earth, jurisdiction becomes the next battleground for sovereignty, accountability, and trust.






The idea of running AI in space is quickly moving from science fiction to a boardroom slide deck. We are now seeing serious public discussion about orbiting “data centres” powered by near-constant solar energy and cooled by radiating heat into space, rather than drawing on the scarce water and strained grids of terrestrial facilities. But the most important question is not engineering. It is governance. If your data is processed above the atmosphere, whose laws apply, who can compel access, and who is accountable when things go wrong? For countries like South Africa, this is not abstract. Jurisdiction shapes security, privacy, competitiveness, and whether we become users of someone else’s infrastructure or builders of our own future.


CONTEXT AND BACKGROUND

The “data centre in space” idea is no longer fringe. Google has publicly discussed plans under Project Suncatcher to put AI data centres in orbit, explicitly framed as a response to the energy and environmental limits of terrestrial data centres.


Start-ups are also pushing hard. In late 2025, Starcloud reported running an AI model in space, a small but meaningful milestone in demonstrating that modern accelerators can operate beyond Earth. And mainstream outlets are now treating the category seriously, asking whether shifting computing off-planet could reduce energy and water pressure on Earth-based infrastructure.


At the same time, the legal frameworks governing space were not designed for orbiting server rooms. Space law evolved around states, launches, liability, and shared use. It was not written for commercial clouds, AI governance, data protection, or cross-border law enforcement in orbit. That mismatch is where jurisdictional confusion will grow.


INSIGHT AND ANALYSIS

When people ask, “whose laws apply?”, they often assume there must be a single answer. In reality, space-based computing creates overlapping claims.

First, there is the operator and its home jurisdiction. If a company is headquartered in one country, regulators and courts there may assert control over the company and, by extension, over systems it operates or has the ability to access, even if the hardware is not physically in that country.

Second, there is the ground segment. Data is not telepathic. It moves through ground stations, spectrum licences, and terrestrial networks, each governed by national rules and contractual obligations.

Third, there is the customer and the data subject. If a South African hospital processes patient records via an orbital service, South African law, ethics, and constitutional expectations do not vanish because the computation happened in low Earth orbit.


Then there is the space layer itself. Much of the current international regime, anchored in the 1967 Outer Space Treaty, rests on broad principles and on state responsibility for national space activities, including those of private actors. That creates a practical question: if a private operator hosts AI workloads that harm users or enable surveillance abuse, which state is responsible, and what enforcement is realistic?


This is why some legal commentators are already warning about a widening jurisdictional gap as ambition outpaces law. JURIST recently described the risk of a “jurisdictional grey zone” emerging as advanced technologies move into space faster than governance can keep up.


And the trend line is clear. Legal and regulatory analysts tracking the commercial space sector have highlighted how fast national frameworks, spectrum issues, and space diplomacy are evolving, precisely because governments know strategic advantage now sits in infrastructure and chokepoints.


IMPLICATIONS

For policymakers, the key takeaway is that “data sovereignty” needs an update. It cannot only mean where data is stored. It must include who controls access, which courts can compel action, and what technical assurance exists when the infrastructure is physically unreachable. Governments will need clearer positions on licensing, spectrum, lawful access, auditability, and minimum security standards for any orbital compute service used by critical sectors.


For business leaders, the risk is not just compliance; it is dependency. If AI becomes a utility delivered through orbital platforms controlled by a handful of firms, then pricing, service continuity, and policy shifts elsewhere become direct operational risks. The governance conversation must move from legal boilerplate to board-level scenario planning: outage response, jurisdiction conflict, data classification, encryption, and exit strategies.


For citizens and parents, the stakes include trust. If space-based AI enables more persistent sensing and more scalable processing, we need stronger guarantees about consent, misuse, and redress. Otherwise, “innovation” will feel like extraction.


CLOSING TAKEAWAY

Space as the next data centre might be technically plausible, but its legal consequences are already here. Once compute moves off-planet, jurisdiction becomes a strategic contest: between states, between regulators, and between citizens’ rights and corporate capability. South Africa should treat this as a sovereignty issue, not a novelty. We need practical rules for procurement, critical infrastructure, and accountability that anticipate orbital services, not chase them. If we do not ask “whose laws apply?” now, we will be forced to accept someone else’s answer later.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
