
AI Is Now in the Room Where Laws Are Made — and Democracy Has Not Caught Up

The questions of legitimacy, accountability, and consent that AI-assisted lawmaking raises have no answers yet — and the clock is running









In October 2023, the city council of Porto Alegre in Brazil passed what is believed to be the first law in history written entirely by an artificial intelligence system. As the Washington Post reported, the councillors who voted for it did not know it was AI-generated. City Councillor Ramiro Rosário had entered a 49-word prompt into ChatGPT, received a fully drafted bill within seconds, and presented it to his 35 peers without making a single change or disclosing its origin. He later acknowledged that had he revealed its origin beforehand, the proposal would certainly never have reached a vote at all.


The council president called it a dangerous precedent. Global Voices, in its detailed account of the episode, noted that there were no legal obstacles preventing AI-written legislation under Brazilian law — meaning the law passed not because of an oversight in the rules, but because the rules had never imagined the question would need to be asked.


The signal from Porto Alegre has since grown considerably louder. AI tools are now active in legislative drafting processes across the United States, where federal agencies and state offices are adopting AI as a drafting partner for regulation and bill language. The reach extends well beyond American borders. According to Inter-Parliamentary Union data documented in the Journal of Politics, by 2024 twelve legislative chambers in eight countries plus the European Union reported some use of AI, spanning 71 tools that range from bill drafting and amendment systems to transcription and classification. Italy, Brazil, Estonia, Norway and New Zealand have all deployed AI tools in their legislative workflows. Laws that govern how people live, work, and are treated by the state are being shaped, in part, by machines — and the chain of democratic accountability that is supposed to connect those laws to the consent of the governed has developed a gap that no election, no court, and no constitution has yet learned to close.


CONTEXT AND BACKGROUND

The scale of AI’s entry into the legislative and regulatory process is more advanced than most people realise. In January 2026, ProPublica broke the story that the Trump administration’s Department of Transportation was planning to use Google Gemini to draft federal transportation regulations. The DOT’s general counsel told staff that it should take no more than twenty minutes to get a draft rule out of Gemini, and that the department aimed to compress the regulation-writing process from months to thirty days. His stated quality standard was candid: the goal was not a perfect rule, or even a very good one — it was a rule that was good enough. The department had already used AI to draft an unpublished Federal Aviation Administration rule. DOT’s former acting chief AI officer called the initiative the equivalent of having a high school intern doing the rulemaking. The agency oversees the safety of aircraft, gas pipelines, and freight trains carrying hazardous materials.


This is not an isolated development. As Transformer News documents in its investigation into AI-assisted lawmaking across American legislatures, companies are building tools specifically designed to help legislators analyse and write laws, with clients in all three branches of the US federal government as well as dozens of state and local government entities. State lawmakers and their staff are using AI to draft bill language, research legislation from other states, and generate summaries of hearings — and there are currently no regulations in most US states requiring any disclosure when AI tools contribute to bill language. The assumption built into most legislative processes — that a human wrote what a human submitted — is increasingly a polite fiction.

The volume of legislation being produced is itself a driver of this development.


Tech Policy Press, in its investigation into governments using AI to draft legislation, documents how the Italian Chamber of Deputies has backed a project called GENAI4LEX-B to support legislative research and drafting, how Brazil’s Chamber of Deputies is expanding its AI-assisted Ulysses programme, and how Estonia’s Prime Minister has publicly encouraged parliament to use AI tools to check bills after an AI error in draft legislation allowed online casinos to avoid tax bills, costing the government approximately two million euros per month in lost revenue. The publication notes a finding from Edelman’s annual trust barometer that in eleven out of twenty-eight countries surveyed, governments are already more distrusted than trusted — and that only 29 per cent of British citizens trust their government to use AI accurately and fairly.


Meanwhile, as Lawfare documents in its analysis of AI and the legislative process, artificial intelligence is writing law today — and this has required no changes in legislative procedure or the rules of legislative bodies. All it takes is one legislator, or one legislative assistant, to use generative AI in the process of drafting a bill.


INSIGHT AND ANALYSIS

The democratic legitimacy challenge here is not primarily about whether AI produces good or bad legislative text. It is about something more fundamental: the accountability chain that gives law its authority. In democratic theory, laws derive their legitimacy from the consent of the governed — from the idea that elected representatives, accountable to citizens through elections, make the decisions that shape society. When those representatives use AI tools to draft the laws they pass, a question emerges that democratic theory was never designed to answer: who is actually the author of the law? And if the author cannot be fully identified, what does accountability actually mean?


As Taka Alliance News observes in its Medium post, when something goes wrong with a piece of legislation — when language turns out to have unintended consequences, when a definition proves overbroad — the legislative record is supposed to reveal who made which decisions and why. That transparency is the mechanism by which democratic accountability functions after the fact. When AI contributes to the drafting, that mechanism is compromised. The choices embedded in the training data shape the choices embedded in the output in ways that are not visible to the person reading the final draft — and may not be visible to the person who used the tool.


The bias dimension of this problem is equally serious. AI systems are trained on historical data — which means they encode historical power structures, historical assumptions, and historical inequalities. When an AI drafts legal language, it draws on patterns from its training data. A model trained heavily on corporate legal documents will produce language that reflects corporate legal conventions. A model trained primarily on policy papers from particular ideological traditions will produce policy language that reflects those traditions. In countries with South Africa’s specific legal history — where law was used as a precise instrument of racial oppression for decades — the question of whose assumptions are embedded in the machine doing the drafting is not abstract. It is foundational.


IMPLICATIONS

For business leaders, the emergence of AI-assisted lawmaking creates a specific and underappreciated risk: the regulatory frameworks governing their industries may contain errors, inconsistencies, or unintended provisions that no human has fully reviewed before enactment. Compliance with law that was partly written by a machine — and may contain the machine’s blind spots — is a governance challenge that legal and compliance teams have not yet begun to systematically address. The Transformer News investigation notes that there are currently no regulations limiting the use of AI to write laws or requiring legislative text to be drafted by a human. That absence of governance around the governance-making process is itself a material risk for any organisation operating in a regulated environment.


For policymakers and civil society in South Africa, the challenge is to ensure that the conversation about AI in governance begins before the practice is already entrenched. The King V Code’s explicit requirements for board oversight of AI — discussed in a recent article in this series — apply to corporate governance. The equivalent conversation about AI’s role in democratic governance has barely started. South Africa’s constitution demands substantive participation, transparency, and accountability in the making of law. Whether AI-assisted drafting is compatible with those constitutional demands is a question that legal scholars, parliamentarians, and civil society organisations need to engage with now — not after the tools are already in use.


CLOSING TAKEAWAY

Democratic legitimacy is not a procedural formality. It is the foundation on which the authority of law rests. When machines participate in the writing of that law — without disclosure, without accountability frameworks, without any mechanism for citizens to know or challenge the role AI played — something essential in the democratic contract is quietly being renegotiated. That renegotiation is happening now, in legislative chambers across the world, without a public debate proportionate to its significance. For South African leaders, business executives, and citizens who care about the quality of the governance that shapes their lives, the question of who is really in the room where laws are made — and whether the answer includes an algorithm nobody voted for — is one that demands urgent and honest engagement. The technology will not wait for the conversation to catch up. The conversation must begin now.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
