
Regulate yourself before you’re regulated: why you need an AI policy now

In countries with no AI laws yet, smart organisations are using internal AI policies to control risk, prove accountability and stay ahead of whatever regulators decide next.




I write about issues I want to bring to the reader’s attention. While my main work is in artificial intelligence and technology, I also cover politics, education, and the future of our children.

In many jurisdictions, the big AI laws and headline-grabbing regulations are still somewhere between draft and debate. It is tempting for executives to breathe a sigh of relief and say, “We will act when Parliament acts.” That is a mistake. AI is already inside your organisation: in productivity suites, HR platforms, customer engagement tools and analytics systems.

A company AI policy is the organisation’s playbook for using AI safely, legally and productively, especially important where laws are still catching up. It translates high-level ethics and governance ideas into concrete rules, roles and workflows that people can actually follow. The real choice is not between “regulation” and “no regulation”, but between self-governance now and crisis management later.

CONTEXT AND BACKGROUND

Even where there is no dedicated “AI Act”, AI does not sit outside the law. Systems that process personal data, rank job applicants, influence pricing or drive customer decisions are already subject to privacy, labour, discrimination, consumer and cybersecurity rules. When a model leaks sensitive information, embeds bias into hiring, or makes a misleading claim in a marketing campaign, regulators will not accept “we did not have an AI law yet” as a defence. They will ask what controls, if any, the organisation had in place.

In the meantime, shadow AI use is exploding. Staff paste client data into public chatbots, feed internal documents into free tools, and build their own automations with little or no oversight. Vendors rebrand old products as “AI-powered” and quietly roll out new features into platforms you already use. Without an AI policy, there is no shared understanding of what is allowed, what is banned, and who is accountable. Some organisations respond by trying to block everything. Others look the other way. Both approaches are unsustainable in the long run.
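To make the policy gap concrete: below is a minimal sketch, in Python, of the kind of pre-submission guardrail an AI policy might mandate before staff send text to an external chatbot. The tool allowlist and the detection patterns here are invented placeholders, not a real data-loss-prevention rule set.

import re

# Hypothetical examples only: a real deployment would draw on the
# organisation's approved-tool register and a proper DLP service.
APPROVED_TOOLS = {"internal-copilot", "approved-vendor-chat"}

# Illustrative patterns for data that should never leave the organisation.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13}\b"),            # e.g. a 13-digit national ID number
    re.compile(r"\b[A-Z]{2}\d{6,10}\b"),  # e.g. an internal client account code
    re.compile(r"confidential", re.IGNORECASE),
]

def may_submit(tool_name: str, text: str) -> bool:
    """Allow a submission only if the tool is approved and the text looks safe."""
    if tool_name not in APPROVED_TOOLS:
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

# Blocked twice over: the tool is unapproved and the text is marked confidential.
print(may_submit("public-chatbot", "Summarise this confidential client brief"))  # False

The point is not the code itself, but that a written policy gives controls like this something to enforce.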

INSIGHT AND ANALYSIS

Regulating yourself before you are regulated starts with a simple admission: AI is not an optional extra. It is becoming part of the basic digital plumbing of modern organisations. The responsible move is to design internal rules that sit above individual tools. That is what an AI policy does. It defines why you use AI at all, what principles guide that use, and how those principles are turned into day-to-day decisions.

A practical drafting process usually begins with a cross-functional AI working group or governance committee. Legal, compliance, data, security, HR and business leaders sit together and answer three basic questions. First, what problems are we trying to solve with AI: efficiency, better customer service, new products, risk reduction? Second, where is AI already in play, formally or informally, and what data and decisions does it touch? Third, which laws, codes and global benchmarks do we want to align with? Many organisations now look to international frameworks such as the OECD AI Principles, UNESCO’s Recommendation on the Ethics of Artificial Intelligence and emerging standards like ISO/IEC 42001 as reference points, even if their own governments have not yet legislated.
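One way to answer the second question systematically is to keep a structured register of every AI system in use. Here is a minimal sketch in Python; the field names and risk tiers are invented for illustration rather than taken from any particular standard.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system register; fields are illustrative."""
    name: str                 # e.g. "CV screening module in HR platform"
    vendor: str               # supplier, or "internal" for in-house builds
    business_purpose: str     # the problem the system is meant to solve
    data_touched: list[str]   # categories of data the system processes
    decisions_affected: str   # what decisions it informs or automates
    owner: str                # accountable role, not just a team name
    risk_tier: str            # e.g. "low" / "medium" / "high", per the policy

register = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleHRVendor",  # hypothetical vendor
        business_purpose="Shortlist job applicants faster",
        data_touched=["applicant personal data", "employment history"],
        decisions_affected="Who proceeds to interview",
        owner="Head of Talent Acquisition",
        risk_tier="high",  # touches hiring, so human review applies
    ),
]

# Surface the systems needing the most oversight first.
for record in sorted(register, key=lambda r: r.risk_tier != "high"):
    print(record.name, "->", record.risk_tier)

Even a simple register like this turns the vague question “where is AI in play?” into a list someone owns and keeps current.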

From there, the work becomes very concrete. A mature AI policy is not a philosophical essay. It sets out purpose and scope, governance roles, data and model standards, risk assessment and approval processes, rules for transparency and human oversight, training and acceptable use, and mechanisms for monitoring and continuous improvement. Crucially, it describes “how we work here” in plain language, with examples: which tools are approved, which uses are prohibited, when a human must review AI output, and how to escalate concerns.
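Some organisations go a step further and encode slices of the policy as data, so tooling can enforce the rules rather than relying on memory. A hedged sketch of what that might look like; the specific rules and the escalation address are invented for illustration:

# A hypothetical, machine-readable slice of an AI policy. The actual rules
# would be set by the governance committee described above.
POLICY = {
    "approved_tools": {"internal-copilot", "approved-vendor-chat"},
    "prohibited_uses": {"automated dismissal decisions", "covert profiling"},
    "human_review_required": {"hiring", "credit", "disciplinary"},
    "escalation_contact": "ai-governance@example.org",  # placeholder address
}

def requires_human_review(use_case: str) -> bool:
    """Check whether AI output in this area must be reviewed by a person."""
    return any(area in use_case.lower() for area in POLICY["human_review_required"])

print(requires_human_review("Hiring shortlist ranking"))  # True

Keeping the human-readable policy and a machine-readable version side by side helps stop the document and day-to-day practice from drifting apart.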

IMPLICATIONS

Organisations that build this internal AI rulebook early gain three advantages. First, they can innovate with more confidence: teams know when they are in bounds and when they are not, which reduces fear and confusion. Second, they build credibility with customers, employees and regulators. When the first AI-related complaint or investigation arrives, they can point to a living governance framework rather than a scramble of emails. Third, they are better prepared for future laws. When national AI frameworks and sector rules eventually land, they can adapt an existing policy rather than starting from zero.

The “wait-and-see” firms, by contrast, face a slower, more painful transition. They will have to retrofit governance under pressure, with systems already in production, contracts already signed, and habits already entrenched. Staff will be used to doing whatever works. Vendors will have sold them black-box systems with little transparency. In that environment, complying with new regulations becomes more expensive, more disruptive and more likely to expose past mistakes.

CLOSING TAKEAWAY

Regulating AI is no longer just the job of governments and international bodies. Every organisation deploying AI is already making regulatory choices, whether it admits it or not. A company AI policy is a way of owning those choices: making them explicit, aligning them with the law that already exists, and anchoring them in the values you want to live by. In societies where trust is fragile, and our children will grow up surrounded by intelligent systems, waiting passively for external rules is not leadership.

The organisations that will earn trust in the AI age are those that choose to regulate themselves before they are regulated, and that treat AI governance not as a box-ticking exercise, but as part of responsible, forward-looking corporate citizenship.

Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
