AI agents in law: transparency is the new ethics test
- Johan Steyn

The profession’s credibility will hinge on explainability, disclosure, and reliable audit logs.

Audio summary: https://youtu.be/W5I80Emgz4Q
The legal profession is beginning to face an awkward question: if an AI agent meaningfully contributes to legal work, should anyone be told? Not in vague terms like “we use modern technology”, but in concrete terms: what was done by a machine, what was checked by a human, and what evidence exists that the work was properly verified.
The debate has moved beyond chatbots that draft a clause. We now have “agentic” systems that can plan, break tasks into steps, run multiple attempts, cross-check outputs, and produce something that looks, to a busy lawyer or a time-starved client, like finished work. Technical progress is racing ahead, as reflected in recent reporting on AI agents handling legal-style tasks more capably than before. The ethical test is no longer just competence. It is transparency.
CONTEXT AND BACKGROUND
Law runs on trust. Clients assume their lawyer is applying judgment, not merely producing text. Courts assume filings are grounded in real authorities, not confident fabrications. Yet we have already seen what happens when those assumptions collide with generative AI. Recent reporting has highlighted how judges are encountering mistake-filled briefs that include invented citations and misquoted cases, forcing sanctions and public embarrassment. These incidents are not just about sloppy lawyers. They are warning signs of a system that is adopting powerful tools faster than it is building professional muscle memory around verification and accountability.
Regulators and legislators are also starting to respond, but unevenly. In the United States, states are pushing for guardrails as AI-generated content appears in legal disputes and filings. Some jurisdictions are moving towards explicit obligations on lawyers to check AI outputs and disclose certain uses. For example, California lawmakers have advanced a bill aimed at regulating lawyers’ use of AI, including requirements around verification and disclosure in specific contexts.
INSIGHT AND ANALYSIS
Here is the core tension: law is a high-stakes domain, but AI is probabilistic. These systems do not “know” the law the way a professional knows it. They generate plausible responses based on patterns. Sometimes they are brilliant. Sometimes they are wrong in ways that look right. When an AI agent becomes part of the workflow, the risk is not only a bad paragraph. The risk is a chain of decisions: which documents were prioritised, which authorities were selected, which arguments were framed, and which facts were emphasised or missed.
That is why transparency is becoming the new ethics test. A modern legal workflow needs to be auditable. Not to satisfy curiosity, but to answer hard questions when things go wrong: What tool was used? Which version? What prompts or instructions were given? What sources were relied upon? What checks were performed? Who signed off? If you cannot answer those questions, you are not practising with responsible oversight; you are hoping.
Importantly, “explainability” does not mean revealing proprietary model internals. It means being able to give a credible account of process and control. The same way a law firm can show time records, matter notes, and supervision protocols, it should be able to show an AI use record: a simple, defensible trail that demonstrates professional care.
IMPLICATIONS
For law firms, the practical path is clear. Treat AI as a junior assistant that must be supervised, not a silent co-author. Build a basic AI governance pack: a client disclosure position, a matter-by-matter risk assessment, rules on confidential data, and a logging policy that captures inputs, outputs, model versions, and human review steps. Then train the team relentlessly, because ethics is not a document; it is a habit.
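A logging policy along these lines can be as simple as an append-only record per AI-assisted task. The sketch below is illustrative only: the `AIUseRecord` schema, its field names, and the hash-per-entry idea are assumptions about what a minimal, defensible trail might contain, not an established standard or any firm's actual system.

```python
import json
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    """One entry in a matter's AI audit trail (illustrative schema)."""
    matter_id: str
    tool: str              # which AI product was used
    model_version: str     # which version produced the output
    prompt_summary: str    # what the tool was instructed to do
    sources_cited: list    # authorities the output relied upon
    human_reviewer: str    # who signed off
    review_notes: str      # what checks were performed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AIUseRecord, log_path: str) -> str:
    """Append the record as one JSON line and return its SHA-256 hash,
    so each entry can later be checked for tampering."""
    line = json.dumps(asdict(record), sort_keys=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode("utf-8")).hexdigest()
```

The point is not the technology, which is trivial, but the discipline: every field above answers one of the hard questions raised earlier, and a file like this is what "show your working" looks like in practice.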
For courts and regulators, the goal should not be blanket bans. The goal is predictable standards: when disclosure is required, what minimum records must exist, and what level of human sign-off is non-negotiable. The UK judiciary has already issued guidance to judicial office holders on AI risks, including confidentiality and hallucinations, which signals the direction of travel. The profession should not wait for enforcement to catch up.
For clients and the public, transparency is the protection against a two-tier system where some people get carefully supervised legal work, and others get automated guesswork. If AI is part of the service, clients deserve clarity on what that means for quality, cost, confidentiality, and accountability.
CLOSING TAKEAWAY
The legal profession can absolutely use AI to improve speed, consistency, and access. But credibility is fragile. If clients begin to suspect that advice is produced by a black box with no paper trail, trust will decay quickly, and the backlash will be severe. The next era of legal ethics will be less about whether AI is allowed and more about whether its use is visible, governable, and defensible. In law, it is not enough to be efficient. You must be able to show your working, and you must be able to stand behind it.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net/about


