When the People Building the Future Are Unconstrained by the Truth
- Johan Steyn

The New Yorker's year-and-a-half investigation into Sam Altman raises a question that goes far beyond one man: what does it mean when the most powerful figure in artificial intelligence is described by his own colleagues as someone for whom honesty is optional?

Video summary: https://youtu.be/1r5umegiFBY
Sign up for my Substack daily AI newsletter here.
See my AI training course portfolio for corporate business leaders here.
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
The article I want to examine this week is not comfortable reading. It is, however, essential reading for anyone who thinks seriously about the governance, ethics, and future of artificial intelligence. Ronan Farrow and Andrew Marantz spent a year and a half investigating OpenAI and its chief executive, Sam Altman, for The New Yorker. What they found raises questions that go far beyond one man’s character — questions about accountability, institutional integrity, and who is really in charge of the most consequential technology in human history.
CONTEXT AND BACKGROUND
OpenAI was founded on a specific promise. It was established as a nonprofit with an explicit founding premise: that artificial intelligence could be the most dangerous invention in human history, and that the person leading it would therefore need to be someone of uncommon integrity. That was not marketing language. It was the foundational logic of the entire enterprise. The company attracted some of the brightest talent in the world on the strength of that promise, and billions of dollars from investors who believed safety would be prioritised over profit even when the two came into tension.
In November 2023, OpenAI’s board fired Sam Altman. The stated reason was a lack of candour in his communications with the board. Within days, roughly 95 per cent of employees signed a letter demanding his return. Major investors signalled support. He was reinstated. The board that fired him was reconstituted with members closer to Altman. What actually underpinned that firing — the documented evidence that led senior colleagues to conclude they could not trust their own chief executive — was never made fully public. Until now. Farrow and Marantz reviewed more than 200 pages of internal documents, including memos compiled by OpenAI co-founder Ilya Sutskever, and spoke to more than 100 people with direct knowledge of how Sam Altman does business. What they found is detailed, specific, and disturbing.
INSIGHT AND ANALYSIS
The investigation documents what multiple sources describe as a consistent pattern of alleged deception spanning Altman’s entire career — from his first startup, Loopt, through his leadership of Y Combinator, to his tenure at OpenAI. Sutskever spent weeks in the autumn of 2023 compiling roughly 70 pages of Slack messages, HR documents, and analysis about Altman’s behaviour, sending them as disappearing messages because he was, according to the Farrow piece, terrified someone would find them. Think about what that reveals. One of the most powerful figures in global AI was so concerned about leaving a paper trail regarding his own chief executive that he deliberately concealed his own documentation. The memos alleged that Altman had misrepresented facts to executives and board members and deceived them about internal safety protocols. One memo stated directly that Sam Altman “exhibits a consistent pattern of lying”.
The specific allegations are numerous and documented in the investigation. In December 2022, Altman assured board members that controversial GPT-4 features had been approved by a safety panel. They had not. When a board member asked for documentation, the safety sign-off could not be produced. OpenAI co-founder Dario Amodei documented how Altman allegedly negotiated a clause in the pivotal 2019 Microsoft deal that overrode key provisions in OpenAI’s original charter, and later denied the clause’s existence when confronted. Paul Graham, the programmer who co-founded Y Combinator and recruited Altman as his successor, told YC colleagues before Altman’s removal that Sam had been “lying to us all the time”.
One anonymous board member told Farrow and Marantz that Altman combines two traits almost never found in the same person: a strong desire to please people, to be liked in any given interaction, and what they described as an almost sociopathic lack of concern for the consequences of deceiving someone. Another described him as “unconstrained by truth”. These are not the words of disgruntled former employees with axes to grind. They are the words of people who were inside the room, with access to the documents, and who reached their conclusions after sustained, close observation.
What makes the Farrow investigation more than a character study is its structural argument. Altman publicly advocates for AI regulation while privately lobbying against specific safety bills. On the same day The New Yorker published its investigation, OpenAI released a 13-page policy document comparing its vision to the New Deal — a document critics at Fortune and TechPolicy Press described as providing cover for regulatory nihilism, a product pitch dressed as public policy. The pattern Farrow documents — public pronouncements paired with private contradictions — is not incidental to the governance of OpenAI. It appears to be structural.
The broader context matters here. As Farrow noted in his interview, we are living in a moment where there is virtually no appetite for oversight of the AI industry in Washington. OpenAI has aggressively pursued government contracts, including a deal allowing the Pentagon broad use of its technology. It operates in an industry subject to almost no regulation. The company whose CEO has been described by his own board as “unconstrained by truth” is simultaneously one of the most powerful organisations in the world, with its technology embedded in healthcare, defence, education, finance, and public administration.
IMPLICATIONS
I have written previously about the accountability gap in AI governance and the absence of meaningful oversight structures — at the corporate level, the national level, and the international level. The Farrow investigation makes the human dimension of that gap concrete and urgent. It is not simply that the AI industry lacks regulation. It is that the individual who sits at the apex of that industry, and whose company’s technology will shape the lives of billions of people, has been described by his own colleagues, in documented internal communications, as someone who cannot be trusted with the truth. If the foundational premise of OpenAI was that the person in charge needed to be of uncommon integrity, the investigation raises the most direct possible question: has that premise been honoured?
For South African business leaders, boards, and policymakers engaging with OpenAI’s products and platforms, this is not an abstract concern. Organisations deploying AI tools built by companies with the leadership culture documented here are making governance decisions without full information. King V requires boards to exercise oversight over the AI systems their organisations use and the providers they partner with. That obligation does not evaporate because the provider is powerful, globally recognised, or embedded in the products your organisation already depends on.
CLOSING TAKEAWAY
Farrow’s investigation does not reach a simple verdict on Sam Altman. It is, as he noted, not a monolithic hit piece. It is a forensic examination of documented allegations, conflicting accounts, and unresolved questions — conducted by the journalist who exposed Harvey Weinstein, with the same methodological rigour and the same refusal to accept official reassurances without evidence. What it does establish, with considerable documentation, is that the most powerful person in the artificial intelligence industry has operated in ways that raise profound questions about trustworthiness, accountability, and the integrity of the safety commitments on which the entire enterprise was founded. That should matter to every person whose life will be shaped by what OpenAI and its peers build next. Which is, in the end, all of us.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He served as a working group member contributing recommendations toward South Africa’s national AI strategy, an initiative by the National Advisory Council on Innovation (NACI), the Council for Scientific and Industrial Research (CSIR), the Human Sciences Research Council (HSRC) and the Department of Science and Innovation. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net