The AI industry’s trust crisis around Sam Altman deepened in April 2026 with Ronan Farrow’s sprawling piece in The New Yorker. Internal documents and testimonies paint a troubling picture: the man who promised safe AI is seen by insiders as unreliable.
The fallout goes beyond one CEO. It cuts to the core of an AI sector where power, speed, and control collide without mature governance.
“I don’t think Sam is the guy who should have his finger on the button”
The fiercest criticism comes from inside. Ilya Sutskever, cofounder and long-time confidant of Altman, sounded the alarm back in 2023.
His verdict was unusually blunt: “I don’t think Sam is the guy who should have his finger on the button.”
That line hits a nerve. AI is seen as technology with existential stakes. Whoever controls it must be beyond doubt.
According to internal memos, board members even accused Altman of a pattern of deception: “Sam exhibits a consistent pattern of… lying.”
That makes trust not an abstraction, but an operational risk.
Trust as AI’s weak link
The key takeaway: AI isn’t just a technical challenge. It’s a governance problem.
One board member put it starkly: “He’s unconstrained by truth.”
And sharper still: “He has a strong desire to please… and a sociopathic lack of concern for consequences.”
Statements like these are rare in Silicon Valley. The debate is no longer about strategy, but character.
At the same time, investors defend Altman for his results.
As one ally puts it: “His mission is measured by numbers.”
This tension between performance and integrity sits at the heart of the crisis.
The ‘Blip’: power beats governance
The 2023 firing and rapid reinstatement of Altman exposed how fragile governance is.
The board said Altman “was not consistently candid in his communications.”
Yet he was back within five days. Why? Because:
- employees threatened to leave en masse
- investors turned up the pressure
- partners like Microsoft demanded stability
The lesson is painfully clear: formal controls lose to economic power.
Safety versus speed: a fundamental clash
The crisis is also about substance. How fast should AI be built—and how safely?
Former safety lead Jan Leike wrote internally:
“We are prioritizing product and revenue… with alignment and safety coming third.”
That puts safety under structural pressure.
Even Altman acknowledged the tension but opted for pragmatism:
“If you say ‘never say anything you’re not 100% sure about’… you lose the magic.”
Here’s the core choice: accept imperfect truth to accelerate innovation.
Power, capital, and geopolitics
OpenAI’s scale makes everything harder.
Altman is building infrastructure worth hundreds of billions and working with states and autocracies.
An insider describes the impact in near-sci-fi terms:
“We’re building portals from which we’re genuinely summoning aliens.”
The metaphor underscores how unknown—and potentially dangerous—the tech is.
Meanwhile, economic pressure mounts. OpenAI is edging toward a possible IPO and extreme valuations.
Altman himself has said before:
“Someone is going to lose a phenomenal amount of money.”
The mix of geopolitics, capital, and AI accelerates decision-making and amplifies risk.
Personal failing or system flaw?
The central question: is this about one person?
A former OpenAI researcher frames it structurally:
“He sets up structures that constrain him… and then does away with them.”
That points to a broader cycle:
- leaders create rules
- rules collide with growth
- rules disappear
The AI industry is left leaning on personal leadership instead of institutional checks.
Why this matters to the Netherlands too
The impact is immediate for the Netherlands and Europe.
Dutch companies, governments, and universities are increasingly built on American AI systems.
If trust falters:
- AI adoption slows
- political resistance grows
- dependency risk rises
The European AI Act aims to regulate this, but the crisis shows legislation alone won’t cut it. Governance starts with people.
Conclusion: AI runs on trust—and it’s eroding
The lesson is clear.
AI isn’t just models, chips, and data. It runs on trust in leadership.
Mira Murati’s words sum it up:
“We need institutions worthy of the power they wield.”
That’s exactly where the friction is now.
The technology is outpacing the structures meant to contain it.
And as long as that’s true, AI’s biggest vulnerability isn’t technical—it’s human.