Mira Murati Says She Couldn’t Trust Sam Altman’s Safety Claims in Musk v. Altman Testimony

Mira Murati’s testimony in the ongoing Musk v. Altman trial didn’t just add another chapter to a high-profile corporate dispute—it pulled the spotlight onto a question that sits at the heart of modern AI governance: when leaders talk about “safety,” who actually decides what that means, and how much can teams trust the words coming from the top?

In a video deposition shown in court, the former OpenAI CTO said she could not rely on Sam Altman’s account of the company’s internal safety process for a new AI model. The exchange, as presented in the deposition excerpt, centered on a specific claim Altman made about whether a proposed model needed to pass through OpenAI’s deployment safety board. Murati testified that Altman’s statement was not truthful as she understood it, and when asked directly whether he was telling the truth, she answered “No.”

That single “No” matters because it reframes the dispute from a contest of narratives into something more procedural: not simply what people intended, but what steps were taken, or skipped, inside a system designed to manage risk before models reach users. In other words, the case is not only about personalities or strategy. It is also about process integrity: whether the chain of decision-making that is supposed to protect the public was accurately represented to those responsible for building and overseeing the technology.

Murati’s role at OpenAI gives her testimony particular weight. As CTO, she was positioned close to the technical and operational realities of model development and deployment. That proximity doesn’t automatically make her account definitive in every detail, but it does mean her understanding of how safety gates work is not theoretical. She is describing the lived experience of working inside the organization—how decisions were communicated, how approvals were handled, and what it felt like when leadership’s description of events diverged from what she believed was happening.

The deposition excerpt points to a concrete allegation: Altman reportedly told Murati that OpenAI’s legal department had determined the new model did not need to go through the company’s deployment safety board. Murati’s response suggests that this characterization did not match her understanding of the internal safety framework. Asked under oath whether Altman was telling the truth, she said he was not.

This is where the story becomes more than a courtroom soundbite. Deployment safety boards are not typically treated as optional ceremonial steps; they are meant to function as a structured checkpoint—one that forces an organization to slow down, evaluate risks, and document decisions. If a model can bypass such a board based on a legal determination, then the question becomes: what exactly is the legal determination, what criteria does it rely on, and how does it interact with the technical safety process? Even if legal review is part of safety governance, it is not the same thing as a deployment safety board’s risk assessment. Legal departments often focus on compliance, liability, and regulatory exposure. Safety boards, by contrast, are usually tasked with evaluating model behavior, potential harms, and readiness for release.

So when Murati says she couldn’t trust Altman’s words about how the process worked, she is implicitly raising a broader concern: that the internal safety system may have been represented in a way that blurred distinct responsibilities. If leadership communicates that a safety gate was effectively waived, then teams downstream may interpret that as permission to proceed without the level of scrutiny they expected. That can change timelines, staffing priorities, and the internal pressure surrounding release decisions.

And that leads to another layer of the testimony: Murati’s description of how the situation affected her work. According to the excerpt, she said Altman made her work more difficult during her tenure, and that her criticism was “completely management r…”—the phrasing appears truncated in the excerpt, but the meaning is clear enough. She is describing a dynamic in which her concerns were dismissed or reframed as mere management friction rather than treated as substantive safety or governance issues.

This is a familiar pattern in high-stakes organizations, especially those moving quickly in competitive markets. Technical leaders can become trapped between two competing demands: the urgency to ship and the obligation to ensure safety. When leadership dismisses concerns as “management” problems, it can create a culture where dissent is discouraged—not necessarily through explicit retaliation, but through subtle signals that questions will be treated as obstacles rather than safeguards.

Murati’s testimony, as presented, suggests that she experienced that dismissal firsthand. If she believed that a safety process was being bypassed or misrepresented, then raising concerns would not be a matter of preference. It would be a matter of risk management. In that context, “making her work more difficult” reads less like interpersonal conflict and more like a breakdown in trust between technical oversight and executive communication.

Trust is the invisible infrastructure of governance. Safety systems depend on accurate information flowing upward and downward. If the people responsible for building and evaluating models cannot trust what executives say about the status of safety reviews, then the entire governance structure becomes fragile. Teams may still follow procedures, but they do so under uncertainty: Are we doing the right checks? Are we being told the full story? Are decisions being made behind the scenes and then retroactively justified?

In the AI industry, where models evolve rapidly and deployment decisions can have immediate real-world consequences, that uncertainty is not academic. A model released without the expected safety scrutiny can expose users to harmful outputs, misuse, or unpredictable behavior. Even if the organization believes it has mitigated risks, the difference between “we evaluated it” and “we decided it didn’t need evaluation” is enormous.

Murati’s testimony also highlights a tension that has long existed in AI companies: the relationship between legal and safety governance. Legal review is essential, but it is not a substitute for technical risk assessment. When a legal department determines that a model does not need to go through a safety board, a structural question arises: is it making a judgment about safety, or about process requirements and regulatory obligations? Those are related, but not identical. If a legal determination is being used as a lever to bypass technical safety gates, then the organization’s safety governance becomes vulnerable to interpretation and power dynamics.

The court’s focus on a specific claim—whether Altman’s statement about the legal department and the deployment safety board was truthful—therefore functions as a proxy for a larger issue: how decisions are justified internally. In many organizations, the most consequential disputes are not about whether someone wanted to do the right thing. They are about whether the right steps were followed and whether the rationale for skipping steps was communicated honestly.

That is why Murati’s answer carries weight beyond the immediate facts. It suggests that she viewed Altman’s explanation as inaccurate, and that she did not accept it as a reliable account of the company’s safety process. In a trial setting, that becomes evidence about credibility and intent, but in a governance setting, it becomes evidence about system reliability.

There is also a strategic dimension to how this testimony lands in the broader narrative of Musk v. Altman. The case has been framed publicly around allegations and counter-allegations involving leadership conduct and corporate decisions. But Murati’s testimony shifts the emphasis toward internal governance mechanics. It implies that the dispute is not only about what happened externally—contracts, communications, or business moves—but also about what happened inside the organization when it came to safety oversight.

That shift is important because it changes what readers should look for when evaluating the significance of the testimony. Instead of treating it as a dramatic personal conflict, it invites readers to ask: what does this reveal about how OpenAI managed safety decisions at the time? How were safety boards used? Were they consistently applied? Were exceptions made, and if so, were those exceptions documented and communicated transparently?

Even without knowing every detail of the deposition, the excerpt provides enough to understand the core concern: Murati believed that a key statement about safety process routing was false. That belief, if supported by additional evidence, could indicate that safety governance was not operating with the clarity and consistency expected of a company handling frontier AI systems.

At the same time, it’s worth noting that courtroom testimony is not the same as a final verdict. Depositions are part of an evidentiary process, and the opposing side may challenge interpretations, timelines, or the exact meaning of internal determinations. But the existence of the dispute itself is revealing. When a former CTO says she couldn’t trust a CEO’s account of safety procedures, it suggests a serious breakdown in the alignment between executive messaging and technical governance.

For readers trying to understand why this matters, it helps to translate the courtroom language into governance terms. Imagine a company with a safety board that acts like a formal gate. If leadership claims that legal review eliminated the need for the gate, then the gate’s purpose is undermined. The safety board becomes optional, and the organization’s risk management becomes dependent on who has the authority to declare exceptions. That is not how robust safety governance is supposed to work.
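To make the gate analogy concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it reflects OpenAI’s actual systems; the names, the deploy functions, and the “waived_by” parameter are all hypothetical. It shows how the existence of a single exception authority changes what a deployment gate can actually guarantee:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    """A candidate model release. All names here are hypothetical."""
    name: str
    safety_board_approved: bool = False
    waivers: list[str] = field(default_factory=list)  # records who skipped the gate

def deploy(release: ModelRelease) -> str:
    """A hard gate: nothing ships without safety-board approval."""
    if not release.safety_board_approved:
        raise PermissionError(f"{release.name}: blocked pending safety board review")
    return f"{release.name} deployed"

def deploy_with_exceptions(release: ModelRelease, waived_by: str | None = None) -> str:
    """The weakened gate: once an exception path exists, the guarantee
    'every release was reviewed' becomes 'every release was reviewed,
    unless someone decided it did not need review'."""
    if waived_by is not None:
        release.waivers.append(waived_by)  # the gate still looks present on paper
        return f"{release.name} deployed (review waived by {waived_by})"
    return deploy(release)

model = ModelRelease("frontier-model-x")  # hypothetical model name
print(deploy_with_exceptions(model, waived_by="legal determination"))
# Under the hard gate, the same release would raise PermissionError instead.
```

The point of the sketch is structural rather than technical: once an override path exists, the meaningful audit question shifts from “did the gate run?” to “who is allowed to skip it, and is every skip documented?”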

Murati’s testimony therefore resonates with a broader industry debate: whether AI safety is treated as a living process with enforceable checkpoints, or as a set of statements that can be adjusted depending on schedule pressure and executive priorities. In the public imagination, “safety” often sounds like a moral commitment. In practice, it is a set of procedures, documentation, and accountability mechanisms. When those mechanisms are misrepresented, safety becomes performative rather than operational.

The most interesting part of Murati’s testimony, however, may be what it implies about organizational culture. Her description of her criticisms being dismissed as “management” suggests that safety concerns were not merely technical disagreements. They were interpreted through a managerial lens—something to be managed away rather than addressed. That kind of cultural framing can be dangerous in any engineering organization, but it is especially risky in AI, where the consequences of errors can scale quickly and unpredictably.

If technical leaders feel that their concerns will be treated as obstacles, they may either disengage or escalate. Escalation can lead to conflict, but disengagement can lead to silence. Either outcome weakens governance.