Elon Musk vs Sam Altman Trial Highlights Key Testimony on OpenAI’s Mission and Nonprofit Future

The courtroom drama between Elon Musk and Sam Altman is no longer just about who said what during OpenAI’s early years. As the trial moves through witness testimony, it’s increasingly about something more structural: whether OpenAI’s governance choices—its board decisions, its nonprofit/public-benefit framing, and its relationship with major backers like Microsoft—were faithful to the organization’s founding mission or whether they were a pivot driven by incentives that inevitably pull powerful institutions toward profit.

For readers trying to follow along, the most important thing to understand is that this case is not being argued as a simple “good intentions versus bad behavior” story. Musk’s lawsuit is built around specific claims: that OpenAI abandoned its original purpose of developing advanced AI for the benefit of humanity, and that leadership decisions shifted the organization toward profit-oriented outcomes. OpenAI, in turn, argues that the lawsuit is baseless and motivated by competitive rivalry—an attempt, in its telling, to derail a competitor while boosting Musk’s own AI ambitions.

Right now, the trial’s evidentiary focus is landing on the kind of testimony that can make or break credibility: board-level perspectives, internal decision-making, and recollections of how key moments unfolded. On Wednesday, May 6, Shivon Zilis—described in the reporting as a former OpenAI board member who shares four children with Musk—took the stand. Her testimony matters not only because of her proximity to OpenAI’s governance, but because board members are often the people who can explain how decisions were justified at the time, what information was available, and what concerns were raised before major public milestones.

Alongside Zilis, the court has also been hearing from former OpenAI executive Mira Murati via video deposition. Video testimony can feel distant to jurors compared with live witnesses, but it often carries weight because it preserves the exact phrasing of earlier sworn statements. In cases like this, the precision of language becomes part of the evidence itself: what a witness says they believed, what they say they trusted, and what they say they feared could happen if certain governance choices were made.

The trial’s schedule underscores how the parties are trying to build a narrative arc. Microsoft CEO Satya Nadella is scheduled to appear on Monday, and OpenAI cofounder and former chief scientist Ilya Sutskever is lined up to testify after that. Those are not minor names in an OpenAI dispute. They represent the external ecosystem that shaped OpenAI’s trajectory—especially the Microsoft relationship, which has long been described as both a strategic partnership and a source of influence. Whether the court treats that influence as a necessary funding mechanism or as a driver of mission drift is likely to be one of the central interpretive questions.

Musk’s position, as presented in the case, is straightforward in its moral framing but complex in its legal demands. He argues that OpenAI’s leadership—particularly Altman and Greg Brockman—tricked him into providing money and then turned away from the founding goal. He is asking the court to remove Altman and Brockman from their roles, to stop OpenAI from operating as a public benefit corporation, and to award damages of up to $150 billion. That last figure is especially notable because it signals that Musk is not merely seeking symbolic relief; he is seeking a remedy that would force a reckoning with the alleged consequences of governance decisions.

OpenAI’s response is equally pointed, though it takes a different tone. OpenAI has characterized the lawsuit as a baseless and jealous bid to derail a competitor, and it frames Musk’s involvement as tied to his own companies—SpaceX, xAI, and X—where Grok is positioned as a competing product to ChatGPT. In other words, OpenAI is trying to persuade the jury that the motive behind the lawsuit is not the protection of a nonprofit mission, but competitive disruption.

This is where the trial becomes more than a contest of facts; it becomes a contest of interpretation. Jurors are asked to decide not only what happened, but why it happened, and whether the “why” matters legally. When a witness testifies about board discussions, donation reassurances, or internal concerns about control, the jury is effectively being asked to map those details onto the legal theory: did leadership abandon the mission, or did it adapt the structure to survive and scale?

One of the most revealing aspects of the testimony so far is the way the case keeps returning to governance and control. In disputes over organizations that claim a mission, control is rarely a side issue. It determines who can steer strategy, who can approve major pivots, and who can block changes that might compromise the original purpose. The reporting indicates that earlier testimony already identified control as the “big sticking point” for Brockman and Sutskever, and that discussions included whether Musk should be removed from the board. That theme—control—also aligns with the broader question of whether OpenAI’s nonprofit/public-benefit structure was preserved in spirit or hollowed out in practice.

Zilis’s testimony is likely to be scrutinized for exactly that kind of detail. Board members can often clarify whether concerns were raised early, whether dissent was documented, and whether leadership acted transparently. The reporting around her appearance suggests that her testimony includes references to scenarios and concerns about OpenAI’s direction, including issues related to how the board was informed about major developments. In a trial like this, the difference between “we didn’t know” and “we knew but chose to proceed anyway” can be decisive.

There is also a personal dimension that the court cannot ignore, even if it is not the legal center of gravity. Zilis’s relationship with Musk, with whom she shares four children, means her testimony will inevitably be interpreted through a lens of loyalty, conflict, and credibility. But the jury’s job is not to judge personal relationships; it is to evaluate whether the testimony is consistent, plausible, and supported by the record. Still, the presence of such a witness can intensify the stakes because it makes the narrative feel less abstract. It turns the case into a story about people, not just institutions.

Mira Murati’s video deposition adds another layer. Murati is widely associated with OpenAI’s leadership during critical periods, and the reporting indicates she told the court she couldn’t trust Sam Altman’s words, and that problems persisted after Altman returned to the company. If those points are emphasized again during the trial, they can serve as evidence that internal trust and governance stability were strained. That matters because mission drift claims often rely on showing that leadership decisions were not merely pragmatic but were accompanied by a breakdown in accountability or a shift in priorities.

At the same time, OpenAI’s defense is likely to push back on the idea that internal disagreements automatically prove mission abandonment. Organizations evolve. Funding structures change. Partnerships become necessary. Even if leadership decisions were controversial, the question remains whether they were inconsistent with the founding mission in a legally meaningful way. This is why the trial’s witness list includes both internal figures and external power centers like Microsoft.

Satya Nadella’s scheduled testimony is particularly significant because it brings the question of influence into sharper focus. Microsoft is not just a funder; it is a strategic partner whose involvement has shaped OpenAI’s ability to build and deploy models at scale. Musk’s argument implies that such involvement contributed to a shift away from mission purity. OpenAI’s argument implies that such involvement was a practical necessity and that the lawsuit is an attempt to rewrite history through selective outrage.

When Nadella testifies, jurors will likely be looking for how Microsoft viewed OpenAI’s mission and how it understood the relationship between nonprofit framing and commercial execution. Did Microsoft see OpenAI as a mission-driven entity that could still pursue large-scale deployment? Or did Microsoft’s involvement create incentives that pulled OpenAI toward profit in ways that undermined its original commitments? The answers may not be simple, but the way Nadella describes the partnership—its goals, its constraints, and its expectations—could help the jury decide whether the alleged mission drift was inevitable or avoidable.

After Nadella, Ilya Sutskever’s testimony is expected to carry enormous weight. As a cofounder and former chief scientist, Sutskever occupies a unique position: he is both a technical authority and a governance-adjacent figure. In many organizational disputes, founders and top scientists are treated as the people most likely to understand the original intent. If Sutskever’s testimony supports Musk’s claim that the mission was compromised, it could strengthen the plaintiff’s narrative. If it supports OpenAI’s claim that the mission remained intact while the structure evolved, it could weaken Musk’s case.

The trial’s ongoing updates also suggest that the court has been dealing with a wide range of evidence, including depositions, internal communications, and testimony about major events like Altman’s ouster and return. Those events are not just corporate gossip in this context; they are governance turning points. Musk’s lawsuit asks the jury to view those turning points as evidence of a broader pattern: that leadership decisions were driven by incentives and control rather than mission fidelity.

OpenAI’s counter-narrative, meanwhile, frames Musk’s lawsuit as a competitive maneuver. That framing is not merely rhetorical. It is designed to influence how jurors interpret the plaintiff’s motives and the credibility of the claims. If the jury believes Musk’s primary motivation is to undermine a competitor, they may be less willing to accept his interpretation of internal governance decisions as proof of mission abandonment. Conversely, if the jury believes Musk’s claims are grounded in credible evidence and consistent with the record, the competitive motive argument may not matter as much.

This is why the trial’s “small” details can become “big” evidence. For example, the reporting indicates that Zilis’s past emails referenced a potential for-profit conversion of OpenAI, and that she had major concerns about the board not being notified in advance of ChatGPT’s release. Those details can be read in multiple ways. One reading is that they show mission drift was contemplated and perhaps even