The courtroom drama between Elon Musk and Sam Altman is no longer just a clash of personalities or competing narratives about who deserves credit for OpenAI’s early days. As testimony continues this week, the trial is increasingly turning into a fight over governance—over what OpenAI was supposed to be, how it was structured to pursue that purpose, and what legal consequences follow when a mission-oriented organization evolves into something that looks, at least to one side, more like a profit-driven technology company.
At the center of the dispute is Musk’s 2024 lawsuit, filed after he accused OpenAI of abandoning its founding mission: developing advanced AI for the benefit of humanity rather than prioritizing financial returns. Musk’s complaint frames the shift as something more than ordinary corporate evolution. In his view, OpenAI’s leadership—particularly Altman and Greg Brockman—moved away from the original intent in ways that should trigger legal remedies. Those remedies, according to the proceedings described in reporting, are not modest. Musk has asked for Altman and Brockman to be removed from their roles, for OpenAI to stop operating as a public benefit corporation, and for the nonprofit to receive damages that, by Musk’s demand, could reach $150 billion if he prevails.
OpenAI, for its part, has rejected the premise that the lawsuit is about mission drift in any meaningful sense. The company’s position, as characterized in reporting, is that the case is baseless and driven by competitive motives—specifically pointing to Musk’s own AI efforts. That argument matters because it reframes the trial from “what happened to OpenAI’s mission?” into “why is Musk bringing this case now, and what is he trying to accomplish?” In other words, the courtroom is being asked to decide not only what OpenAI did, but also what Musk’s lawsuit is really about.
This week’s testimony has added new pressure to those questions. On Wednesday, May 6, Shivon Zilis—described in reporting as a former OpenAI board member who shares four children with Musk—is taking the stand. Her presence is significant not because it automatically resolves the legal issues, but because it places a person with direct board-level proximity to OpenAI’s internal decision-making in front of the jury during a phase of the trial that is already focused on governance and mission. When a case turns on organizational intent, board dynamics can become more than background context; they can become evidence of how decisions were understood at the time and how accountability was exercised.
Alongside Zilis, reporting indicates that Mira Murati, a former OpenAI executive, is also testifying via video. Video testimony can sometimes feel less immediate than in-person questioning, but it still functions as a formal part of the evidentiary record. In a trial like this—where the parties are litigating the meaning of documents, communications, and structural choices—video testimony can be used to establish timelines, clarify roles, and confirm or deny claims about what leadership knew and when they knew it.
The trial’s witness schedule also underscores how the case is being built. Microsoft CEO Satya Nadella is scheduled to appear on Monday, with Ilya Sutskever listed to testify after that. That sequencing is telling. Nadella’s appearance signals that the court is likely to hear about the relationship between OpenAI and its major strategic partner, and how that partnership intersected with OpenAI’s governance and business model. Sutskever’s testimony, meanwhile, is especially relevant because he is widely associated with OpenAI’s technical direction and early identity. In a lawsuit that repeatedly invokes the idea of mission, a figure like Sutskever can become a focal point for arguments about whether OpenAI’s trajectory represented a betrayal of founding principles or a pragmatic response to the realities of building frontier AI.
What makes this trial unusually consequential is that it is not simply asking the jury to decide whether OpenAI made mistakes. It is asking them to evaluate whether OpenAI’s evolution constitutes a legal wrong under the specific claims Musk is making. That distinction matters. A mission-driven organization can change its methods without necessarily violating a mission. But Musk’s framing suggests something more fundamental: that OpenAI’s structure and priorities shifted in ways that should have been constrained by its original commitments.
That is why the courtroom discussions described in reporting—about governance, about how OpenAI structured itself, and about how the nonprofit/public benefit framework relates to operational decisions—are not side quests. They are the core mechanics of the case. If the jury concludes that OpenAI’s governance changes were consistent with its mission and legally permissible under its structure, Musk’s requests for removal and structural changes face an uphill battle. If the jury concludes the opposite, the implications extend beyond the individuals named in the lawsuit. They could influence how future mission-oriented AI organizations think about accountability, fiduciary duties, and the boundaries between “public benefit” language and real-world incentives.
One of the most striking aspects of the reporting is how often the trial appears to circle back to the same underlying tension: whether OpenAI’s governance was designed to protect a mission first—or whether it was always, in practice, a vehicle for scaling a high-stakes technology business. Musk’s lawsuit argues that the latter is what happened. OpenAI’s defense argues that Musk’s narrative is distorted and that the lawsuit is tied to competition, including Musk’s own AI efforts.
This is where the trial becomes more than a story about OpenAI. It becomes a referendum on how the public should interpret the evolution of AI companies that begin with moral language and later adopt the structures required to compete in a market dominated by massive capital, compute costs, and strategic partnerships. The jury is effectively being asked to decide whether mission statements and governance structures are enforceable constraints—or whether they are flexible branding that can be reinterpreted once the organization grows.
The testimony of people like Zilis and Murati is likely to be used to illuminate that question. Board members and executives are not just witnesses to events; they are witnesses to understanding. When a board member participates in decisions, the legal significance often lies in what those decisions were intended to accomplish and how they were justified internally. When an executive speaks, the focus tends to shift toward what leadership believed the organization was doing, how it communicated those beliefs, and whether it treated mission commitments as operational constraints or as aspirational goals.
In parallel, reporting on live witness questioning and evidence discussions suggests that the parties are actively contesting the meaning of communications and documents. In cases like this, the “what” and the “why” are inseparable. A document can show that a certain plan existed, but testimony can show whether that plan was treated as a temporary strategy, a necessary compromise, or a permanent pivot. Similarly, a governance change can be technically legal while still being argued as morally or missionally inconsistent. The jury’s job is to decide which interpretation the evidence supports.
Another layer that emerges from the reporting is the way the trial is framed as a competition story. OpenAI’s characterization of the lawsuit as a baseless bid to derail a competitor—along with references to Musk’s SpaceX/xAI/X ecosystem and Grok as a competing product—does more than defend against the lawsuit. It attempts to undermine Musk’s credibility and motive. If the jury believes Musk’s primary goal is to weaken OpenAI competitively, then even if some mission-related rhetoric changed over time, the legal remedy Musk seeks may appear less justified.
But if the jury believes Musk’s claims reflect a genuine concern about mission abandonment—and that the governance structure failed to protect the nonprofit’s stated purpose—then the competitive motive argument may not carry enough weight to negate the legal claims. That is why the trial’s focus on governance and mission is so central. It forces the jury to evaluate whether the organization’s evolution was a legitimate adaptation or a departure that should have triggered accountability mechanisms.
The scheduling of major figures also hints at how the parties are likely to structure their arguments. Nadella’s testimony could bring the jury closer to the question of whether OpenAI’s partnership with Microsoft influenced its governance and priorities. Even if the lawsuit is not directly about Microsoft, the relationship between a nonprofit-like entity and a major corporate investor can shape incentives and operational realities. The jury may be asked to consider whether OpenAI’s governance was insulated from profit pressures or whether it became entangled with them through strategic dependence.
Sutskever’s testimony, coming after Nadella, could then serve as a bridge between the organization’s technical identity and its governance identity. If the trial is about mission, then the jury will want to understand whether the people most associated with OpenAI’s early vision saw the organization’s later direction as consistent with that vision. If they did, that supports OpenAI’s defense. If they didn’t, it supports Musk’s claim that the mission was compromised.
There is also a subtle but important point about how the trial is being conducted: the reporting indicates that the courtroom is dealing with ongoing disputes about what matters are properly before the jury and how testimony should be summarized or interpreted. That procedural friction is common in high-profile trials, but in this case it reflects the stakes. When the claims involve mission, governance, and organizational intent, the line between relevant and irrelevant evidence can become contested. Each side wants the jury to see the story in a particular way, and each side wants to control which details become part of the narrative.
That is why the trial’s “latest on the trial” updates—covering everything from witness testimony to evidence discussions—should be read as more than day-to-day spectacle. They are incremental steps in building a coherent legal argument. Musk’s side is trying to show that OpenAI’s leadership made decisions that violated the spirit, if not the letter, of the founding mission. OpenAI’s side is trying to show that the lawsuit is motivated by rivalry and that OpenAI’s evolution was consistent with its obligations and practical needs.
If there is a unique take to be drawn from the way this trial is unfolding, it is this: the case is not only about whether OpenAI changed. It is about whether the legal system can hold an organization to the mission it proclaimed at its founding—whether governance commitments are enforceable constraints or rhetoric that erodes as the stakes grow. Whatever the jury decides, that is the question this trial has put on the record.
