The courtroom drama between Elon Musk and Sam Altman is no longer just a story about two famous tech figures disagreeing in public. It has become a referendum—at least in the eyes of the people watching closely—on what OpenAI was supposed to be, what it became, and whether the law can meaningfully adjudicate “mission drift” in a fast-moving industry where business models evolve as quickly as products.
The trial is now underway: jury selection began on April 27, opening arguments have followed, and Musk has taken the stand as testimony begins. The case is framed as a fight over OpenAI’s direction: Musk argues that the company abandoned its founding purpose of developing advanced AI for the benefit of humanity and instead shifted toward profit-seeking priorities. OpenAI, by contrast, insists that Musk’s claims are baseless and that the lawsuit is less about governance and more about derailing a competitor—particularly one that now sits at the center of the modern AI boom.
At the heart of the dispute is a question that sounds philosophical but is being litigated in practical terms: when an organization changes how it operates, who gets to decide whether that change is legitimate? And if founders disagree about the meaning of “the mission,” what evidence matters most—internal communications, board decisions, or the public narrative that followed?
Musk’s position: governance, control, and the meaning of the mission
Musk’s legal theory is rooted in his status as a cofounder of OpenAI and his claim that he was effectively maneuvered out of influence. According to the allegations described in coverage leading into the trial, Musk contends that Altman and cofounder Greg Brockman tricked him into providing money to the organization, only for the leadership to later turn away from the original goal. In other words, the lawsuit is not merely about whether OpenAI pursued commercial success; it is about whether the pursuit of commercial success came at the expense of a promise that Musk believes was foundational.
That framing matters because it shapes what Musk is asking the court to do. He is not simply seeking damages for past conduct. He is also seeking structural remedies—requests aimed at changing how OpenAI operates. Among the demands reported in connection with the case are efforts to remove Altman and Brockman from their roles and to stop OpenAI from operating as a public benefit corporation. Musk has also demanded substantial damages, with reporting indicating he is seeking up to $150 billion for the nonprofit entity if he prevails.
Those requests are sweeping enough that they force the trial to confront a difficult legal reality: courts are often cautious about ordering remedies that would dramatically reshape corporate governance, especially in organizations that have grown into complex, multi-layered entities. But Musk’s strategy appears to be to make the governance question unavoidable. If the jury accepts that OpenAI’s leadership departed from the mission in a legally significant way, then the requested remedies become part of the argument rather than an afterthought.
In the courtroom, Musk’s own testimony is being used to establish context and credibility. Reports from the trial indicate that Musk told the jury he cofounded Tesla, a detail that may seem tangential to OpenAI at first glance, but in trials like this it serves a familiar function: it situates the witness as someone with deep experience in building major technology companies and frames his perspective as that of an operator, not merely an outsider with grievances.
There is also a broader rhetorical theme emerging from the early proceedings: Musk is positioning himself as someone who cared about the mission enough to take risks and invest, and who feels betrayed by the outcome. That emotional narrative is not automatically persuasive in court, but it can influence how jurors interpret the evidence—especially when the evidence itself involves contested interpretations of intent and communications.
OpenAI’s response: mission drift as a pretext for competition
OpenAI’s counter-narrative is blunt. The company says the lawsuit has always been a baseless attempt to derail a competitor. In statements attributed to OpenAI, the case is characterized as jealous and opportunistic—an effort to undermine OpenAI while boosting Musk’s own ecosystem of companies.
This is where the trial becomes more than a dispute about corporate ideals. OpenAI’s argument suggests that Musk’s motivations are not primarily about protecting a mission, but about protecting market position. Coverage indicates that OpenAI points to Musk’s SpaceX/xAI/X interests and to the fact that xAI launched Grok as a competitor to ChatGPT. The implication is that the lawsuit functions as a strategic weapon: if OpenAI can be forced into governance changes or constrained in its operations, then the competitive landscape shifts.
OpenAI’s lawyers also appear to be challenging the idea that Musk’s involvement should be treated as a moral trump card. Reporting indicates they argue that Musk was right in the middle of discussions about a for-profit pivot, and that his claims of mission abandonment ignore the complexity of how OpenAI actually evolved. If that assertion is supported by evidence, it could undercut the notion that Musk was blindsided by a sudden betrayal.
This is a key point for jurors: mission language can be broad and aspirational, but governance decisions are concrete. If the evidence shows that Musk participated in or agreed to the kinds of structural changes that enabled scaling, then the “abandonment” story becomes harder to sustain. Conversely, if the evidence shows that Musk was excluded from decision-making or misled about the direction, then OpenAI’s competitive motive argument may not carry enough weight to neutralize the governance claims.
What makes this trial unusual is that both sides are trying to win on intent. Musk’s side wants the jury to believe that leadership intentionally shifted away from the mission and did so in a way that harmed the nonprofit’s purpose. OpenAI’s side wants the jury to believe that Musk’s intent is to disrupt a rival and that the legal claims are a vehicle for that disruption.
The jury selection and early proceedings: credibility, bias, and the theater of tech fame
Even before opening arguments, the trial was shaped by the realities of selecting jurors in a case involving globally recognized personalities. Reporting around jury selection noted jurors’ attitudes toward Musk, including attempts by his lawyer to strike jurors who said they disliked him. That detail matters because it highlights how juror perception—whether conscious or not—can become a factor in high-profile cases.
In a dispute like this, jurors must separate the person from the claim. They are asked to evaluate evidence about governance and mission, not to decide whether Musk is likable or whether Altman is admirable. Yet in practice, jurors are human, and both sides’ strategies often include efforts to manage the jury’s baseline assumptions.
The early courtroom moments described in coverage—such as Musk being the first witness—also signal that the trial is moving quickly into the question of what Musk personally believed and what he personally did. That approach can be effective if it anchors the case in firsthand knowledge. But it also carries risk: if jurors perceive the testimony as self-serving or overly broad, it can weaken the credibility of the overall narrative.
A unique angle: the trial as a test of how “mission” survives scale
One reason this case has captured attention beyond the usual legal audience is that it touches a problem many mission-driven tech organizations face: what happens when the mission requires resources that only a profit-oriented structure can reliably provide?
OpenAI’s evolution—from a nonprofit-rooted vision to a system that includes for-profit elements and partnerships—has been widely discussed in the industry. But the trial forces those discussions into a legal framework. It asks whether the mission was truly abandoned, or whether it was adapted to survive.
This is where the trial’s stakes feel bigger than the personalities. If the jury concludes that mission drift is actionable in court, then future AI organizations may face heightened scrutiny over how they structure incentives, allocate profits, and justify governance changes. If the jury concludes that the mission was adapted legitimately—or that Musk’s claims are too speculative—then the case may reinforce the idea that mission language is not enforceable in the way plaintiffs hope, especially when organizations evolve under competitive pressure.
In other words, the trial could influence how founders draft mission statements and how boards document decisions. It could also influence how investors and partners interpret governance commitments. Even if the verdict is narrow, the reasoning could become a reference point for future disputes.
The “public benefit corporation” issue: why corporate form matters
Musk’s request to stop OpenAI from operating as a public benefit corporation is not a mere technicality. Corporate form determines obligations, reporting requirements, and how decision-makers are expected to balance competing goals. A public benefit corporation is designed to embed additional purposes beyond shareholder value. If Musk argues that OpenAI’s actions violated those purposes, then the corporate form becomes central to the legal theory.
But OpenAI’s response suggests that the lawsuit is not really about corporate form—it’s about leverage. If OpenAI can convince the jury that Musk’s claims are pretextual, then even a strong argument about corporate structure may not matter as much.
This is why the trial’s evidence will likely focus on internal governance documents, board minutes, communications, and the timeline of decisions. The question is not simply whether OpenAI made money. The question is whether the organization’s leadership treated the mission as subordinate to profit in a way that violates the legal duties associated with its structure.
The damages demand: what $150 billion signals about the case
Musk’s reported demand for up to $150 billion in damages is striking, and it signals something about the posture of the lawsuit. Large damages requests can be interpreted as a negotiating tactic, but they can also reflect a belief that the harm is systemic and measurable.
In mission-related disputes, damages are often difficult to quantify. Plaintiffs may argue that the nonprofit’s purpose was undermined and that the organization’s trajectory caused measurable losses. Defendants typically argue that the claimed harm is speculative, that causation is unclear, and that the requested amount is disproportionate.
