Sam Altman Begins Testimony in Elon Musk's Federal Jury Trial Against OpenAI

Sam Altman’s voice carried through a California federal courtroom as he began testimony in the jury trial brought by Elon Musk—an event that, on its surface, looks like a familiar story about founders and falling-outs, but in practice has become something more like a referendum on how modern AI companies are built, governed, and protected when relationships sour.

Altman, the CEO of OpenAI, is one of the primary defendants in the case alongside OpenAI president Greg Brockman. The dispute is rooted in the early days of OpenAI, when Altman, Brockman, and Musk were all part of the founding orbit and when Musk’s financial involvement—reported as up to $38 million during OpenAI’s early period—helped shape the company’s trajectory. Over time, however, the relationship between Musk and other founders deteriorated. Musk stepped away from OpenAI and later launched xAI, positioning it as a direct competitor in the race to build increasingly capable AI systems.

The courtroom moment matters not just because Altman is a high-profile witness, but because the trial is forcing the parties to translate years of public conflict into legal claims that jurors can evaluate. In other words: the case is not only about what happened, but about what those events mean under the law—what obligations existed, what representations were made, what conduct crossed a line, and what damages, if any, resulted.

From the start, the trial has been framed by the parties’ competing narratives. Musk’s allegations—developed over years of public back-and-forth—have repeatedly suggested that OpenAI’s evolution departed from the original intent and that certain actions by the company or its leadership harmed him. Altman and Brockman, by contrast, have sought to portray the dispute as an attempt to re-litigate business disagreements and governance decisions through the lens of legal wrongdoing. As testimony begins, the jury will be asked to decide which version of the story is credible, consistent, and supported by evidence.

What makes this trial unusually compelling is that it sits at the intersection of three forces that rarely align cleanly: personal relationships among founders, the fast-moving reality of AI product development, and the legal system’s demand for clear, provable facts. In the early days of OpenAI, the company’s mission and structure were still taking shape. The technology was advancing quickly, but the institutional rules—how decisions would be made, who had authority, what commitments were binding—were still being formed. That gap between ambition and formalization is often where disputes begin, especially when the stakes later become enormous.

Altman’s testimony is likely to focus on the mechanics of that evolution: how OpenAI operated, how leadership decisions were made, and how the company’s direction changed as it grew. For jurors, the challenge will be to understand not only the timeline, but the logic behind decisions that may look obvious in hindsight. AI companies don’t develop in a straight line. They pivot. They restructure. They change priorities as new capabilities emerge and as regulatory and competitive pressures intensify. But in court, pivots can be portrayed either as responsible adaptation or as betrayal of earlier commitments—depending on what the evidence shows and what the parties promised at the time.

A key element of the case is the role of Musk’s early investment. Reportedly up to $38 million, that figure is more than a number; it represents leverage, influence, and expectations. When someone invests heavily in a startup, they often assume they are buying into a vision—not just funding a product. Yet startups also evolve beyond their earliest plans. The legal question becomes whether Musk’s expectations were grounded in enforceable agreements or whether they were more akin to founder-level hopes that later proved incompatible with the company’s practical needs.

Altman and Brockman are also likely to address the nature of Musk's departure. Musk's exit from OpenAI is widely known, and his subsequent creation of xAI is part of the broader public narrative. But the trial will require more than acknowledging that Musk left and started a competitor. It will require the parties to explain what was happening internally at the time, what communications took place, and whether any wrongdoing occurred in connection with the transition. Jurors will be asked to separate the emotional storyline, two prominent figures who fell out, from the legal storyline: whether any actionable harm was caused by specific conduct.

There is another layer that makes this trial feel different from typical corporate disputes: the subject matter itself. AI is not just a business category; it is a domain where secrecy, speed, and strategic advantage are central. Companies often guard technical details, model training approaches, and deployment strategies. That instinct to protect information can collide with transparency expectations among stakeholders. In a founder dispute, the question becomes: what information was owed, what was withheld, and whether withholding was justified by legitimate business reasons or instead reflected improper motives.

Altman’s testimony will therefore likely be scrutinized for how he describes decision-making processes. When a company grows, it builds committees, governance structures, and internal controls. Those structures can change over time. If Musk believed he had a continuing role or influence, the defense will likely argue that governance evolved in ways consistent with standard corporate practice and with the company’s evolving needs. The plaintiff, meanwhile, will likely argue that the evolution represented a departure from earlier commitments or a misuse of resources tied to Musk’s involvement.

The courtroom setting also highlights how public conflict becomes evidence. Musk and Altman have traded barbs for years, and the internet has treated their feud as entertainment as much as news. But jurors are not deciding who "won" a Twitter argument. They are deciding what actually happened at the time and whether the law supports Musk's claims. Still, public statements can become relevant if they reflect admissions, contradictions, or contemporaneous beliefs. Altman's testimony will likely be compared against prior statements, some made in interviews, some in filings, and some in communications that may have been introduced as exhibits.

One of the most interesting aspects of this case is that it forces the jury to confront a question that many people outside the tech world rarely consider: what does it mean to found a company with a mission, and then later operate it as a large-scale enterprise? OpenAI’s growth has been accompanied by increasing commercialization pressures, partnerships, and infrastructure demands. Those realities can strain the original “founder idealism” that often motivates early supporters. In court, that tension can become a proxy for a deeper dispute: whether the company’s transformation was inevitable and lawful, or whether it was a betrayal of a specific bargain.

Altman’s testimony may also touch on the broader ecosystem around OpenAI—investors, regulators, and competitors. While the trial is not a referendum on the entire AI industry, jurors will inevitably hear how external pressures influenced internal choices. The defense will likely emphasize that OpenAI’s evolution was driven by the need to build safe, scalable systems and to compete in a rapidly changing market. The plaintiff will likely argue that those explanations do not erase earlier obligations or justify alleged misconduct.

If the trial proceeds in a way typical of high-stakes civil cases, Altman’s testimony will be followed by cross-examination designed to test credibility and consistency. Cross-examination often targets three things: timelines, specificity, and motive. Did Altman describe events with enough detail to be reliable? Do his accounts align with documents? And do his explanations suggest a pattern consistent with the defense’s theory—or do they leave gaps that support the plaintiff’s narrative?

For jurors, the most persuasive testimony is usually the kind that is anchored to concrete facts: dates, meetings, written communications, and specific decisions. Vague recollections can be damaging, especially when the opposing side has documents that contradict or complicate the witness’s memory. Altman’s legal team will likely aim to present a coherent story that matches the documentary record. Musk’s team will likely aim to show that the story is incomplete or self-serving.

There is also the question of damages—what Musk claims he lost, and how those losses should be measured. Even if the jury finds that something improper occurred, the case still hinges on whether Musk can prove harm in a legally meaningful way. In founder disputes, damages can be difficult because the company’s success depends on many variables: market conditions, technological breakthroughs, and the actions of multiple stakeholders. The defense may argue that any alleged harm is speculative or that Musk’s own decisions—such as leaving and starting xAI—break the causal chain.

That causal chain is likely to be a focal point. Musk’s departure and the creation of xAI are not just background facts; they are potential defenses against claims of harm. If Musk chose to compete, the defense may argue, then any resulting competitive disadvantage is not attributable to wrongdoing by OpenAI leadership. Conversely, Musk may argue that the competitive threat was enabled or accelerated by conduct that occurred while he was still involved or based on information or opportunities tied to his investment.

Altman’s testimony, therefore, is not only about what OpenAI did—it is about why it did it, and whether the plaintiff’s theory of causation holds up. Jurors will be asked to connect dots across years, and the attorneys will likely spend significant time guiding them through that process.

Beyond the legal mechanics, there is a human dimension that makes the trial resonate. Altman and Musk were once aligned in a shared mission. Their relationship soured, and the public watched as both men became symbols of competing visions for AI’s future. In court, those symbols become witnesses and arguments. The jury will not be deciding which vision is better. But they will be deciding whether the plaintiff’s claims are supported by evidence and whether the defendant’s conduct meets the legal standards alleged.

This is where the trial's unique take emerges: it is not simply a fight between two tech titans; it is a test of how founder-era promises survive the transition into institutional power. Early-stage companies often run on informal understandings, personal trust, and mission-driven urgency. Later, those companies become complex organizations with formal governance and strategic constraints. When disputes arise, the legal system must translate those informal founder dynamics into the formal language of obligations, conduct, and damages.