Microsoft CEO Satya Nadella has told a US court that the 2023 attempt to remove OpenAI chief executive Sam Altman was handled with the kind of improvisation he associates with “amateur city.” The phrase, delivered in testimony tied to Elon Musk’s ongoing lawsuit against OpenAI, underscored how badly the internal leadership crisis was managed and why Microsoft ultimately chose to back Altman rather than the effort to oust him.
The testimony, delivered as part of Musk’s legal challenge, adds another layer to a dispute that has already become one of the most closely watched governance sagas in modern tech. It also reframes the episode not simply as a power struggle inside a fast-moving AI lab, but as a stress test for how major partners interpret risk, legitimacy, and continuity when the stakes are existential and the timeline is compressed.
At the center of Nadella’s account is the question of what Microsoft believed was happening in real time in late 2023, when OpenAI’s board moved to remove Altman and the company’s leadership situation rapidly destabilized. According to Nadella, Microsoft’s decision to support Altman was not a reflexive alignment with one personality over another. Instead, it was a response to what he characterized as a poorly executed attempt to change leadership—one that, in his view, lacked the preparation and seriousness expected from an organization operating at OpenAI’s scale.
That distinction matters because it shifts the narrative from “who won” to “how the process was conducted.” In corporate governance terms, the manner of decision-making can be as consequential as the decision itself. And in the AI industry, where partnerships, compute access, product roadmaps, and regulatory scrutiny all move on tight schedules, the credibility of leadership transitions can determine whether a company retains momentum or fractures under uncertainty.
Nadella’s “amateur city” description was aimed at the mechanics of the attempted removal. While the details of the internal deliberations remain contested in court filings and testimony, the thrust of his remarks was clear: the effort to replace Altman appeared to be executed without the level of planning, communication, and operational discipline that Microsoft would expect from a board making a decision with immediate implications for employees, investors, and strategic partners.
In other words, Nadella suggested that the board’s approach did not match the gravity of the moment. OpenAI was not a typical startup with a small leadership footprint and a flexible operating rhythm. It had become a central node in a global technology ecosystem, with Microsoft as a key partner and with customers and regulators watching closely. A leadership shake-up under those conditions is not merely internal; it reverberates across contracts, staffing decisions, and the confidence of people who depend on continuity.
Microsoft’s backing of Altman, as Nadella explained, was therefore tied to a judgment about organizational competence and stability. If the process looked improvised, Microsoft could reasonably conclude that the company’s future direction might be compromised by governance dysfunction. That is a different rationale from simply preferring one executive’s vision. It is closer to a risk-management calculation: if the leadership transition is chaotic, the organization’s ability to execute may be impaired, and the partner’s own commitments could be jeopardized.
This is where the lawsuit becomes more than a dispute about personalities. Musk’s case has long argued that OpenAI’s governance structure and decision-making have been inconsistent with the promises made to the public and to stakeholders. The litigation also seeks to establish that certain actions were taken in ways that harmed Musk’s interests and, more broadly, undermined the integrity of OpenAI’s mission and oversight.
Nadella’s testimony, while focused on Microsoft’s perspective, effectively supplies evidence about how the crisis was perceived externally. When a major partner describes an internal coup attempt as amateurish, it signals that the board’s actions were not only controversial but also operationally alarming. That perception can influence how courts evaluate claims about intent, process, and the reasonableness of actions taken by third parties.
There is also a strategic dimension to Nadella’s account. Microsoft’s relationship with OpenAI is not limited to a single contract; it is embedded in the infrastructure and product development pipeline that powers some of the most prominent AI systems in the market. In such a relationship, leadership continuity is not a matter of corporate etiquette. It affects hiring, engineering priorities, and the ability to coordinate on research and deployment timelines.
When leadership becomes uncertain, partners face a dilemma: wait and see, or take a position that protects their own operational commitments. Nadella’s testimony suggests Microsoft chose the latter, but with a justification rooted in the perceived quality of the board’s actions. That framing implies Microsoft did not treat the crisis as a normal internal disagreement. It treated it as a governance failure with immediate consequences.
The court context also matters. Musk’s lawsuit is ongoing, and testimony is gradually building a picture of how decisions were made, who influenced them, and what each party believed at the time. Nadella’s remarks are likely to be used to support arguments about the nature of the leadership dispute and the legitimacy of the processes involved. They may also be used to counter narratives that portray the board’s actions as orderly and justified.
At the same time, Nadella’s testimony does not automatically settle the underlying factual disputes about what exactly happened inside OpenAI. Courts still need to weigh competing accounts, documentary evidence, and the credibility of witnesses. But the “amateur city” characterization is powerful because it is specific in tone and because it comes from the CEO of a company that had both leverage and responsibility in the partnership.
It also highlights a recurring theme in the AI sector: governance is often treated as secondary to innovation until the moment it fails. OpenAI’s crisis showed what happens when governance mechanisms do not keep pace with the speed of technological and commercial change. Boards and executives can disagree about strategy, but when the disagreement escalates into abrupt leadership moves, the organization’s internal cohesion can collapse quickly. That collapse then forces external stakeholders to make rapid decisions, sometimes under uncertainty and incomplete information.
Nadella’s testimony suggests Microsoft interpreted the board’s attempt to remove Altman as a breakdown in that governance discipline. The implication is that the board underestimated how quickly the situation would spread beyond the company and how much it would affect partner confidence. In high-stakes ecosystems, legitimacy is not just a legal concept; it is a practical one. People need to believe that leadership changes are being handled responsibly, or they will hedge, delay, or exit.
The lawsuit also brings into focus the tension between technology and governance. AI companies operate in a space where competitive urgency is intense and where the public expects rapid progress. Yet governance structures—especially those involving boards, oversight committees, and mission constraints—are designed for slower, more deliberate decision-making. When those two tempos collide, the result can be instability.
Nadella’s remarks can be read as an argument that OpenAI’s board actions did not respect the tempo required by the organization’s reality. If the board’s plan relied on assumptions that did not hold—such as that employees and partners would accept the change without disruption—then the plan was flawed. And if the plan was flawed, Microsoft’s response becomes easier to justify.
There is also a subtle but important point about how Nadella described Microsoft’s decision. He did not present it as a purely ideological choice about Altman’s leadership style. Instead, he framed it as a response to the quality of the process. That distinction matters because it suggests Microsoft’s support was conditional on governance competence, not personal loyalty.
This framing is distinctive because it treats the crisis less like a drama of competing visions and more like a case study in organizational execution. Through that lens, the “coup attempt” language used by some observers becomes less about the word “coup” and more about the operational characteristics of the attempt: timing, preparation, communication, and the ability to maintain continuity.
For readers trying to understand why this testimony is significant, it helps to consider what is at stake in the courtroom. Musk’s lawsuit is not only about what happened; it is about what those actions mean legally and ethically. Witness testimony can influence how a judge interprets intent and reasonableness. When a witness like Nadella describes a leadership removal attempt as amateurish, it can shape the court’s understanding of whether the actions were taken with due care.
It can also shape how the public understands the episode. Many accounts of the 2023 crisis have focused on the dramatic sequence of events: the board’s move, the backlash, the rapid negotiations, and Altman’s eventual return. Nadella’s testimony shifts attention to the underlying governance behavior that preceded the drama. It suggests that the drama was not inevitable; it was triggered by a process that failed to meet the standards expected by a sophisticated partner.
As the case continues, more testimony is expected to address decision-making, influence, and the relationships among key players in the AI ecosystem. That includes questions about how much control different actors had, what communications occurred, and how each party assessed the risks of supporting one outcome over another. Nadella’s testimony is likely to be cited alongside other evidence to build a timeline of perceptions and actions.
For Microsoft, the stakes are also reputational. Being seen as a partner that intervened in a leadership crisis can raise questions about influence and independence. Nadella’s framing—supporting Altman because the removal attempt was mishandled—helps Microsoft present its involvement as a rational response to governance failure rather than an opportunistic takeover.
For OpenAI, the testimony adds pressure to explain how its board decisions were made and whether the process met the expectations of stakeholders. Even if the board believed it was acting in the company’s best interest, Nadella’s account suggests that the execution fell short. That gap between intention and execution is often where governance disputes become legally combustible.
For Musk, the testimony provides additional material to argue that OpenAI’s governance has been unstable and that key decisions were made in ways that did not align with the organization’s stated mission and oversight commitments. Musk’s lawsuit has repeatedly emphasized that governance failures can have real-world consequences.
