Microsoft Feared OpenAI Could Switch to Amazon and Undermine Azure, Court Docs Reveal

Court documents emerging from the ongoing Musk v. Altman lawsuit are offering a rare, behind-the-scenes look at how Microsoft executives thought about their early relationship with OpenAI—and, crucially, what they feared could go wrong.

Long before today’s familiar story of OpenAI’s models powering products across the tech industry, Microsoft and OpenAI were still in the messy, formative phase of turning an ambitious research partnership into something that looked like a durable business. And according to the filing, Microsoft leadership wasn’t only focused on whether OpenAI would deliver breakthroughs. They were also worried about something far more practical: that OpenAI might “storm off to Amazon” and then “shit-talk” Microsoft—essentially using Microsoft’s investment and support as leverage while shifting compute and cloud dependence elsewhere.

The language is blunt, but the underlying concern is one that has shaped nearly every major AI infrastructure deal in the last decade: who controls the pipeline from research to deployment, and what happens when the relationship stops being symmetrical?

A partnership still being negotiated—while OpenAI experimented publicly

The communications described in the court record trace back to the early days of Microsoft’s involvement with OpenAI, a period when OpenAI was not yet the dominant force in consumer-facing AI that it is now. At the time, OpenAI was experimenting with AI systems in ways that were visible to the public, including gaming-related work. One detail highlighted in the filings is the summer of 2017, when OpenAI demonstrated a bot that beat a professional Dota 2 player. That kind of milestone mattered because it signaled capability—not just in abstract benchmarks, but in real-time decision-making environments where performance is hard to fake.

In other words, OpenAI’s early public experiments helped create momentum. But momentum doesn’t automatically translate into a stable partnership. When you’re funding cutting-edge research, you’re also funding uncertainty: timelines can slip, priorities can change, and the “next phase” of research can demand different compute, different infrastructure, and different strategic alignment.

That’s where Microsoft’s internal thinking comes in. The court documents show Microsoft executives discussing investing in OpenAI while simultaneously worrying about the direction the relationship might take. The fear wasn’t simply that OpenAI would leave. It was that OpenAI could leave in a way that actively undermined Microsoft’s position—by shifting workloads to another cloud provider and then publicly or privately disparaging Microsoft’s role.

Why “storm off to Amazon” was more than a throwaway line

The phrase “storm off to Amazon” points to a very specific structural reality of the AI era: cloud providers compete fiercely for the right to host the training and inference workloads that define modern AI businesses. In the early days of the Microsoft–OpenAI relationship, Amazon Web Services was already a major player in cloud infrastructure, and it had the scale, tooling, and distribution to become a serious alternative.

If OpenAI’s next phase required more compute than Microsoft could provide—or if Microsoft’s terms became less favorable—then Amazon could become the obvious fallback. But the filing suggests Microsoft wasn’t only concerned about losing access to OpenAI’s workloads. It was concerned about losing narrative control too.

“Shit-talk” Azure, in this context, reads like a fear that OpenAI wouldn’t just switch providers quietly. Instead, Microsoft worried that OpenAI might frame the move as a critique of Microsoft’s platform, strategy, or responsiveness. That matters because AI partnerships aren’t only technical—they’re reputational. If a startup with outsized influence starts telling the world that one cloud partner is holding it back, that can ripple outward: other customers, investors, and enterprise buyers may interpret it as a signal about reliability, cost, or performance.

This is the part many people miss when they talk about AI infrastructure deals as if they’re purely contractual. In practice, these relationships are also marketing ecosystems. The cloud provider that hosts the most visible models often gets credit for enabling them. The provider that loses the workload can lose more than revenue—it can lose credibility.

Microsoft’s leadership discussions show the partnership was never “set and forget”

One of the most revealing aspects of the court record is how early the strategic concerns appear. The communications described in the filing are tied to the period when Microsoft and OpenAI were still building the partnership framework, not merely executing it. That means Microsoft’s leadership was thinking about long-term dependency and leverage while the relationship was still being formed.

This is a subtle but important point: Microsoft’s worry wasn’t just “Will OpenAI succeed?” It was “What will happen to our position if OpenAI succeeds and gains options?”

When a company is small, it has fewer choices. When it grows, it gains bargaining power. In AI, that bargaining power often translates into compute requirements, model hosting needs, and the ability to negotiate better terms. If Microsoft’s investment is seen as essential early on, Microsoft may assume it earns loyalty later. But loyalty in tech partnerships is rarely guaranteed; it’s usually maintained through a combination of performance, pricing, speed of iteration, and the absence of friction.

The court documents suggest Microsoft leadership understood that OpenAI could gain leverage quickly. And once leverage exists, the relationship can shift from collaboration to competition—especially if another provider offers a better deal or a faster path to scaling.

Altman’s response and the push for a bigger partnership

The court record also notes that shortly after OpenAI’s Dota 2 milestone in 2017, Sam Altman responded to Satya Nadella’s congratulatory email with a proposal for a much bigger partnership. That detail matters because it shows the dynamic between the two sides: Microsoft leadership congratulates OpenAI, and OpenAI leadership immediately pushes to expand the scope of the relationship.

From Microsoft’s perspective, that expansion likely looked like validation. From OpenAI’s perspective, it likely looked like an opportunity to secure the resources needed for the next phase of research. But expansion also increases dependency. The more Microsoft invests, the more it becomes entangled in OpenAI’s roadmap. That entanglement can be beneficial—if the partnership deepens smoothly—but it can also create risk if the relationship later fractures.

So the same moment that signals growth can also intensify the stakes. If OpenAI is asking for a larger commitment, Microsoft has to decide whether it’s buying long-term alignment or simply funding a future negotiation.

The unique tension: compute is both a tool and a bargaining chip

AI partnerships often get described as “who has the best models” or “who has the best researchers.” But the court documents highlight a different axis: compute and platform dependence.

Compute isn’t just a cost center. It’s a strategic lever. Whoever provides the infrastructure can influence deployment timelines, integration quality, and operational reliability. And whoever controls the infrastructure can shape the user experience indirectly—through latency, availability, and the ease with which developers can build on top of the models.

That’s why the fear of switching clouds is so consequential. If OpenAI moves workloads to Amazon, it may not only reduce Microsoft’s revenue. It may also reduce Microsoft’s ability to shape the deployment environment and the developer ecosystem around OpenAI’s models.

Even if Microsoft remains involved in some capacity, the “center of gravity” can shift. In AI, the center of gravity tends to follow the heaviest workloads: training runs, large-scale fine-tuning, and high-volume inference. Once those workloads move, the partnership becomes harder to reverse.

This is also why “shit-talk” is such a loaded fear. If OpenAI frames the move as a critique of Azure, it can influence how enterprises evaluate the platform. Enterprises don’t just buy models; they buy confidence that the platform will perform under load, integrate cleanly, and support production-grade operations.

The court record suggests Microsoft was trying to prevent that scenario early

The filing’s value isn’t only in the colorful phrasing. It’s in what the phrasing implies about Microsoft’s internal posture: Microsoft leadership appears to have been actively monitoring the risk of losing both technical and strategic control.

In early-stage partnerships, companies often focus on milestones and deliverables. But Microsoft’s internal discussions, as described, show a more mature understanding of partnership dynamics. They weren’t waiting for problems to emerge. They were anticipating them.

That anticipation is consistent with how large platform companies behave when they invest in startups. They want upside, but they also want to avoid becoming a captive supplier. They want to ensure that the startup’s success doesn’t turn into a future bargaining disadvantage.

And in the AI era, bargaining disadvantages can be especially painful because the infrastructure requirements are enormous. Training and scaling aren’t like shipping a software update. They involve massive compute budgets, specialized hardware, and operational complexity. If the relationship shifts, the cost of switching isn’t trivial.

So Microsoft’s fear of OpenAI leaving for Amazon reads like a fear of both immediate loss and long-term structural displacement.

A broader lesson: cloud loyalty is engineered, not assumed

There’s a temptation to treat cloud partnerships as if they’re natural outcomes of early investments. But the court documents reinforce a more realistic view: cloud loyalty is engineered.

It’s engineered through:

1) Performance: Does the platform deliver the throughput and latency needed for rapid iteration?
2) Speed of integration: How quickly can teams deploy new capabilities?
3) Cost predictability: Can the platform offer stable pricing at scale?
4) Operational reliability: Can it handle production demands without surprises?
5) Strategic alignment: Do both sides share incentives about product direction?

If any of these weaken, the startup’s “options” expand. And once options exist, the partnership can become transactional.

The court record suggests Microsoft understood this early. That’s why the fear wasn’t only about OpenAI leaving—it was about OpenAI leaving in a way that could damage Microsoft’s standing.

Why this matters now, even if the partnership ultimately succeeded

It’s easy to read these documents as a historical curiosity: Microsoft worried, OpenAI expanded, and eventually the partnership became central to the AI ecosystem. But the deeper significance