A stumble by OpenAI rattles an AI market built on its momentum

OpenAI, the company whose rise helped ignite the market’s fervor for artificial intelligence, is prompting a fresh reassessment across Wall Street and Washington after reports that it fell short of some internal targets for revenue and user growth, even as it widened the channels through which it sells its technology.

The development landed with unusual force because OpenAI has come to occupy an outsized place in the economics of the AI boom. Investors have treated the company as a central source of demand for the chips, cloud capacity and data-center construction underpinning the industry’s expansion. So when reports surfaced on April 28 that OpenAI had missed some internal projections — including, according to the reports, an ambitious goal of reaching one billion weekly ChatGPT users by the end of 2025 — shares of companies tied to that build-out fell, among them Oracle, AMD, Nvidia, CoreWeave and SoftBank.

OpenAI disputed the characterization that its business had stumbled in any meaningful way; executives said the company continued to grow strongly. But the market's reaction made clear how tightly the fortunes of a broad swath of technology companies have become tied to assumptions about OpenAI's trajectory.

That sensitivity is now colliding with another shift: OpenAI is no longer distributing its models through a single dominant cloud partner.

Microsoft’s grip loosens as Amazon gains access

A day before the reports of missed targets rippled through markets, OpenAI and Microsoft said they had revised their relationship in ways that substantially broaden OpenAI’s commercial freedom. Microsoft remains OpenAI’s primary cloud partner, but Azure will no longer have exclusivity. OpenAI can now offer products across other cloud providers, and Microsoft’s license to OpenAI’s intellectual property has become nonexclusive.

The financial terms also changed in important ways. Microsoft will no longer pay a revenue share to OpenAI, while OpenAI’s revenue-share payments to Microsoft continue through 2030, subject to a cap.

The next day, Amazon Web Services said OpenAI’s latest models, Codex, and Bedrock Managed Agents powered by OpenAI would be added to Amazon Bedrock in limited preview, expanding an earlier partnership between the companies. The move gives OpenAI access to a broader customer base and gives Amazon a marquee model provider to offer enterprise clients.

Taken together, the changes suggest that OpenAI is moving from a structure that closely bound it to Microsoft toward one that looks more like a conventional platform strategy: sell as widely as possible, across clouds and channels, to maximize adoption and revenue.

For Microsoft, the revised arrangement preserves a privileged relationship but dilutes its role as sole gatekeeper. For Amazon, it is a chance to strengthen Bedrock, its managed AI service, at a moment when large customers are increasingly reluctant to commit to a single model provider.

The AI trade meets a new reality

The combination of softer growth signals and broader distribution is reshaping what had been one of the market’s clearest narratives.

For much of the past two years, the “AI trade” rested on a relatively simple chain of logic: OpenAI’s products would attract ever larger numbers of users and paying customers; that demand would translate into enormous spending on computing infrastructure; and the beneficiaries would include chipmakers, cloud providers, data-center operators and energy suppliers. Any evidence that one link in that chain is weaker than expected can reverberate through the whole complex.

That appears to be what happened when investors absorbed the reports on OpenAI’s internal forecasts. Even if the missed targets reflect aggressive planning assumptions rather than a fundamental slowdown, they raise a deeper question: whether demand for generative AI is scaling as smoothly and profitably as markets had assumed.

The issue matters especially because OpenAI has made heavy compute commitments. If revenue growth is more uneven than expected, investors may begin to scrutinize whether the enormous capital spending justified by AI enthusiasm will generate returns quickly enough.

Yet the same developments that unsettled investors could, over time, strengthen OpenAI’s position. Selling through multiple clouds broadens distribution, reduces dependence on any one partner and may help the company reach more corporate buyers. The challenge is one of timing: whether a wider route to market can offset near-term concerns over growth and monetization.

In Washington, the Pentagon is making a similar calculation

The shift away from concentration is not confined to Silicon Valley. At the Pentagon, officials are increasingly embracing the idea that relying too heavily on a single AI model is risky.

The Defense Department’s chief digital and AI officials have recently confirmed an expanded use of Google’s Gemini models through the GenAI.mil platform, a sign that the government is broadening the set of tools available across a large internal user base. The message from officials has been direct: dependence on one model provider is “never a good thing.”

That view reflects both technical and political concerns. Different models excel at different tasks, and government users often want flexibility as models improve, costs shift and security requirements evolve. There has also been internal pressure within the department to keep providers on a common baseline and avoid allowing any single company to become too deeply embedded in critical workflows.

The Pentagon’s diversification push follows months of debate over which companies should be entrusted with sensitive government workloads and under what terms. It also mirrors what many large commercial customers are now demanding: multi-model, multi-vendor arrangements that preserve bargaining power and reduce operational dependence.

What comes next

Several questions remain unresolved.

One is whether OpenAI’s reported shortfall reflects a temporary miss against aggressive internal goals or something more structural, such as intensifying competition from rivals including Anthropic and Google. Another is whether the company’s newly broadened cloud distribution will translate into enough incremental sales to reassure investors and partners whose businesses have been built around rising AI capital expenditure.

The Pentagon’s moves raise their own uncertainties. While defense officials have signaled a desire for diversification, it remains unclear which model makers will secure the most sensitive work, and how procurement, security standards and usage restrictions will shape that competition.

For now, the week’s developments have exposed a market and a policy apparatus moving in the same direction. The first phase of the AI boom was defined by concentration — in a few labs, a few cloud providers and a handful of market assumptions. The next phase may be defined less by singular winners than by dispersion: across platforms, across customers and across the institutions deciding that no one model should sit at the center of everything.
