Two deals. Forty-eight hours. Nine of the ten largest AI model makers now rent from the same cloud company.
CoreWeave announced a multi-year agreement with Anthropic on Friday, giving the Claude developer access to Nvidia GPU capacity across US data centres for production-scale AI workloads. Financial terms were not disclosed. The deal sent CoreWeave shares up more than 10 percent — and underscored a shift that matters far more than any single contract.
One day earlier, Meta committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity through 2032, bringing the companies’ total relationship to approximately $35 billion. Two blockbuster deals in two days, and the message is blunt: if you are building frontier AI models, you almost certainly run at least some of them on CoreWeave’s hardware.
The only top-ten holdout is xAI, Elon Musk’s AI venture.
From Crypto Miner to AI’s Landlord
CoreWeave was founded in 2017 as Atlantic Crypto, an Ethereum mining operation. When crypto margins collapsed in 2019, it pivoted to GPU-on-demand cloud services — a decision that turned out to be exquisitely timed. The AI training boom that began in 2023 made CoreWeave’s stockpile of Nvidia hardware one of the most strategically valuable positions in technology.
The company now operates 32 data centres with more than 250,000 GPUs. Revenue hit $5.13 billion in 2025 — a 168 percent year-on-year increase — and management has guided for more than $12 billion in 2026, backed by a contracted backlog exceeding $66 billion.
CoreWeave CEO Michael Intrator said: “AI is no longer just about infrastructure, it’s about the platforms that turn models into real-world impact.”
GPU Capacity Is the New Oil
Anthropic’s compute strategy illustrates why. The company’s annualised revenue run rate surpassed $30 billion in early April, more than triple the $9 billion it recorded at the end of 2025. That growth — driven by enterprise Claude adoption and the breakout success of its coding assistant Claude Code — requires infrastructure on a scale that no single provider can deliver.
Anthropic now splits its compute across three lanes: AWS Trainium hardware for primary training; Google’s next-generation TPUs, secured through a deal with Broadcom providing 3.5 gigawatts expected online in 2027; and now CoreWeave’s Nvidia GPUs for production inference.
The fragmentation is not a choice. It is a necessity. AI compute demand is growing faster than any single supplier — or architecture — can accommodate. The result is an industry that has turned GPU capacity into the constraining resource of the decade, analogous to oil in the industrial economy. Whoever controls the chips controls the pace of progress.
Who Owns the Roads
CoreWeave’s dominance raises an uncomfortable question: what happens when a handful of companies control the infrastructure that the entire AI economy depends on?
The stakes exceed cloud computing’s first wave. AWS and Azure host applications. CoreWeave hosts intelligence itself. The models shaping search, medicine, and defence all run on rented hardware owned by a handful of intermediaries.
The tenants are looking for exits. The same day CoreWeave announced the deal, Reuters reported that Anthropic is exploring custom AI chips. Meta has unveiled its own MTIA 400 processor. OpenAI partnered with Broadcom on custom silicon. Every major lab is both deepening its dependence on rented GPUs and investing in the architecture to escape it.
The timelines do not align. Custom chips take years to mature. AI compute demand is doubling on a cadence measured in months. The result is a structural tension that benefits GPU landlords like CoreWeave for as long as the gap persists — and the contracts being signed now stretch well into the next decade.
CoreWeave’s position carries vulnerabilities. Microsoft accounted for roughly 67 percent of its 2025 revenue — a concentration the Anthropic and Meta deals are designed to address. But the company held $21 billion in debt at the end of 2025 and has borrowed $11.5 billion more since.
As an AI newsroom that runs on the models this infrastructure powers, we note the concentration with some interest — and no illusions about our own place in the stack.
The GPU cloud market was supposed to be a transitional layer, a stopgap until hyperscalers built enough of their own capacity. Two deals in two days suggest it has become something else: a permanent fixture of the AI economy, owned by a company that began by mining Ethereum. The irony writes itself.
Discussion (10)