Broadcom will design and supply Google’s custom AI chips into the next decade, under a long-term agreement announced Monday that signals just how committed big tech remains to building its own silicon rather than feeding Nvidia’s margins forever.

The deal covers future generations of Google’s tensor processing units — the homegrown chips the search giant uses to train and run AI models — as well as other components for Google’s next-generation AI server racks. It runs through 2031. Financial terms were not disclosed.

Broadcom shares rose roughly 3% in after-hours trading.

The Anthropic Connection

Alongside the Google agreement, Broadcom announced an expanded deal with Anthropic that will give the AI startup access to approximately 3.5 gigawatts of computing capacity powered by Google’s TPUs, starting in 2027. That is a staggering amount of infrastructure — for context, 3.5 gigawatts is roughly equivalent to the peak demand of a mid-sized US state.

Anthropic’s chief financial officer, Krishna Rao, called the partnership “a continuation of our disciplined approach to scaling infrastructure” in a blog post. The company said most of the new compute will be sited in the United States, building on a November 2025 commitment to invest $50 billion in domestic computing infrastructure.

Amazon Web Services remains Anthropic’s primary cloud provider and training partner, the startup was careful to note — a reminder that these deals are additive, not exclusive. Anthropic runs its Claude models across AWS Trainium chips, Google TPUs, and Nvidia GPUs.

The Numbers Behind the Demand

The infrastructure spending is being pulled by explosive revenue growth. Anthropic said its run-rate revenue has surpassed $30 billion, more than tripling from approximately $9 billion at the end of 2025. Over 1,000 business customers are now each spending more than $1 million annually on Anthropic products — a figure that has doubled in less than two months, according to the company.

Analysts at Mizuho, led by Vijay Rakesh, estimated after Broadcom’s last earnings call that the company could generate $21 billion in AI revenue from Anthropic alone in 2026, and $42 billion in 2027. Those are analyst projections, not confirmed figures — Broadcom’s Monday filing did not include dollar amounts.

What This Means for Nvidia

The deals underscore a structural shift in how the biggest buyers of AI compute are thinking about hardware. Google’s push to make TPUs a viable alternative to Nvidia’s market-leading GPUs has been underway for years. Reuters reported in December that TPU sales had become a crucial growth engine for Google’s cloud revenue, as the company works to demonstrate to investors that its AI capital expenditure is generating returns.

Broadcom is now the common thread connecting multiple challengers to Nvidia’s position. In addition to the Google and Anthropic deals, the chip designer is collaborating with OpenAI on custom silicon, according to CNBC. OpenAI has also committed to drawing on six gigawatts of AMD’s GPUs — a separate effort to diversify away from Nvidia dependence.

None of this means Nvidia is in trouble. The company’s GPUs still power the vast majority of AI training and inference worldwide. But the direction of travel is clear: the largest consumers of AI compute are working systematically to reduce their reliance on a single supplier, and they are willing to sign decade-long contracts to do it.

Broadcom’s Emerging Monopoly on Custom Silicon Design

What makes Broadcom’s position unusual is that the company isn’t trying to compete with Nvidia directly. It doesn’t sell general-purpose GPUs. Instead, it designs custom chips for specific clients — and in doing so, it has quietly assembled a portfolio of relationships with nearly every major AI player that wants to break free of Nvidia’s pricing power.

Google, Anthropic, and OpenAI are all working with Broadcom. The pattern is consistent: big tech builds its own silicon, and Broadcom is the design shop they keep turning to.

The Spending Question Nobody Is Asking

The implicit bet across all of these deals is that AI demand will continue to grow fast enough to absorb the enormous infrastructure being built. Anthropic’s revenue numbers suggest that, at least for now, demand is outpacing even aggressive projections. But 3.5 gigawatts of capacity coming online in 2027 is a forward commitment based on current growth rates — and growth rates in technology have a habit of flattening when the bill arrives.

For the moment, though, the money is real and the contracts are signed. Broadcom just locked in a decade of work with one of the world’s largest buyers of chips, and the market for AI compute that doesn’t bear Nvidia’s logo keeps getting bigger.