UPDATE: NEW YORK, April 8, 2026, 11:26 EDT
In late-morning trading, Broadcom was up 3.6% to $346.00, outpacing Nvidia’s 1.8% rise to $181.36, while Alphabet added 3.7% to $316.68.
In a fresh development, Seaport downgraded Broadcom to Neutral, warning that AI chip suppliers may increasingly have to help finance giant data-center buildouts, a concern sharpened by Broadcom’s own disclosure that the Anthropic deployment is still being discussed with “operational and financial partners,” MarketWatch reported.
Anthropic also said the Google-Broadcom expansion is its biggest compute commitment yet and that more than 1,000 business customers now spend over $1 million annually on Claude, double the level it disclosed in February.
Google Cloud said the added capacity will be delivered through its cloud services as well as Google-built TPUs supplied through Broadcom, which points to an AI-chip race increasingly fought at the full-stack platform level, not just at the chip level.
NEW YORK, April 8, 2026, 07:19 EDT
- Broadcom signed on for a long-term partnership to build Google’s upcoming TPUs and deliver AI rack components through 2031. Anthropic, for its part, secured rights to tap into multiple gigawatts of TPU computing power starting in 2027.
- Nvidia now estimates its Blackwell and Rubin chips could drive over $1 trillion in revenue by the end of 2027, with that figure not even factoring in Rubin Ultra. The company is pushing further into inference and agentic AI.
- Analysts expect Nvidia’s grip on inference to hold for now, but starting in 2027, custom ASICs—chips designed for particular tasks—could begin chipping away at its lead.
Nvidia ticked 0.3% higher to $178.10 just after 7 a.m. ET on Wednesday, but Broadcom stole the spotlight, jumping 6.3% to $333.97 after sealing its Google AI-chip deal. That fresh surge for Broadcom only sharpened focus on a key issue for Nvidia: how much of the coming AI investment will keep flowing to its GPUs. Nasdaq 100 futures were also rallying, up 3.16%, as traders digested news of a U.S.-Iran ceasefire.
In its April 6 filing, Broadcom pledged to work with Google for years to come, promising to develop and deliver future versions of Google’s tensor processing units—TPUs, the custom AI chips designed for Google’s internal needs. The deal also extends to supplying networking gear and additional components for Google’s next-gen AI racks, with the partnership locked in through 2031.
The filing, along with Anthropic’s own statement, makes clear just how fast spending is ramping up. Anthropic said it will secure around 3.5 gigawatts of next-gen TPU-based compute from Broadcom starting in 2027. Claude’s run-rate revenue is already over $30 billion, a leap from roughly $9 billion at the close of 2025. Anthropic also noted it runs Claude on AWS Trainium, Google TPUs, and Nvidia GPUs.
This comes at a tricky juncture for Nvidia. Last month, CEO Jensen Huang pegged the company’s potential revenue from Blackwell and Rubin at more than $1 trillion by 2027, a projection that notably leaves out Rubin Ultra, Reuters reported. eMarketer analyst Jacob Bourne said that vision “underscores the durable demand” for Nvidia’s AI infrastructure. Nvidia isn’t slowing down, either; it’s leaning further into inference (the business of generating responses from trained models) and agentic AI, where software agents actually execute tasks for users.
Nvidia is pushing to blur the lines around custom silicon. On March 31, the company announced Marvell would be joining NVLink Fusion—a rack-scale setup that links semi-custom chips with Nvidia’s CPUs, networking, and the rest of its infrastructure. The move signals that Nvidia wants a bigger chunk of the custom-chip action plugged into its own stack.
Even so, analysts aren’t ignoring the pressure mounting in inference. Summit Insights’ Kinngai Chan told Reuters that Nvidia commands over 90% of the training and inference market right now. That grip could loosen by 2027, he said, as more firms ramp up in-house ASICs, custom chips that are starting to scale, particularly in inference.
Broadcom isn’t shy about backing up its view with figures. Back in March, the company projected its AI chip revenue would top $100 billion by 2027, after AI revenue more than doubled to $8.4 billion in its most recent quarter, Reuters reported. CEO Hock Tan pointed to a “dramatically improved” line of sight into 2027, and D.A. Davidson’s Gil Luria called the outlook a sign of surging demand.
Nvidia’s reach is tough to rival. Back in February, Reuters noted January-quarter sales soared 94% to $68.13 billion—well ahead of expectations. Even so, the company disclosed that just two customers made up 36% of fiscal 2026 sales, underscoring how fast things can shift when major cloud and AI buyers diversify their chip suppliers.
Execution risk is front and center here. Barron’s, citing KeyBanc’s John Vinh, reports that qualification delays for next-gen HBM4 at suppliers SK Hynix and Micron may cut Nvidia’s 2026 Rubin GPU volume from 2 million to around 1.5 million units. Vinh hasn’t budged from his Overweight call, but a lagging Rubin ramp could open a window for competitors as customers start to branch out.
So far, Nvidia’s filings don’t point to demand slipping. Instead, the AI arms race is sprawling: Google is ramping up on TPUs, Anthropic is pushing Claude to run on three different chip families, and Nvidia’s moving to open up NVLink Fusion for a broader range of custom processors. In short, the competitive landscape is shifting—less about Nvidia alone, more about whose ecosystem proves toughest to swap out.