SAN FRANCISCO, April 21, 2026, 09:46 PDT
Nvidia slipped $1.98 to $200.08 early Tuesday, reacting to news that Alphabet’s Google is reportedly negotiating with Marvell Technology on fresh AI chips—a move suggesting that key clients remain eager to diversify away from Nvidia hardware. That pullback trimmed Nvidia’s market cap to around $4.53 trillion.
Timing here is key. With Google Cloud Next kicking off in Las Vegas on April 22, attention swings to Google’s tensor processing units, or TPUs, right as the market pivots toward inference—the phase where trained models start producing responses for users.
Google’s Chief Scientist Jeff Dean thinks there’s logic in making chips specifically for either training or inference tasks. Chirag Dekate at Gartner told the Los Angeles Times that “the battleground is shifting towards inference,” emphasizing that speed, power efficiency, and cost are now as important as sheer computing scale.
Google is in talks about a memory processor that could work alongside its TPUs, as well as a next-gen TPU aimed at running models with greater efficiency, according to The Information via Reuters. The tech giant already has an extensive TPU supply agreement with Broadcom running through 2031—another sign of how cloud players are ramping up their custom chip efforts beyond Nvidia’s GPUs, which remain the backbone for AI model training and inference. Google and Marvell did not provide comments to Reuters, which also noted it had not independently confirmed the reported discussions.
Even a slight sign of customer diversification draws attention, given Nvidia’s data center-fueled run. Back in February, the company posted a record $68.1 billion for the fourth quarter, with $62.3 billion coming straight from data center. Looking ahead, Nvidia projected $78 billion for this quarter, noting it isn’t counting on any data-center compute revenue from China.
That’s not a sign demand is slipping. Both TSMC and ASML lifted their forecasts last week, and TSMC CEO C.C. Wei pointed out that cloud clients are giving a “very strong signal” on AI investments — clear evidence hyperscalers aren’t letting up, Reuters reported.
Morgan Stanley on Sunday flagged that agentic AI — think systems handling more planning and action with minimal human input — might push chip spending past just GPUs to include CPUs and memory, even as appetite for graphics chips remains steady. Intel, which supplies server processors that pair with Nvidia’s GPUs, has previously cautioned that the tightest supply pinch for those chips showed up in the first quarter.
Nvidia isn’t staring down a sudden drop in orders. The real risk: it could see its pace of taking share in inference slip if major clients lean more into custom silicon. “Rivals … will want to grab a piece of the market,” Russ Mould, investment director at AJ Bell, told Reuters, adding that customers are also looking to broaden their supplier base.
Even so, Google isn’t poised to unseat Nvidia any time soon. Executives say Google runs a combination of TPUs and GPUs, while Meta is just beginning to trial TPUs for select tasks. Plus, Google has its own supply constraints as appetite for these chips keeps climbing.
Nvidia doesn’t plan to lose out if clients start building their own chips. Last month, the company said it would expand its partnership with Marvell using NVLink Fusion—a system that hooks custom chips directly into Nvidia’s servers and networking hardware. Reuters reported Nvidia put $2 billion into Marvell, aiming to smooth that integration for customers.