NEW YORK, April 30, 2026, 17:10 EDT
- Nvidia dropped 4.6%. AMD and Broadcom moved higher, a divergence reflecting investor jitters over the push for custom AI chips.
- Alphabet reported a 63% surge in Google Cloud revenue and noted it’s now selling TPU chips straight to select customers.
- Big Tech’s AI outlays keep rising, but the debate has shifted—investors want to know who’s actually seeing the payoff.
Nvidia dropped 4.6% Thursday, buckling despite fresh Big Tech earnings suggesting AI investment is still climbing. The catch? Investors zeroed in on a growing threat—key Nvidia customers are rolling out or marketing their own competing chips. Nvidia closed regular trading at $199.57. AMD added 5.1%, Broadcom picked up 3.0%.
This shift is significant: the AI story has moved past simply tracking Microsoft, Alphabet, Amazon, and Meta’s continued investment in data centers—they’re all still pouring money in. The real question: Will Nvidia’s GPUs, which power the training and operation of AI models, keep soaking up that investment? Or will cloud giants start steering more of those dollars into their own custom chips?
Alphabet led the pack. Google Cloud clocked a 63% jump in first-quarter revenue, outpacing Microsoft Azure’s 40% and Amazon Web Services’ 28%. CEO Sundar Pichai also mentioned that Google began selling its TPU chips straight to certain customers. These tensor processing units—custom AI hardware from Google—go up against Nvidia’s GPUs.
Pichai told analysts that enterprise AI offerings have taken over as the main engine of cloud growth, Reuters reported. The company bumped up its capital spending outlook for 2026 to $180 billion–$190 billion, CFO Anat Ashkenazi noted, but she cautioned that only a small slice of revenue from TPU sales contracts will show up this year, with the bulk delayed to 2027.
Amazon turned up the heat. CEO Andy Jassy said the company’s chips division is now running at an annual revenue pace of over $20 billion. Commitments for Trainium, Amazon’s AI-focused chip, have already crossed $225 billion. Over the last year, Amazon picked up more than 2.1 million AI chips—over half of those were Trainium. Even with those numbers, the company still intends to roll out more than a million Nvidia GPUs starting in 2026.
That’s where things get tricky for Nvidia. AI compute demand is still outpacing supply, yet cloud giants are hunting for bargaining power, cheaper options, and ways to avoid leaning too hard on any one vendor. Lee Sustar, principal analyst at Forrester, told Reuters that Google is picking up new workloads from clients eager to cut dependence on a single cloud platform—they’re drawn to Google’s AI stack, from chips and data centers to models and dev tools.
Nvidia’s outlook remains buoyed by strong spending trends. According to Reuters, the four major U.S. tech firms that reported earnings Wednesday all indicated AI investments aren’t letting up, pushing their combined forecasted spend above $700 billion this year—an increase from the prior $600 billion estimate. Microsoft CFO Amy Hood put it bluntly: demand for Azure’s AI offerings “continues to exceed supply” and keeps broadening.
Meta bumped up its 2026 capex outlook to a range of $125 billion to $145 billion, higher than the earlier $115 billion to $135 billion, pointing to pricier components and rising data-center expenses. CEO Mark Zuckerberg called out “strong momentum” across Meta’s app lineup and flagged the debut of the first model from Meta Superintelligence Labs. First-quarter revenue hit $56.31 billion, a 33% increase.
Meta’s stock took a hit, and investors quickly began drawing sharper lines between AI winners and laggards based on that same round of earnings. “The risk of sitting it out is bigger than the risk of leaning in,” Daniel Newman, CEO of Futurum Group, told Reuters. Still, that leaves a key question on the table: who reaps the rewards first—the chipmakers, the cloud providers, or the software players building with these models?
Scarcity is again playing in Nvidia’s favor. According to Reuters on Thursday, the company’s B300 servers are fetching roughly 7 million yuan—about $1 million—in China, almost twice what they command in the U.S., as demand surges and American export restrictions bite, industry sources said. Nvidia told Reuters the B300 isn’t allowed for sale in China, adding that “unlawful diversion is a recipe for failure.”
The worry: scarcity and big budgets don’t guarantee fat margins forever. Should Alphabet’s TPUs, Amazon’s Trainium, or silicon built through Broadcom-backed projects start capturing a bigger slice of AI workloads, Nvidia might keep moving product but see pricing power erode at the margins. If AI investments take longer to pay off—or export restrictions tighten further—the bullish case for the stock gets tougher to justify.
At this point, Nvidia isn’t playing the sole gatekeeper to AI infrastructure—investors are seeing it as the heavyweight in a pack that’s getting crowded. Its market cap sits near $4.9 trillion. Still, Thursday’s slide made it clear: those massive AI budgets are a double-edged sword, especially when your customers start turning into rivals.