Nvidia Stock Stumbles as Google’s AI Chips Gain Ground — But Record Earnings Show Its Power

Updated November 28, 2025

Nvidia is still the king of AI chips — but for the first time in years, Wall Street is openly wondering how long that crown will stay secure.

Fresh off a blockbuster quarter with record revenue and data‑center sales, Nvidia’s share price has shed tens of billions of dollars in value in November as investors digest intensifying competition from Google, stricter China controls, a global memory crunch, and persistent chatter about an “AI bubble.”  [1]

Here’s how the world’s most valuable chipmaker is navigating its bumpiest month in a long time — and what it means for the future of AI infrastructure.


Nvidia Just Posted One of the Biggest Quarters in Corporate History

Despite the market jitters, Nvidia’s underlying business has never looked stronger.

In its fiscal third quarter of 2026 (the three months ended October 26, 2025), Nvidia reported the following; the growth math is sanity‑checked in the short sketch after the list:  [2]

  • Total revenue of $57.0 billion, up 22% quarter‑over‑quarter and 62% year‑over‑year.
  • Data‑center revenue of $51.2 billion, up 25% from the previous quarter and 66% from a year earlier.
  • GAAP and non‑GAAP gross margins of about 73–74%, among the highest ever reported by a major chipmaker.
  • Diluted earnings per share of $1.30.
  • Guidance for Q4 revenue around $65 billion, again above analyst expectations.  [3]
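
Those growth rates are easy to sanity‑check against the headline numbers. Here's a minimal Python sketch, using only the figures reported above; the prior‑period revenues it prints are derived from the percentages rather than quoted from the release:

    # Back out the prior-period revenue implied by a reported growth rate.
    # Figures in billions of USD, from Nvidia's Q3 FY2026 release cited above.
    total_q3 = 57.0   # total revenue
    dc_q3 = 51.2      # data-center revenue

    def implied_prior(current: float, growth_pct: float) -> float:
        """Prior-period figure implied by `current` and its growth rate."""
        return current / (1 + growth_pct / 100)

    print(f"Implied prior-quarter total revenue: ${implied_prior(total_q3, 22):.1f}B")  # ~$46.7B
    print(f"Implied year-ago total revenue:      ${implied_prior(total_q3, 62):.1f}B")  # ~$35.2B
    print(f"Implied prior-quarter data center:   ${implied_prior(dc_q3, 25):.1f}B")     # ~$41.0B
    print(f"Implied year-ago data center:        ${implied_prior(dc_q3, 66):.1f}B")     # ~$30.8B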

CEO Jensen Huang summed up the demand backdrop with a line that’s already doing the rounds in markets: Blackwell‑generation sales are “off the charts” and “cloud GPUs are sold out,” with compute demand accelerating for both AI training and inference.  [4]

On the latest industry MLPerf Training v5.1 benchmarks, systems based on Nvidia’s Blackwell architecture delivered the fastest time to train on every tested model, sweeping the entire suite — from giant Llama 3.1 models to recommendation systems and vision workloads.  [5]

Taken in isolation, this is the profile of a company in the absolute center of a once‑in‑a‑generation technology build‑out.

So why is the stock suddenly so shaky?


November Sell‑Off: Bubble Fears and a $200 Billion Question

Nvidia’s share price hit an all‑time high in late October. Since then, it has fallen roughly 10–12%, erasing nearly $200 billion in market value at one point, even as earnings smashed expectations.  [6]

Several overlapping worries are spooking investors:

  • Valuation and “AI bubble” talk
    Big‑name skeptics, including “Big Short” investor Michael Burry, have likened Nvidia to Cisco at the peak of the dot‑com bubble, questioning whether AI infrastructure spending and circular deals will ultimately justify today’s lofty market cap.  [7]
  • Circular AI deals
    Nvidia has invested in customers like OpenAI, Anthropic, and CoreWeave while also selling them huge volumes of GPUs and cloud capacity, creating a web of intertwined equity stakes and long‑term commitments. Some on Wall Street worry that this can inflate apparent demand and earnings power, echoing patterns seen before previous tech busts.  [8]
  • Profit‑taking after a historic run
    Nvidia became the first company to briefly cross the $4 trillion valuation mark earlier this year; November has seen some large investors exit or trim stakes, adding fuel to the pullback.  [9]

Yet even cautious commentators admit that the business itself remains extraordinarily strong, with cash flow closely tracking reported earnings and the AI build‑out still in its early stages.  [10]

The bigger structural risk now in front of Nvidia isn’t demand collapsing — it’s competition, especially from Google.


Google’s TPUs and the Meta Deal: Nvidia Meets Its Toughest Rival

The single most important headline for Nvidia this month: Meta may spend tens of billions of dollars on Google’s AI chips instead of Nvidia’s.

According to reporting from The Information and Reuters, Meta is in talks with Google to:  [11]

  • Rent Google’s tensor processing units (TPUs) via Google Cloud as soon as 2026.
  • Deploy TPUs in its own data centers from 2027, potentially representing a shift of billions of dollars in annual spending.
  • Give Google a shot at capturing up to 10% of Nvidia’s current annual AI‑chip revenue, based on internal Google Cloud estimates cited in the reports.  [12]

The market reaction was swift:

  • Nvidia shares fell around 4–6% on the news.  [13]
  • Alphabet’s stock jumped, with investors betting that TPUs can finally rival Nvidia’s GPUs at scale.  [14]

At the same time, Google’s launch of its Gemini 3 AI model — trained and served on its own TPU hardware — has showcased a credible alternative compute stack that promises lower costs for certain inference workloads.  [15]

Analyses from Barron’s, AI infrastructure commentators, and others argue that:  [16]

  • TPUs can deliver substantially better cost‑performance for large‑scale inference (serving AI models in production).
  • Nvidia still has the edge in flexible training and diverse compute workloads, thanks to its CUDA ecosystem and broad software support.
  • Big customers increasingly want both — GPUs for development and experimentation, TPUs for high‑volume inference — weakening Nvidia’s lock‑in over time.

In other words, Nvidia isn’t being replaced, but it is being surrounded.


Nvidia Pushes Back: “A Generation Ahead” and a “Unique Position”

Nvidia has not taken the TPU narrative lying down.

In comments to the press and in statements to investors, the company has stressed that its GPU platform remains “a generation ahead” of custom AI chips like Google’s TPUs, especially for cutting‑edge training workloads and multi‑cloud deployments.  [17]

Key talking points from Nvidia leadership over the last few days:

  • “We are a generation ahead” – In a statement reported by Business Insider, Nvidia said that while it’s “delighted” by Google’s success, its own chips remain ahead in performance and flexibility, and it continues to supply GPUs to Google Cloud as well.  [18]
  • “Unique position in global AI competition” – During a visit to Taipei on November 28, Jensen Huang told reporters that the AI market is extremely large, competition is intense, but Nvidia’s position is “very unique” and its condition “very strong.”  [19]
  • “AI ecosystem is scaling fast” – On the Q3 earnings call, Nvidia argued that demand from hyperscalers, startups, and enterprises is compounding, and that the company expects to be a leading platform for a projected $3–4 trillion in annual AI infrastructure spending later this decade.  [20]

Nvidia also has hard numbers to back up its confidence:

  • Blackwell architecture systems have swept recent MLPerf Training benchmarks, including training a 405‑billion‑parameter Llama 3.1 model in just 10 minutes with 5,120 GPUs (see the back‑of‑envelope check after this list).  [21]
  • The GB200 NVL72 rack‑scale system can deliver up to 30x faster inference on trillion‑parameter LLMs compared with previous‑generation Hopper GPUs, underpinned by fifth‑generation NVLink interconnects.  [22]
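
For a sense of scale, that Llama 3.1 result works out to a surprisingly modest compute budget per run (MLPerf measures a standardized training task, not a full from‑scratch training). A back‑of‑envelope Python sketch; the single 8‑GPU comparison node and the perfect‑scaling assumption are ours, purely for illustration:

    # Back-of-envelope compute budget for the MLPerf Llama 3.1 405B run
    # described above: 10 minutes on 5,120 Blackwell GPUs.
    gpus = 5_120
    minutes = 10

    gpu_hours = gpus * minutes / 60
    print(f"GPU-hours per benchmark run: {gpu_hours:,.0f}")  # ~853

    # For contrast, a hypothetical single 8-GPU server, assuming (optimistically)
    # perfectly linear scaling, would need:
    days_on_one_node = gpu_hours / 8 / 24
    print(f"Days on one 8-GPU node: {days_on_one_node:.1f}")  # ~4.4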

Still, the optics have shifted: Nvidia is now on the defensive in the narrative battle, even if the numbers are still firmly on its side.


China: ByteDance Ban Shows How Fast the Ground Is Moving

If Google is the competitive threat, China is the geopolitical one — and it’s getting sharper.

On November 26, Reuters reported that Chinese regulators have barred ByteDance (TikTok’s parent company) from using Nvidia chips in new data centers, pressuring the company to adopt domestic AI processors instead.  [23]

Key details from that report:  [24]

  • ByteDance bought more Nvidia chips in 2025 than any other firm in China.
  • The ban is part of Beijing’s broader drive to reduce reliance on U.S. technology, especially as Washington tightens export controls on advanced semiconductors.
  • Chinese regulators had already asked local firms to halt new purchases of Nvidia AI chips and pushed them towards local vendors.
  • New state‑funded data‑center projects in China are now required to use only domestically produced AI chips.

This comes on top of a complex export‑control regime from Washington that has already:  [25]

  • Blocked sales of Nvidia’s highest‑end H100 / H200 / Blackwell‑class chips to China.
  • Forced Nvidia to create a series of “China‑only” chips (A800/H800, then H20/L20/L2, and potentially B30 variants) with downgraded interconnect and performance.
  • Led to additional friction, with Chinese regulators raising “security concerns” about some of those downgraded chips and even blacklisting certain models.

Commentary from analysts and independent writers argues that China’s major tech giants are rapidly investing in their own alternatives — such as Huawei’s Ascend line — and may eventually need far fewer Nvidia chips than before, regardless of U.S. export policy.  [26]

For Nvidia, China has gone from being one of its most promising growth markets to one of its most unpredictable.


Supply Crunch: Memory Shortages, Wafer Orders, and $26B of Cloud Spend

Another structural tension for Nvidia is supply. Demand may be enormous, but GPUs don’t appear from thin air — and the supply chain is straining.

Rumored VRAM Policy Change for Board Partners

A report from Tom’s Hardware this week, citing a Weibo leaker, claims that Nvidia has stopped bundling video memory (VRAM) with the GPU dies it sells to add‑in‑board partners, forcing them to source memory separately from suppliers like Samsung, Micron, or SK Hynix.  [27]

The backdrop:

  • AI data centers are absorbing a massive share of the world’s advanced DRAM and HBM capacity.
  • Memory makers have sharply raised prices, and retail GPU channels are already feeling the squeeze.  [28]
  • For larger board partners, independent VRAM sourcing is manageable; smaller vendors may struggle, potentially leading to higher prices or consolidation in the GPU aftermarket.  [29]

The report is still labeled as a rumor, but it lines up with a broader pattern: AI is hijacking memory and packaging capacity, and everything else — including consumer gaming GPUs — is adapting around that fact.

TSMC Capacity and Pre‑Booked Wafers

On the manufacturing side, analysis from independent semiconductor observers suggests that:  [30]

  • Blackwell and Blackwell Ultra GPUs use TSMC’s most advanced nodes (including custom 4NP and 3nm‑class processes).
  • Nvidia has reportedly boosted its wafer orders by about 50% for 2024–2025, locking up a large share of TSMC’s leading‑edge capacity.
  • This pre‑booking could limit rivals’ access to cutting‑edge AI process nodes through 2026, but it also means there is no quick way to dramatically increase global GPU supply: fabs are essentially full, and demand is still rising.

Renting the Cloud: $26 Billion for External GPU Capacity

Nvidia isn’t just selling chips; it’s also becoming one of the biggest buyers of cloud compute.

A recent regulatory filing shows that Nvidia has doubled its commitment to rent cloud‑based compute, planning to spend about $26 billion over the next six years on servers hosted by major cloud providers.  [31]

This strategy helps Nvidia:

  • Secure guaranteed access to GPUs for its internal R&D, software, and services.
  • Deepen ties with cloud partners that also resell its hardware.
  • Position itself as both an arms dealer and a heavy user inside the AI ecosystem.

But it also raises questions about how much of the AI boom is being financed by intertwined commitments between the same handful of big players.


The Roadmap: Blackwell, Rubin, and Feynman

Underneath all the near‑term drama, Nvidia continues to roll out an aggressive multi‑year roadmap.

  • Blackwell (shipping now)
    • 208‑billion‑transistor GPUs built on a custom TSMC 4NP process.
    • Second‑generation Transformer Engine and FP4 support for ultra‑efficient training and inference (the memory sketch after this list shows why precision matters at this scale).
    • Fifth‑generation NVLink that can connect up to 576 GPUs, enabling trillion‑parameter models to train and run as if on a single giant accelerator.  [32]
  • Rubin (targeted for 2026)
    • Nvidia’s next‑generation platform after Blackwell, expected to deliver further performance gains; management has suggested that combined Blackwell‑plus‑Rubin revenue could exceed a previously floated $500 billion target through the end of 2026.  [33]
  • Feynman (expected 2028)
    • A future GPU microarchitecture announced at GTC 2025, to be manufactured by TSMC with high‑bandwidth memory and design work accelerated using Nvidia’s own Blackwell GPUs.  [34]
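
To see why FP4 support and a large NVLink domain reinforce each other, consider the memory footprint of model weights alone. A minimal sketch using nothing beyond bits‑per‑parameter arithmetic (weights only; activations, KV cache, and optimizer state come on top):

    # Weight-memory footprint of a 1-trillion-parameter model at different
    # precisions. Weights only; activations, KV cache, optimizer state extra.
    params = 1_000_000_000_000

    for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
        terabytes = params * bits / 8 / 1e12
        print(f"{name}: {terabytes:.1f} TB of weights")
    # FP16: 2.0 TB, FP8: 1.0 TB, FP4: 0.5 TB

    # Even at FP4, half a terabyte of weights dwarfs any single GPU's memory,
    # which is why an NVLink domain of up to 576 GPUs (per the roadmap above)
    # is needed to present such a model as one logical accelerator.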

Nvidia has effectively turned its GTC conference into the “Super Bowl of AI,” where each new architecture — Hopper, Blackwell, Rubin, Feynman — is unveiled as part of a broader vision for where AI compute is headed over the next decade.  [35]


What It All Means for Nvidia — and for AI

From a structural perspective, Nvidia now sits at the intersection of at least four powerful forces:

  1. Unrelenting AI demand
    Every major cloud provider, social platform, enterprise software giant, and many startups are racing to build AI “factories,” and Nvidia remains the default hardware choice for training large‑scale models. Q3 results and guidance show that this demand is far from saturated.  [36]
  2. Rising competition from custom silicon
    Google’s TPUs, Amazon’s Trainium/Inferentia, Meta’s in‑house chips, and China’s domestic accelerators are all trying to peel off slices of AI workloads — especially inference. Nvidia’s moat is its software stack (CUDA, libraries, tools) and its ubiquity across clouds, but the share of AI compute not running on Nvidia silicon is clearly growing.  [37]
  3. Geopolitical and regulatory pressure
    Export controls, China‑specific chip variants, ByteDance‑style bans, and potential restrictions on Blackwell‑class exports are reshaping where Nvidia can sell its most profitable products. Policy risk now sits alongside technology risk on the company’s strategic dashboard.  [38]
  4. Supply‑chain and capital intensity
    Locking up TSMC capacity, navigating a brutal memory shortage, and committing tens of billions of dollars to cloud compute make Nvidia both unbelievably powerful and deeply entangled in global manufacturing and finance.  [39]

For the broader AI industry, Nvidia’s current wobble is less about the end of the GPU era and more about the end of Nvidia’s uncontested dominance. A world where Google, Amazon, Chinese champions, and others all provide viable alternatives is one where:

  • Pricing power gradually shifts from “take‑it‑or‑leave‑it” to negotiated, multi‑vendor deals.
  • Software portability (PyTorch, JAX, and cross‑platform runtimes) becomes as important as raw FLOPS (see the sketch below).
  • The AI hardware stack starts to look less like a single‑vendor standard and more like a competitive market.
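
That portability point is concrete at the code level. As a minimal illustration (the toy kernel below is our own example, not drawn from the cited sources), the same JAX function compiles for whichever backend is installed, whether an Nvidia GPU, a Google TPU, or a plain CPU:

    # The same jitted JAX function runs unmodified on CPU, Nvidia GPU (CUDA),
    # or Google TPU, depending on which backend the installed jaxlib supports.
    import jax
    import jax.numpy as jnp

    print(jax.devices())  # e.g. CPU, CUDA, or TPU devices, per installation

    @jax.jit
    def scaled_dot_product(q, k):
        # Toy attention-style kernel; XLA compiles it for the local backend.
        return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

    q = jnp.ones((4, 64))
    k = jnp.ones((4, 64))
    print(scaled_dot_product(q, k).shape)  # (4, 4) on any backend

When the code layer is this portable, switching accelerators looks more like a procurement decision than a rewrite, which is precisely the dynamic behind the Meta and Google talks described above.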

Nvidia’s bet is that, even in that world, its combination of performance, ecosystem, and scale will keep it at the center.


Key Things to Watch Over the Next 12 Months

For readers tracking Nvidia, NVDA stock, or the AI chip race more broadly, here are the main storylines to monitor:

  • Whether the Meta–Google TPU deal actually closes, and at what scale. A finalized, multi‑year agreement would be the clearest sign yet that hyperscalers are serious about reallocating spend away from Nvidia.  [40]
  • How quickly Google can turn TPU buzz into broad third‑party adoption. If more big AI companies and enterprises adopt TPUs for inference, Nvidia’s share of that segment could erode faster than expected.  [41]
  • Further China‑related actions. Additional guidance from Washington or Beijing on AI chips — or more bans like ByteDance’s — could reshape Nvidia’s revenue mix and push China further toward domestic solutions.  [42]
  • Signs of easing (or worsening) supply constraints. Watch for updates on HBM availability, memory pricing, and TSMC capacity — as well as confirmation or denial of Nvidia’s rumored changes to VRAM supply for board partners.  [43]
  • Execution on Blackwell and the transition to Rubin. If Nvidia can keep Blackwell ramping smoothly, maintain its MLPerf lead, and roll Rubin out on schedule, it strengthens the argument that it can outrun competition by sheer technological pace.  [44]

Right now, Nvidia is still the central pillar of the AI era. But November 2025 may go down as the month when the rest of the ecosystem — from Google and Meta to Beijing and Washington — finally proved it has real leverage too.

References

1. nvidianews.nvidia.com
2. nvidianews.nvidia.com
3. nvidianews.nvidia.com
4. nvidianews.nvidia.com
5. developer.nvidia.com
6. m.economictimes.com
7. m.economictimes.com
8. www.investopedia.com
9. www.businessinsider.com
10. m.economictimes.com
11. www.reuters.com
12. www.reuters.com
13. www.reuters.com
14. www.reuters.com
15. www.barrons.com
16. www.barrons.com
17. www.digitimes.com
18. www.businessinsider.com
19. focustaiwan.tw
20. www.investopedia.com
21. developer.nvidia.com
22. www.nexgencloud.com
23. www.reuters.com
24. www.reuters.com
25. www.congress.gov
26. medium.com
27. www.tomshardware.com
28. www.tomshardware.com
29. www.tomshardware.com
30. macaron.im
31. techstrong.it
32. www.nvidia.com
33. www.investopedia.com
34. en.wikipedia.org
35. en.wikipedia.org
36. nvidianews.nvidia.com
37. www.reuters.com
38. www.reuters.com
39. macaron.im
40. www.reuters.com
41. www.barrons.com
42. www.reuters.com
43. www.tomshardware.com
44. developer.nvidia.com
