New York, Feb 3, 2026, 08:55 EST — Premarket
- NVDA slipped roughly 2.9% in premarket trading, hitting $185.61.
- A report highlighted OpenAI's efforts to trial non-Nvidia hardware for certain AI "inference" tasks.
- Big Tech earnings will grab attention this week, with Nvidia's results due on Feb. 25.
Nvidia shares slipped 2.9% to $185.61 in premarket trading Tuesday, shedding $5.45 from Monday’s close.
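The reported figures are internally consistent; as a minimal illustrative sketch (not part of the source data), the implied Monday close and percentage move can be reconstructed from the premarket price and the dollar change:

```python
# Quick consistency check of the reported figures (illustrative only).
premarket_price = 185.61   # Tuesday premarket price from the report
dollar_change = 5.45       # reported drop from Monday's close

monday_close = premarket_price + dollar_change    # implied prior close
pct_change = dollar_change / monday_close * 100   # implied percentage drop

print(f"Implied Monday close: ${monday_close:.2f}")  # -> $191.06
print(f"Implied drop: {pct_change:.1f}%")            # -> 2.9%, matching the report
```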
OpenAI's interest in alternatives follows frustration with the speed of some of Nvidia's newest chips at "inference," the stage where a trained AI model generates an output, according to eight sources familiar with the situation. Since last year, OpenAI has been weighing alternatives from Advanced Micro Devices and startups Cerebras and Groq, focusing on chip designs that pack more memory on board. One source suggested the new hardware could eventually handle about 10% of OpenAI's inference computing demands.

Nvidia says customers choose its gear because it offers "the best performance and total cost of ownership at scale," meaning the combined cost to buy and operate the systems. OpenAI confirmed Nvidia powers most of its inference fleet, and CEO Sam Altman tweeted that Nvidia makes "the best AI chips in the world." (Reuters)
Why it matters: inference is where real-world usage first shows up in chip demand, and where major customers can begin diverting orders. Any sign that OpenAI is hunting for faster options elsewhere puts Nvidia's dominance under scrutiny just as Wall Street tries to decode what "AI demand" will actually mean in 2026.
Tuesday's premarket slide follows a robust Monday for U.S. stocks, when chipmakers and other AI-related companies pushed the S&P 500 up 0.54%. SanDisk jumped 15.4%, Micron Technology rose 5.5%, and big tech names such as Alphabet and Amazon, set to report earnings this week, drew attention. (Reuters)
Nvidia remains the leader in chips for training massive AI models. But the battle over inference is heating up, and the competitive focus is shifting away from raw training power toward latency, memory capacity, and the ongoing cost of delivering responses around the clock.
For OpenAI, this plays out in products where speed matters as much as accuracy. For Nvidia, it's a signal that even its largest customers are loyal, but demanding.
Daniel O'Regan, an analyst at Mizuho, noted the news "hurts sentiment around a key customer of [Nvidia] in [OpenAI]." (Investing)
Another risk lingers: investors remain divided on whether the massive AI infrastructure investments will deliver, given scant evidence yet of widespread productivity boosts. Oracle highlighted this caution about AI spending in recent remarks linked to its data-center expansion plans. (Reuters)
Traders are zeroing in on two key questions: will this spark genuine changes in purchasing, and can competitors deliver enough fast inference capacity to make a difference? In the chip world, talk stays cheap until it translates into actual orders.
Nvidia’s upcoming earnings report is the next major event to watch. The company will host a conference call on Feb. 25 at 2 p.m. PT (5 p.m. ET) to go over its fourth-quarter and full-year fiscal 2026 results. (NVIDIA Investor Relations)
For now, the market is focused on one key question: are top AI customers ramping up their Nvidia purchases, or diversifying beyond them?