Nvidia’s Huang says “no drama” with OpenAI as ChatGPT maker weighs other AI chips

SAN FRANCISCO, Feb 3, 2026, 12:36 (PST)

  • Nvidia CEO Jensen Huang said the company’s OpenAI investment plans are “on track”
  • Reuters reported that OpenAI has been testing alternatives for certain AI “inference” workloads
  • The episode highlights a broader battle over the chips that drive rapid AI responses, not merely those used for model training

Nvidia Chief Executive Jensen Huang said the chipmaker will invest in OpenAI’s next fundraising round and would consider backing an eventual IPO, rejecting suggestions the talks have soured. “There’s no drama involved. Everything’s on track,” Huang told CNBC.

The reassurance matters because OpenAI is a marquee buyer of AI computing, and Nvidia’s own growth is tied to how fast big customers keep ordering its chips. Any wobble in that relationship would land just as Wall Street is already jumpy about how much cash the AI buildout is soaking up.

Industry focus is moving from model training to running those models cheaply and fast at scale. “Inference” is where a trained model produces an answer to a user’s request, and its speed directly affects products such as chatbots and coding assistants.

Reuters reported on Monday that OpenAI is unhappy with some of Nvidia’s latest AI chips for certain inference tasks and has been looking at other options since last year, citing eight people familiar with the matter. Nvidia, in a statement, said, “Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale.” After the report, OpenAI CEO Sam Altman posted that Nvidia makes “the best AI chips in the world” and OpenAI hoped to remain a “gigantic customer for a very long time.”

Several people cited by Reuters said OpenAI’s concern is how quickly the hardware can deliver answers for particular tasks, such as software development and AI systems that interact with other software. One source said OpenAI is aiming for new hardware that might eventually handle roughly 10% of its inference computing needs.

Reuters said the clearest signs show up in Codex, OpenAI’s coding product. Altman told reporters on a Jan. 30 call that customers using OpenAI’s coding models will “put a big premium on speed for coding work,” according to the Reuters account.

The engineering angle is memory. OpenAI’s search has homed in on chips that cram more memory onto the silicon, including SRAM — a fast type of on-chip memory — because that can cut time wasted shuttling data back and forth.

By contrast, mainstream graphics processing units, or GPUs, typically rely on external memory, which can add delay for workloads that pull lots of data from memory. Reuters noted inference can be especially memory-hungry because the chip spends more time fetching data than doing math.
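To see why in rough numbers, the back-of-envelope Python sketch below compares how long one generated token spends moving weights through memory versus doing arithmetic. The model size, memory bandwidth, and compute throughput are illustrative assumptions for this sketch, not figures from the Reuters report or from any Nvidia, Cerebras, or Groq product.

```python
# Rough roofline-style check of whether a single decode (inference) step
# is memory-bound. All numbers are illustrative assumptions.

PARAMS = 70e9            # assumed model size: 70 billion weights
BYTES_PER_PARAM = 2      # assumed 16-bit weights
PEAK_FLOPS = 1e15        # assumed ~1 PFLOP/s of usable compute
MEM_BANDWIDTH = 3e12     # assumed ~3 TB/s to off-chip memory

# Generating one token requires streaming every weight once and doing
# roughly two floating-point operations per weight (multiply + add).
bytes_moved = PARAMS * BYTES_PER_PARAM
flops_needed = 2 * PARAMS

time_memory = bytes_moved / MEM_BANDWIDTH   # time spent fetching data
time_compute = flops_needed / PEAK_FLOPS    # time spent doing math

print(f"memory time per token : {time_memory * 1e3:.2f} ms")
print(f"compute time per token: {time_compute * 1e3:.2f} ms")
print("memory-bound" if time_memory > time_compute else "compute-bound")
```

With these assumed numbers, streaming the weights takes a few hundred times longer than the arithmetic, which is why chips that keep more data in fast on-chip memory such as SRAM can answer faster even without more raw compute.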

OpenAI has looked at alternatives, including AMD and smaller players such as Cerebras and Groq, Reuters reported. The story also said Nvidia pushed back to protect its lead, even licensing Groq technology in a deal that, one person told Reuters, halted OpenAI’s negotiations with Groq.

Nvidia shares fell roughly 4% in midday trading Tuesday, after investors parsed the tug-of-war around OpenAI and pondered which players benefit as AI moves toward faster, cheaper inference.

The Financial Times called OpenAI’s funding drive “too-big-to-fail,” saying the tech firms queuing to invest now have parts of their own fortunes linked to OpenAI’s prospects.

But switching inference workloads isn’t just a parts swap. Software and AI stacks are tightly tuned to Nvidia’s ecosystem, and rival chip makers still need to show they can deliver at scale and maintain predictable performance when demand surges.

Huang is publicly doubling down, saying Nvidia will invest and that the timetable is intact. OpenAI, while shopping for options in parts of its inference stack, has also signaled it still expects to lean heavily on Nvidia for the bulk of the computing behind its products.
