SANTA CLARA, Calif., April 2, 2026, 04:09 PDT
Nvidia is investing $2 billion in Marvell Technology, securing Marvell’s custom silicon for its NVLink Fusion platform—the backbone linking chips and networking hardware in AI servers. Marvell shares jumped about 7% after the announcement landed Tuesday, while Nvidia ticked 2.7% higher.
Demand from major AI players is moving toward chips tailored to their specific requirements, rather than sticking with Nvidia’s standard graphics chips. Nvidia’s approach now involves making sure Marvell’s chips integrate smoothly with its CPUs, network cards, and interconnects—essentially staying present in the data center rack, whether or not its processors are running the show.
Nvidia’s grip on a key overseas market is slipping. IDC data reviewed by Reuters shows Chinese chipmakers holding 41% of China’s AI server chip market in 2025, cutting Nvidia’s share to 55%, a noticeable drop from its earlier dominance, while AMD trails far behind at 4%. Huawei has emerged as the largest local rival.
“The inference inflection has arrived,” Chief Executive Jensen Huang said, describing the moment when trained AI finally delivers practical results. Marvell CEO Matt Murphy called the partnership an opportunity to provide customers with the tools to build “scalable, efficient AI infrastructure.”
eMarketer analyst Jacob Bourne described the deal as Nvidia’s opportunity to access Marvell’s “more specialized silicon,” which could help reduce the “friction” that crops up when third-party chips run in data centers built around Nvidia gear. The two companies also said they’re teaming up on silicon photonics, a tech designed to boost data transfer speeds while using less power.
Marvell is set to provide custom XPUs—processors tailored for specific AI workloads—along with scale-up networking that works with NVLink Fusion. Nvidia, meanwhile, brings its Vera CPU to the table, plus network cards and switches. The two companies are also teaming up on AI-RAN, shorthand for AI-powered radio access technology aimed at 5G and 6G.
Nvidia has a new play as AI budgets shift toward inference—the stage of the workflow where trained models respond to queries and produce answers. In March, KinNgai Chan at Summit Insights Group told Reuters that Nvidia is “definitely going to see more competition” as custom silicon gains traction, especially in inference.
Nvidia is running at a pace few tech firms can match these days. The chip giant locked in a record $215.9 billion in fiscal 2026 revenue back in February. Looking ahead, Nvidia expects about $78 billion in sales for the first quarter of fiscal 2027—a figure that leaves out China data-center chip revenue.
Even so, there are plenty of ways this could unravel. Nvidia and Marvell have flagged risks including regulatory hurdles, swings in demand or supply, and litigation, among other market uncertainties.
Export controls remain a persistent cloud for the sector. Singaporean authorities on Thursday brought charges against a third person tied to a fraud case centered on servers suspected of containing Nvidia chips—an episode casting a spotlight on the intense scrutiny over such shipments.
For now, Nvidia appears to be leaning into pragmatism. Customers may spread their chip orders around, but they’re not dropping Nvidia’s software, networking, or systems know-how when it comes to tying everything together. Competitors may press in, but Nvidia’s spot as the glue in these deployments leaves it tightly woven into the spending stream.