SANTA CLARA, Calif., April 2, 2026, 04:09 PDT
Nvidia has invested $2 billion in Marvell Technology and is bringing Marvell's custom silicon into NVLink Fusion, its platform for connecting chips and networking gear in AI servers. The news broke Tuesday, pushing Marvell shares up roughly 7%; Nvidia added 2.7%.
This shift is happening as big AI customers start asking for chips tailored to their own workloads, not just Nvidia's general-purpose GPUs. Nvidia wants Marvell's chips to play nicely with its own CPUs, network cards, and interconnects, keeping a foothold in the data center rack even when it isn't supplying the primary processor.
The deal comes as Nvidia cedes ground in a crucial foreign market. According to IDC figures seen by Reuters, Chinese manufacturers took 41% of China's AI server chip market in 2025, leaving Nvidia with 55%, down from its previous dominance. AMD's slice is just 4%, and Huawei stands out among the domestic competitors.
“The inference inflection has arrived,” Chief Executive Jensen Huang said, referring to the point when trained AI systems begin producing real-world answers. Marvell CEO Matt Murphy described the partnership as a way to give customers what they need to build “scalable, efficient AI infrastructure.”
Jacob Bourne, an analyst at eMarketer, called the deal a chance for Nvidia to tap Marvell's “more specialized silicon,” potentially smoothing out “friction” when third-party chips operate in data centers dominated by Nvidia hardware. The companies also said they will collaborate on silicon photonics, a technology aimed at speeding up data transfers while cutting power consumption.
Marvell's role in the deal is to deliver custom XPUs, processors built for particular AI tasks, plus scale-up networking compatible with NVLink Fusion. Nvidia is supplying the rest of the rack: its Vera CPU, along with network cards and switches. The two firms also plan to collaborate on AI-RAN, AI-driven radio access technology for 5G and 6G networks.
Nvidia now has a fresh angle to capture AI dollars as spending pivots from model training to inference, the phase where trained models field questions and deliver results. Back in March, Kinngai Chan of Summit Insights Group told Reuters that Nvidia is “definitely going to see more competition” as custom silicon picks up steam, particularly in inference.
Nvidia's push comes as the company operates at a scale rarely seen in tech. In February, the chipmaker posted a record $215.9 billion in fiscal 2026 revenue. For the first quarter of fiscal 2027, it projected sales of around $78 billion, an outlook that excludes China data-center chip revenue.
Still, things could go sideways. Nvidia and Marvell have both warned that the partnership faces risks from possible regulatory pushback, shifting demand and supply, litigation, and broader market conditions.
Export controls still hang over the sector. On Thursday, Singapore prosecutors charged a third individual in a fraud case involving servers that authorities suspect contained Nvidia chips, underscoring how tightly such shipments are monitored.
Nvidia, for the moment, seems to be betting on pragmatism: customers might diversify their chip orders, but they’ll still rely on Nvidia’s software, networking, and systems expertise to integrate everything. Rivals could keep pushing in, yet Nvidia’s role in holding these setups together keeps it deeply embedded in the spending flow.