Taipei, April 6, 2026, 19:12 (UTC+8)
On Sunday, Nvidia received another bullish sign as Foxconn—its largest server supplier—posted a 29.7% year-over-year jump in first-quarter revenue, fueled by strong demand for artificial-intelligence (AI) gear. March revenue surged 45.6%, hitting a new monthly high. Foxconn expects momentum for AI server-rack systems to extend into the second quarter. Reuters
The update lands as investors continue pressing for proof that the AI sector’s mammoth infrastructure push is actually generating concrete orders, especially after recent market swings and fresh doubts about returns. By the end of March, Nvidia’s forward price-to-earnings multiple had slid to its lowest point since 2019. Analysts, though, were still looking for earnings growth north of 70% this fiscal year. Foxconn, meanwhile, flagged the “volatile” political and economic climate. Reuters
Nvidia turned in record fourth-quarter revenue of $68.1 billion, with data-center sales alone hitting $62.3 billion. For the full year, sales reached $215.9 billion. Looking ahead, the company is guiding for $78 billion in first-quarter revenue, noting specifically that the figure excludes any revenue from China data-center compute products, a reminder that access to that market remains constrained. CEO Jensen Huang commented at the time, "Enterprise adoption of agents is skyrocketing." NVIDIA Investor Relations
Foxconn isn't the only part of Nvidia's supply chain sending bullish signals. Samsung is on track for a record profit in the first quarter, as AI-fueled memory prices go vertical. Analysts point to a boost in its contract chipmaking unit, helped by a new partnership with Nvidia for AI inference chips. During GTC in March, Huang revealed that Samsung had already started making Nvidia's Groq LP30 inference processor on its 4-nanometer process, with shipments due in the second half. And when it comes to memory, Daol Investment & Securities analyst Ko Yeongmin didn't mince words: "You couldn't ask for things to be better." Reuters
Inference—the phase where an AI model responds to prompts instead of training on fresh data—has emerged as the latest battleground in the chip sector. That’s part of the rationale behind Nvidia’s $2 billion bet on Marvell last week, aimed at making it simpler for clients to integrate their own chips into Nvidia-centric data centers. eMarketer analyst Jacob Bourne told Reuters this should help cement Nvidia as a “key access point” for a wider spectrum of AI workloads. Reuters
There’s heat on multiple fronts. Broadcom expects AI-chip revenue to blow past $100 billion next year, driven by a surge in demand for custom silicon. Meta, for its part, has mapped out plans to develop its own training and inference chips, though it’s still snapping up significant orders from Nvidia and AMD. Reuters
China stands out as the main stress point here. According to The Information, as cited by Reuters on Friday, DeepSeek’s upcoming V4 model is set to use Huawei chips. Data reviewed by Reuters last week shows Chinese suppliers grabbed 41% of China’s AI accelerator server market in 2025, cutting Nvidia’s share to 55%. Huawei pulled ahead of local rivals including Alibaba’s T-Head, Baidu’s Kunlunxin, and Cambricon. Reuters
Still, things could shift. A bipartisan group in Congress is pushing for tougher export rules on chipmaking equipment headed to China, a step that would only widen the rift in the global semiconductor chain. Summit Insights' KinNgai Chan points out that Nvidia is "definitely going to see more competition" as inference work increasingly shifts to application-specific integrated circuits, chips designed to handle a single task at scale. Reuters
Right now, Foxconn’s results give a clearer sense of demand than the recent jitters swirling around Nvidia. The tougher issues—returns, rivals, and just how much custom chipmakers can siphon off—are still coming. Reuters