Nvidia Just Hit $5 Trillion—But Can AMD or Intel Finally Crack Its AI Chip Moat?
The state of play: Nvidia's grip on AI compute

Nvidia's AI accelerators built on Hopper (H100/H200) and now Blackwell (GB200/B200) remain the default choice for training and serving the largest AI models because they pair raw throughput with a full-stack advantage: CUDA software, NVLink/NVSwitch interconnects, and networking that plugs into hyperscale data centers. Bloomberg's explainer today makes the point starkly: investors…
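To make the software side of that moat concrete, here is a minimal sketch, with PyTorch chosen as an illustrative framework rather than one the article names. Most AI code is written against the "cuda" device, and AMD's ROCm builds of PyTorch deliberately expose the same device string so existing code can run without rewrites; the competitive gap then sits in the tuned kernels and libraries underneath, not in the code developers type.

```python
# Illustrative sketch only: why "CUDA lock-in" is mostly a software-ecosystem story.
import torch

def pick_device() -> torch.device:
    # On Nvidia hardware this resolves through CUDA; on AMD hardware a ROCm
    # build of PyTorch reports the same "cuda" backend, which is how rivals
    # try to slot into existing code paths.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(1024, 1024).to(device)   # same user code on any backend
x = torch.randn(8, 1024, device=device)
y = model(x)                                     # performance differences live in the kernels underneath
print(device, y.shape)
```

The point of the sketch: matching the programming interface is the easy part; matching years of optimized kernels, interconnect-aware libraries, and tooling built around CUDA is what AMD and Intel are still chasing.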