- Huawei unveils ambitious AI chips & supercomputers: The Chinese tech giant ended years of secrecy to detail a roadmap for new Ascend AI chips (models 950 in 2026, 960 in 2027, 970 in 2028) and massive computing clusters, claiming it will double computing power each year reuters.com. Huawei’s rotating chairman Eric Xu said the company will follow a “1-year release cycle and double compute with each release.” reuters.com
- In-house high-bandwidth memory (HBM): Huawei announced it has developed its own proprietary HBM chips, branded HiBL and HiZQ, breaking reliance on foreign memory suppliers SK Hynix and Samsung reuters.com. The upcoming Ascend 950 chips will integrate this HBM, with Huawei claiming cost and performance comparable to leading HBM3E memory (up to 128–144 GB capacity and 1.6–4.0 TB/s bandwidth) reuters.com.
- “Supernode” AI clusters to rival Nvidia: Huawei is launching new AI supercomputing systems (dubbed Atlas SuperPods) that network thousands of Ascend chips at high speed. The Atlas 950 SuperPod (Q4 2026) will interconnect 8,192 Ascend 950 chips in 160 server cabinets, and Huawei boasts it will “far exceed its counterparts across all major metrics,” even outperforming Nvidia’s planned 2026 AI system reuters.com. A larger Atlas 960 (2027) with up to 15,488 chips is also in the works reuters.com. Huawei says dozens of these SuperPods can link together into the world’s most powerful “SuperClusters” for AI abcnews.go.com.
- Fresh challenge to Nvidia amid chip bans: The timing is dramatic – Huawei’s reveal comes just as China’s regulators reportedly ordered tech firms to halt purchases of Nvidia AI chips and even launched a probe accusing Nvidia of violating anti-monopoly laws reuters.com. Beijing’s cyberspace administration directed companies (like Alibaba and ByteDance) to cancel orders of Nvidia’s latest AI GPU, in what analysts call a “stronger than earlier” ban on Nvidia hardware reuters.com. Nvidia’s CEO Jensen Huang, caught in the crossfire, noted both Washington and Beijing “have larger agendas to work out” in this AI chip standoff reuters.com.
- Technical self-sufficiency despite sanctions: Experts say Huawei’s move underscores China’s drive for tech self-reliance. “This is a significant milestone… a stronger push toward self-reliance and resilience in the face of export restrictions,” said Charlie Dai of Forrester Research abcnews.go.com. Huawei had been crippled by U.S. sanctions since 2019 – including export controls barring TSMC from making its chips – but now indicates that domestic chip manufacturing is “no longer such a big constraint” reuters.com. Analyst Tilly Zhang noted growing confidence that U.S. export controls “are not really threatening” Huawei’s progress anymore reuters.com.
- Nvidia still ahead (for now): Despite advances by Huawei and other Chinese firms, Nvidia’s AI processors still outperform current Chinese chips in many tasks reuters.com. Chinese tech companies have not fully abandoned Nvidia – both Alibaba and Baidu continue using Nvidia GPUs for their most cutting-edge AI models, even as they experiment with in-house chips reuters.com. Nvidia dominates roughly 80% of the accelerator market and China accounted for ~13% of its sales last year reuters.com. The company’s GPUs (like the A100/H100 series) remain the gold standard for AI training, thanks to both hardware and a rich software ecosystem.
- Rivals mobilize: AMD, Intel, Alibaba, Baidu: Huawei’s push comes amid a broader AI chip race. U.S. competitor AMD is rolling out its MI300/MI350 series accelerators and just unveiled a new “Helios” AI super-server for 2026 to challenge Nvidia’s high-end systems reuters.com. Intel is ramping its Gaudi AI chips – its latest Gaudi3 accelerators are debuting in Dell servers, promising comparable performance at lower power than Nvidia GPUs crn.com. In China, cloud giants Alibaba and Baidu have designed their own AI chips (like Alibaba’s new inference chip and Baidu’s Kunlun chips) and recently began using them to train AI models, partly replacing Nvidia hardware reuters.com. Beijing is heavily backing such efforts, pressuring firms to adopt homegrown tech and pouring billions into China’s semiconductor sector reuters.com.
- Geopolitics: tech war escalation: Huawei’s announcement was “carefully timed” ahead of a meeting between Xi Jinping and Donald Trump reuters.com, and it highlights the high-stakes U.S.-China tech rivalry. U.S. sanctions aimed to choke Huawei’s access to advanced chips (e.g. blocking EUV lithography machines), yet China’s engineers demonstrated a 7nm chip breakthrough in Huawei’s new smartphone reuters.com – a feat achieved “without EUV tools,” signaling China’s resilience reuters.com. American officials are expected to scrutinize these developments; analysts warn Huawei’s chip gains could trigger even tighter export curbs from Washington as the “tech Cold War” intensifies reuters.com. As one industry expert put it, companies like Nvidia are now “pawns in a digital Cold War” between two superpowers reuters.com.
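The doubling cadence in the bullets above is easy to make concrete. The sketch below treats the 2026 Ascend 950 as a relative baseline of 1.0 and compounds Huawei's stated double-per-release claim; the multiples are illustrative arithmetic, not real chip specifications.

```python
# Back-of-envelope check of Huawei's claimed cadence: "double compute
# with each release" on a one-year cycle. The 2026 Ascend 950 is the
# baseline (1.0); later figures are relative multiples, not specs.
BASELINE_YEAR = 2026

def relative_compute(year: int) -> float:
    """Relative compute under a strict doubling-per-year assumption."""
    return 2.0 ** (year - BASELINE_YEAR)

for year, chip in [(2026, "Ascend 950"), (2027, "Ascend 960"), (2028, "Ascend 970")]:
    print(f"{year} {chip}: {relative_compute(year):.0f}x baseline")
```

Under that assumption, the 2028 Ascend 970 would have to deliver four times the 2026 baseline's compute, which is the scale of commitment Xu announced.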
Huawei’s New AI Chip Offensive: Technical Breakthroughs
Huawei’s latest moves mark its boldest bid yet to break free from foreign semiconductors and establish itself as a leader in artificial intelligence computing. At its annual Huawei Connect conference in Shanghai, the company outlined a sweeping long-term chip strategy, openly detailing plans that had been shrouded in secrecy since U.S. sanctions hit in 2019 reuters.com. The headline announcement: Huawei will rapidly iterate its Ascend AI chips and accompanying systems, doubling computing power every year in an aggressive cadence aimed at catching up to (and even surpassing) global rivals reuters.com.
Central to this strategy is Huawei’s Ascend AI processor lineup – its answer to Nvidia’s GPU chips reuters.com. The company revealed a roadmap for three new Ascend generations in the next three years: the Ascend 950 (set for Q1 2026), Ascend 960 (2027), and Ascend 970 (2028) reuters.com. The Ascend 950 will even come in two specialized variants (950PR and 950DT) tuned for different AI tasks like recommendation inference versus training and decoding reuters.com. Each new chip generation is expected to deliver a two-fold jump in performance and memory capacity over the previous, a leapfrog pace that Huawei says will keep it on the cutting edge reuters.com.
Notably, Huawei claims it has solved a key piece of the hardware puzzle in-house: high-bandwidth memory (HBM). This ultra-fast memory, essential for AI and high-performance computing, has until now been dominated by Samsung and SK Hynix. Eric Xu announced that Huawei now has its own proprietary HBM technology reuters.com. Dubbed HiBL 1.0 and HiZQ 2.0, Huawei’s HBM chips will be integrated into its AI processors – for example, the Ascend 950PR will feature HiBL 1.0 providing 128 GB of memory at 1.6 TB/s bandwidth reuters.com, while the higher-end 950DT version uses HiZQ 2.0 with 144 GB at a blistering 4 TB/s reuters.com. If these specs hold true, Huawei’s memory would be on par with or even exceed the latest HBM3E standard in capacity – a significant step toward self-sufficiency in a critical component of AI hardware reuters.com. By developing its own HBM, Huawei reduces reliance on imported memory chips and can tightly optimize the memory–processor integration for its systems.
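As a rough plausibility check, the claimed packages can be compared against a generic HBM3E stack. The per-stack yardstick used below (~36 GB, ~1.2 TB/s) is an approximate public ballpark figure for current HBM3E parts, not anything Huawei disclosed; the capacity and bandwidth claims come from the article.

```python
# Rough comparison of Huawei's claimed HBM packages against a generic
# HBM3E stack (~36 GB, ~1.2 TB/s per stack -- approximate public
# figures, used here only as a yardstick).
import math

STACK_GB, STACK_TBS = 36, 1.2

claims = {
    "HiBL 1.0 (Ascend 950PR)": (128, 1.6),   # GB, TB/s per the article
    "HiZQ 2.0 (Ascend 950DT)": (144, 4.0),
}

for name, (cap_gb, bw_tbs) in claims.items():
    stacks_cap = math.ceil(cap_gb / STACK_GB)   # stacks needed for capacity
    stacks_bw = math.ceil(bw_tbs / STACK_TBS)   # stacks needed for bandwidth
    print(f"{name}: ~{stacks_cap} stacks for capacity, ~{stacks_bw} for bandwidth")
```

By this yardstick, the 950DT’s claimed 144 GB at 4 TB/s looks like roughly a four-stack HBM3E-class assembly, while the 950PR trades bandwidth for capacity – consistent with its recommendation-inference focus.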
Equally audacious is Huawei’s plan to deploy its chips in massive “supernode” computing clusters that could vault China into the supercomputing elite. Rather than matching rivals chip-for-chip, Huawei is leveraging its strengths in system engineering – packaging many chips together with advanced networking – to create AI supercomputers of unprecedented scale. At the conference, Huawei unveiled designs for its next-generation AI computing clusters, called Atlas SuperPods, which function as building blocks of even larger AI SuperClusters abcnews.go.com. Each SuperPod is essentially a supercomputer rack (or set of racks) filled with Ascend AI processors, all linked with high-speed interconnects (an area where Huawei’s telecom hardware expertise gives it an edge).
The upcoming Atlas 900 series SuperPods push this concept to extremes. Huawei’s current flagship cluster, the CloudMatrix 384 (Atlas 900 A3), uses 384 Ascend 910C chips and already “rivals some of Nvidia’s most advanced offerings,” according to industry experts reuters.com. Building on that, Huawei announced the Atlas 950 SuperPod for late 2026, a behemoth that will pack 8,192 Ascend 950 chips interconnected across 160 cabinets reuters.com. For context, that is roughly 57–114 times the number of chips in Nvidia’s current high-end DGX systems (which use 72–144 GPUs), and the cluster will occupy about 1,000 m² of floor space reuters.com. Huawei is making big claims about this system: Xu said the Atlas 950 will offer 6.7× the computing power and 15× the memory capacity of Nvidia’s top-of-the-line system planned for 2026 (the NVL144 server) reuters.com. In other words, Huawei believes its 2026 supercomputer can decisively outgun Nvidia’s upcoming DGX-class system. Moreover, Xu asserted that Atlas 950 will “continue to beat” whatever successor system Nvidia is planning in 2027 reuters.com.
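Huawei’s own comparison implies something about the per-chip gap. A quick back-of-envelope using only the numbers above (8,192 Ascend chips, 144 Nvidia GPUs, and the claimed 6.7× system-level compute advantage) is illustrative arithmetic, not a benchmark:

```python
# What Huawei's system-level claim implies per chip: Atlas 950 = 8,192
# Ascend 950s vs Nvidia's NVL144 = 144 GPUs, with a claimed 6.7x total
# compute advantage. Figures are from the article; this is back-of-
# envelope arithmetic, not a measured benchmark.
ascend_chips, nvidia_gpus = 8192, 144
claimed_compute_ratio = 6.7

# Implied compute of one Ascend 950 relative to one NVL144-class GPU:
per_chip = claimed_compute_ratio * nvidia_gpus / ascend_chips
print(f"Implied per-chip ratio: {per_chip:.3f}")  # roughly 0.12
```

Taken at face value, Huawei’s own claim concedes that each Ascend 950 delivers on the order of 12% of a next-generation Nvidia GPU’s compute, with sheer chip count and interconnect making up the difference.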
Huawei isn’t stopping there. A year later, it aims to roll out an even mightier Atlas 960 SuperPod in 2027, featuring up to 15,488 Ascend 960 chips across 220 cabinets reuters.com. If achieved, that would likely be one of the largest AI-dedicated supercomputers ever built. Such scale is designed to compensate for any one chip’s shortcomings by harnessing thousands in parallel – a strategy very much in line with China’s philosophy of solving tech limitations through sheer scale and engineering integration.
This emphasis on cluster architecture is a deliberate workaround to China’s current handicap in semiconductor fabrication. Huawei’s AI chips are believed to be manufactured by local foundries (the company did not disclose which, but analysts point to SMIC, given Huawei is banned from TSMC’s cutting-edge fabs) reuters.com. These domestic foundries still lag a generation or two behind industry-leading processes. In practice, that means a single Huawei Ascend chip might not match the raw performance of Nvidia’s latest H100 or upcoming Blackwell GPUs built on the most advanced 5nm/3nm nodes. Huawei’s solution: use many chips and superior system design to close the gap. “Huawei is leveraging its strengths in networking, along with China’s advantages in power supply, to aggressively push supernodes and offset lagging chip manufacturing,” observed Wang Shen, an Omdia tech analyst reuters.com. By interconnecting thousands of Ascends with fast links (Huawei calls this a “supernode” architecture), the company can achieve aggregate performance that rivals or exceeds a smaller cluster of more powerful chips.
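The tradeoff Wang describes can be sketched with a toy model. Every number below is invented for illustration – the per-chip throughputs and scaling efficiencies are not Huawei or Nvidia specifications – but the sketch shows how a large fleet of weaker chips can out-aggregate a small fleet of stronger ones, provided interconnect overhead stays low:

```python
# Toy model of the "more chips + fast interconnect" tradeoff described
# in the text. All parameters are invented for illustration only; none
# are real Huawei or Nvidia figures.
def cluster_throughput(chips: int, per_chip: float, efficiency: float) -> float:
    """Aggregate throughput assuming a flat parallel-scaling efficiency."""
    return chips * per_chip * efficiency

# A big cluster of weaker chips vs a small cluster of stronger ones:
weak = cluster_throughput(chips=8192, per_chip=0.12, efficiency=0.85)
strong = cluster_throughput(chips=144, per_chip=1.00, efficiency=0.90)
print(f"weak-chip cluster: {weak:.0f}, strong-chip cluster: {strong:.0f}")
```

With these illustrative parameters the large cluster comes out roughly 6.4× ahead, showing how a system-level claim like Huawei’s 6.7× can coexist with a large per-chip deficit – as long as the interconnect keeps scaling efficiency high.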
This approach was validated earlier in the year when Huawei quietly debuted the CloudMatrix 384 system. SemiAnalysis, a semiconductor research group, noted that Huawei’s CloudMatrix (384 Ascend 910C chips) “outperforms Nvidia’s [equivalent] NVL72 system on some metrics” reuters.com. The performance stems from Huawei’s system design capabilities, which compensate for weaker individual chip performance by using more chips and system-level innovations reuters.com. In May, even Nvidia’s CEO Jensen Huang acknowledged Huawei’s progress, saying the Chinese firm had been “moving quite fast” and citing the CloudMatrix as an example reuters.com. In effect, Huawei is attempting to brute-force its way into AI leadership by combining second-best components in first-class ways. It’s a bit like building a supercar not with the single most powerful engine, but with a fleet of smaller engines working in unison.
Beyond AI training chips, Huawei is also updating its Kunpeng line of server CPUs (general-purpose processors akin to Intel Xeon or AMD EPYC, based on ARM architecture). Xu announced new Kunpeng 950 and 960 CPUs slated for 2026 and 2028, along with a TaiShan 950 cluster system for mainstream computing reuters.com. These indicate Huawei’s broader strategy to tackle data center computing holistically – not just the AI accelerators but also the CPU platform – using domestic designs. While details on Kunpeng were sparse, this aligns with China’s push to replace Western CPU chips in servers (like Intel’s) with homegrown alternatives.
Taken together, Huawei’s announcements demonstrate a full-stack computing ambition: proprietary chips, memory, networking, and the integration know-how to build giant systems. This marks a dramatic comeback for a company that, after U.S. sanctions in 2019, had gone almost silent on its chip development. By publicly rolling out a three-year chip roadmap and boasting about world-class supercomputers, Huawei is effectively declaring that it has re-emerged as a serious contender in high-end semiconductors reuters.com. As Xu put it on stage, “We will follow a one-year cycle and double compute with each release,” signaling Huawei’s intent to iterate at Silicon Valley speed despite the sanction-induced detour reuters.com.
Nvidia’s Dominance – and Huawei’s Direct Challenge
Huawei’s bold foray is aimed squarely at the market leader, Nvidia, which for years has enjoyed unassailable dominance in AI chips and computing infrastructure. Nvidia’s GPUs are the engine behind everything from ChatGPT to autonomous driving research – so much so that in 2023–2024 a global surge in AI demand led to Nvidia reporting record revenues and becoming one of the world’s most valuable companies. It is no exaggeration that Nvidia currently sets the pace in AI hardware, with over 80% market share in data center AI accelerators (the exact figure varies by estimate). Its flagship H100 GPUs (5nm-based, with advanced tensor cores and enormous memory bandwidth) are widely regarded as the most powerful chips for training large AI models. Moreover, Nvidia offers a full-stack solution – not just chips, but also the CUDA software platform, AI libraries, and data center systems (DGX servers) – which has fostered a rich ecosystem around its products.
For Chinese companies, Nvidia’s hardware has been the gold standard. “Companies largely rely on Nvidia’s powerful processors for AI development,” noted Reuters, describing China’s AI landscape reuters.com. Even Huawei itself was a big Nvidia customer in the past for its cloud services. However, geopolitical currents have begun to cut off China’s access to Nvidia’s best technology, creating both urgency and opportunity for alternatives. The U.S. government’s export controls (since late 2022) barred Nvidia from selling its top-tier A100 and H100 chips to China; Nvidia responded by making slightly neutered versions (A800, H800, and reportedly an “H20” variant) for the Chinese market reuters.com. But these “downgraded” chips still faced potential restrictions – indeed, the H20, the most advanced chip Nvidia was allowed to sell in China, was effectively blocked by a U.S. decision earlier this year reuters.com. (In a twist, the U.S. administration temporarily allowed H20 sales to resume last month, but only after extracting a concession – more on that in the geopolitics section below – and China has since moved to forbid even those sales reuters.com.)
The result is that Nvidia’s grip on the Chinese AI market is suddenly looking tenuous, precisely as Huawei and others unveil domestic solutions. Nvidia’s chips still have a performance edge – engineers at Chinese firms privately admit Nvidia GPUs “perform better” than local alternatives for many tasks reuters.com. In fact, neither Alibaba nor Baidu is “fully abandoning” Nvidia yet; they continue to use Nvidia silicon for their most demanding AI models while using their own chips for less critical workloads reuters.com. And despite Huawei’s confidence, it will take time for its Ascend chips to prove they can match the efficiency and reliability of Nvidia’s offerings, especially in real-world, at-scale deployments. Nvidia also benefits from its well-established software ecosystem (CUDA, cuDNN, TensorRT, etc.), which developers have optimized for – any new chip must offer not just hardware but also compatible software frameworks to be a true replacement.
That said, Huawei is positioning its new hardware directly against Nvidia’s top products. The timing and messaging of Huawei’s announcements made the target clear. Xu explicitly contrasted the upcoming Atlas SuperPods with Nvidia’s planned systems, claiming significant performance advantages reuters.com. Huawei even cited specific Nvidia systems by name (e.g. Nvidia’s NVL144 system for 2026) in its comparison – a rarity for a company to so openly challenge a competitor’s roadmap. Furthermore, the context of China reportedly banning Nvidia chip imports the very same week gives Huawei’s pitch extra punch: if Chinese firms cannot buy Nvidia’s GPUs, Huawei is eager to present itself as the viable made-in-China alternative.
On the technical front, Huawei’s Ascend 910/950 chips are the closest analog to Nvidia’s A100/H100 GPUs – they’re designed for AI training and inference, boast matrix computation features, and support a scalable interconnect. Early Ascend models (like the 910 launched in 2019) showed promise on paper but didn’t seriously dent Nvidia’s dominance, partly due to software and ecosystem lag. However, the latest Ascend 910C (launched Q1 2025) and the forthcoming 950 series could narrow the gap reuters.com. Huawei’s integration of proprietary HBM into these chips is one attempt to leapfrog: memory bandwidth is crucial for AI workloads, and having 4 TB/s on-chip bandwidth (as claimed for Ascend 950DT) is cutting-edge reuters.com. By controlling the memory stack, Huawei can optimize data flow much as Nvidia does with the co-packaged HBM on its H100.
Another area Huawei is exploiting is networking and scalability. Nvidia’s solution to scale beyond one machine has been NVLink and InfiniBand networking, often within its DGX and HGX server platforms. Huawei, being a networking powerhouse, has developed its own high-speed interconnect for the Ascend clusters (sometimes called CubeMesh or similar in earlier literature, and referred to as “supernode architecture” now reuters.com). This is similar in spirit to Nvidia’s NVLink Switch and NVSwitch technologies that allow many GPUs to act as one. However, Nvidia traditionally sold individual GPUs or small DGX nodes; now the trend (which Nvidia itself is also following) is to sell entire AI supercomputers.
In fact, the competition is shifting from chips to integrated AI systems. As an AMD executive recently pointed out, the market is moving toward selling “servers packed with scores or hundreds of processors, woven together with networking chips from the same company” rather than just standalone chips reuters.com. Nvidia recognized this and has been offering products like the DGX SuperPOD (clusters of DGX nodes) and the upcoming Nvidia NVL series (like NVL72, NVL144, which are essentially pre-configured multi-GPU servers). Huawei’s Atlas 900/950 series is essentially China’s answer to Nvidia’s DGX/NVL systems – a fully Huawei-controlled stack of hardware that it can sell to data centers or cloud providers as turnkey AI infrastructure.
By touting metrics where its systems excel (total compute, total memory), Huawei is exploiting the one advantage it can claim over a sanction-constrained Nvidia: scale without limits. Nvidia, bound by U.S. rules, can only sell “watered-down” chips to China, and even those might get curtailed. Huawei, freed from those restrictions on its domestic market, can throw as many of its own chips as it wants into a solution. The Atlas 950’s 8,192-chip design, for instance, is far larger than anything Nvidia can currently ship to China (Nvidia’s China-only RTX 6000/8000 series GPUs are single cards that Chinese firms have been trying to use in multiples, but authorities are now clamping down on even those) reuters.com. Indeed, Chinese regulators’ latest ban extends to Nvidia’s more budget-oriented AI cards like the RTX 6000/6000D, which some companies were testing as a workaround reuters.com. Huawei is essentially saying: we’ll fill the void with domestic supercomputers built on domestic chips.
It’s worth noting that Nvidia’s supremacy is not just about hardware muscle; it’s also about software and developer support. Huawei has been developing its CANN and MindSpore AI frameworks as counterparts to Nvidia’s CUDA and TensorFlow/PyTorch ecosystems. While those have seen adoption in China, globally they trail far behind. In the short term, Huawei’s gear will likely serve Chinese companies and researchers who are encouraged (or compelled) to use domestic tech. Internationally, Nvidia’s ecosystem lock-in means Huawei faces an uphill battle to win over AI developers – unless geopolitical forces persuade other countries or companies to diversify away from U.S. tech. We are beginning to see some interest in non-Nvidia solutions for cost or strategic reasons (for instance, OpenAI’s CEO Sam Altman recently appeared on stage with AMD’s CEO, signaling an openness to alternative chips reuters.com). But for now, Nvidia’s lead in software, developer mindshare, and sheer market penetration remains formidable.
In summary, Nvidia still holds the crown in AI compute, but Huawei is mounting a serious challenge on multiple fronts: hardware design, memory technology, and system-level integration. The real test will be whether Huawei’s forthcoming Ascend 950/960 can deliver the promised performance in practice and whether Chinese companies embrace these instead of clamoring for Nvidia’s silicon. Given the political winds, Huawei has a strong tailwind in its home market. But globally, Nvidia’s dominance of AI infrastructure – from chips all the way to cloud services (Nvidia GPUs power major cloud providers and supercomputers worldwide) – will not be overturned overnight. We are, however, witnessing the first credible attempt by a Chinese company to stake a claim in the high-end AI computing arena that Nvidia currently rules.
AMD and Intel: U.S. Competitors Bolster Their Arsenal
Huawei isn’t the only one eyeing Nvidia’s AI throne. In the United States, Nvidia’s chief rivals, AMD and Intel, are ramping up their own strategies to capture a bigger slice of the red-hot AI accelerator market. While Huawei’s push is geopolitically charged, AMD and Intel are motivated by commercial and technological competition – yet all share a common target in the overwhelmingly dominant Nvidia.
AMD (Advanced Micro Devices), long the underdog to Nvidia in GPUs, has significantly stepped up its AI game. In June 2025, CEO Lisa Su unveiled AMD’s latest AI roadmap at an event aptly titled “Advancing AI.” AMD introduced its next-generation MI300-series and upcoming MI400-series accelerators, and – notably – it is no longer just selling chips, but complete AI servers. Su showcased a new “Helios” AI server slated for 2026, which will integrate 72 of AMD’s MI400 GPUs in one system reuters.com. This parallels Nvidia’s move of selling systems like the NVL72; in fact, AMD said Helios with 72 GPUs will be “comparable to Nvidia’s current NVL72 servers.” reuters.com It’s a clear bid to go head-to-head on Nvidia’s turf of turnkey AI infrastructure.
AMD also scored a symbolic win: OpenAI’s CEO Sam Altman appeared on stage with Lisa Su, revealing that OpenAI is working with AMD to optimize the design of AMD’s next-gen MI450 chips for AI workloads reuters.com. OpenAI – creator of ChatGPT – is one of Nvidia’s largest customers (its early GPT models were trained on thousands of Nvidia GPUs). So Altman’s presence sent a message that even the industry’s AI pioneers want a multi-vendor strategy. “Our infrastructure ramp-up over the last year… has been crazy to watch,” Altman said, indicating OpenAI’s voracious demand for hardware and willingness to “adopt AMD’s latest chips.” reuters.com This collaboration suggests AMD’s MI450 might be used in OpenAI’s future clusters, which would be a huge validation of AMD’s technology.
Technically, AMD’s approach has some parallels with Huawei’s and some stark differences. Like Huawei, AMD is emphasizing openness and integration. Lisa Su took a subtle swipe at Nvidia’s closed ecosystem, stating “The future of AI is not going to be built by any one company or in a closed ecosystem… it’s going to be shaped by open collaboration across the industry.” reuters.com AMD announced that elements of its new server design (like certain networking standards) would be made openly available and even shared with competitors like Intel reuters.com. This is in contrast to Nvidia’s proprietary NVLink interconnect, which historically locked customers into Nvidia-only systems (though Nvidia has started to license NVLink more recently under pressure) reuters.com. AMD’s bet is that an open ecosystem – where, for example, its GPUs can more easily work with other companies’ CPUs or interconnects – will attract cloud providers and enterprise buyers who dislike vendor lock-in.
AMD has also been on an acquisition and talent spree to bolster its AI capabilities. It acquired networking-chip maker Pensando and, in early 2025, server maker ZT Systems, to help it deliver complete systems like Nvidia does reuters.com. It’s snapped up AI startups (like the team from Untether AI and some from Lamini) and invested in software to improve its ROCm platform – an open alternative to Nvidia’s CUDA reuters.com. All these moves are geared towards making AMD a credible second source for AI accelerators. Analysts note AMD still trails Nvidia in market share and software support, and even AMD’s big announcements didn’t immediately wow investors (AMD’s stock dipped after unveiling its MI350/MI400 lineup, with one analyst saying the new chips were “not likely to immediately change AMD’s competitive position.” reuters.com). Nevertheless, the growing partnership roster – beyond OpenAI, companies like Meta, Oracle, and even Elon Musk’s xAI have expressed interest in or are testing AMD’s MI-series GPUs reuters.com – shows AMD is increasingly seen as the only other game in town for high-end AI chips, especially as Nvidia’s supply is limited and expensive. In fact, a cloud provider called Crusoe recently told Reuters it plans to buy $400 million worth of AMD’s new chips, citing competitive performance per dollar reuters.com. Such deals underscore how AMD is positioning itself as a cost-effective alternative to Nvidia for large-scale AI infrastructure.
Intel, meanwhile, is coming from a different angle. Traditionally dominant in CPUs, Intel missed the boat on GPUs for AI, but it acquired Israeli startup Habana Labs in 2019 and has been developing the Gaudi line of AI accelerators since. The latest Gaudi3 chips are Intel’s answer to Nvidia’s A100/H100 for training neural networks. While Intel’s overall AI market share is small, Gaudi has seen some adoption: by early 2025, Dell Technologies announced it is the first to bring Intel’s Gaudi 3 accelerators to market in its servers crn.com. Dell’s PowerEdge servers can house up to eight Gaudi3 PCIe cards, and Dell touted that Gaudi’s advantages include costing less and using about half the power of leading GPUs for certain workloads crn.com. (Gaudi3 cards have a TDP of ~600W each, versus ~1200W for Nvidia’s highest-end Blackwell GPU boards, according to Dell’s comparisons crn.com. If accurate, that’s a notable power efficiency edge, though performance per watt is the real metric.)
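Dell's power figures make for simple arithmetic. The sketch below uses only the TDP numbers cited above, counts accelerators alone (ignoring CPUs, cooling, and networking), and says nothing about performance per watt:

```python
# Accelerator power budget for an 8-card server, using the TDP figures
# Dell cited (~600 W per Gaudi3 card vs ~1200 W for a top-end Blackwell
# board). Accelerators only; CPUs, fans, and networking are excluded,
# and this says nothing about performance per watt.
CARDS = 8
gaudi3_w, blackwell_w = 600, 1200

gaudi_total = CARDS * gaudi3_w          # 4800 W
blackwell_total = CARDS * blackwell_w   # 9600 W
print(f"Gaudi3 x8: {gaudi_total} W, Blackwell x8: {blackwell_total} W, "
      f"savings: {blackwell_total - gaudi_total} W")
```

Half the TDP per card halves the accelerator power budget of an eight-card chassis – a real data-center cost consideration – but, as the text notes, performance per watt on actual workloads is the metric that ultimately matters.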
Intel is leveraging major server OEMs and cloud partnerships to chip away at Nvidia’s lead. Besides Dell, other vendors like HPE, Supermicro, and IBM have shown interest in Gaudi3 for AI training clusters wccftech.com. In one estimate, Intel’s Gaudi3 platform could secure around 8–9% of the AI training accelerator market by end of 2025 if its rollout succeeds sqmagazine.co.uk. Intel’s selling point is also open standards – Gaudi uses Ethernet networking (rather than Nvidia’s InfiniBand) and is designed for scaling with standard components intel.com. The Gaudi3 systems support frameworks like PyTorch out-of-the-box and target ease of integration into existing data centers (one Dell exec noted Gaudi offers “silicon diversity” and easier capital expense management for enterprises experimenting with AI) crn.com.
Intel has an eye on the future too – it has talked about a next-gen product called Falcon Shores which aims to combine GPU and CPU cores on one chip for AI, though that’s a 2025–2026 plan that may have evolved. More concretely, Intel’s strategy relies on partnering through the channel. Intel executives said having multiple OEM partners for Gaudi (now Dell, HPE, etc., versus only one in the Gaudi2 generation) is “massive progress” and that they count on channel partners to bring Gaudi “to the masses.” crn.com In other words, by riding the distribution networks of big server sellers (which already sell a ton of Intel CPUs), Intel hopes Gaudi can piggyback its way into data centers, especially for customers who either can’t get enough Nvidia GPUs or seek cost-effective alternatives for some tasks (like AI inference or smaller model training).
It’s also worth mentioning other players: for completeness, companies like Google and Amazon have developed their own AI chips (Google’s TPU for training, Amazon’s Trainium and Inferentia for AWS cloud) to reduce dependence on Nvidia. These are mostly used in-house or for cloud clients, but they form part of the competitive landscape. Microsoft is reportedly designing an AI chip for its own needs, and startups abound in this space (Graphcore, Cerebras, etc.), though none have yet dented Nvidia’s momentum significantly. The focus here is on AMD, Intel, Alibaba, and Baidu – but it’s clear that across the board, a multitude of companies are racing to develop AI accelerators, sensing both a lucrative market and, in some cases, national strategic value.
In summary, AMD and Intel are attacking Nvidia from different directions: AMD with a high-end, partnership-driven approach (joining forces with marquee AI labs and matching Nvidia’s yearly release cadence), and Intel with a more power-efficient, integration-friendly approach via data center incumbents. Both still have a lot to prove – Nvidia’s lead in real-world AI workloads and software stack means it remains the default choice for most AI developers. But with explosive demand for AI compute, even a fraction of the market can translate to big business. For instance, AMD’s MI300X GPU, optimized for large language model inference, has been pitched as a memory-rich alternative to Nvidia for deploying models like GPT-4 – and some cloud providers are testing it to save costs. If Huawei is the threat emerging from the East, AMD and Intel represent the competitive pressure mounting in the West on Nvidia. All of this is ultimately good for innovation: after a period of near-monopoly, the AI chip arena is becoming a multi-horse race.
China’s Tech Titans Embrace Homegrown Chips: Alibaba and Baidu
Within China, Huawei is spearheading the hardware charge, but it’s not alone. Other Chinese tech giants – notably Alibaba Group and Baidu – have also been developing their own AI chips, driven by the same twin forces of U.S. export bans and China’s push for indigenous innovation. Recent reports show that these companies are already starting to deploy their in-house chips in AI operations, marking a significant shift in China’s AI ecosystem.
Alibaba, the e-commerce and cloud computing powerhouse, has a dedicated chip design unit (T-Head) which produced the Hanguang 800 inference chip in 2019 and an ARM server CPU (Yitian 710) in 2021. Now Alibaba is working on more advanced AI silicon. In late August 2025, it was reported that Alibaba has developed a new AI chip aimed at AI inference (the type of chip used to run trained AI models efficiently) reuters.com. This chip is said to be more versatile than the company’s previous efforts, targeting a broader range of AI tasks. Importantly, unlike Alibaba’s earlier 5nm AI chip (which was fabricated by TSMC before sanctions), the new chip is manufactured by a Chinese foundry reuters.com reuters.com – a necessity given U.S. curbs. This suggests Alibaba might be using SMIC’s 7nm process or another local solution to make its chips, similar to Huawei’s approach. Alibaba declined comment on those reports, but the writing is on the wall: China’s biggest cloud provider is investing to fill the “Nvidia void” with its own silicon reuters.com.
Indeed, Alibaba is one of Nvidia’s largest customers in Asia – its cloud division has been buying high-end Nvidia GPUs to offer AI services. That’s why the Chinese government’s recent directives hit particularly close to home for Alibaba. Over the summer, according to the Financial Times, Beijing went beyond discouragement and explicitly ordered top tech firms (including Alibaba) to stop purchasing Nvidia’s AI chips and to cancel existing orders reuters.com. This came after the U.S. had already blocked Nvidia’s H100 from China, and even the export-compliant H20 chip faced uncertainty. Alibaba had reportedly been pressured by regulators to avoid the H20 GPU and find alternatives reuters.com. In this climate, Alibaba’s new AI chip project appears partially motivated by necessity: if you can’t reliably buy from Nvidia, you’d better build your own.
And they have started to use them. According to a September 2025 report by The Information, Alibaba has been quietly using its internally designed AI chips to train “smaller AI models” since early 2025, thereby partly replacing Nvidia chips in those tasks reuters.com reuters.com. This is a remarkable development – it means at least for some jobs (perhaps less complex models or inference tasks), Alibaba’s engineers trust their own silicon enough to put it into production use. Meanwhile, Baidu, China’s leading search engine and AI company, is doing the same. Baidu’s chip unit developed the Kunlun AI accelerators (the latest is Kunlun P800), and Baidu is now “experimenting with training new versions of its ERNIE AI model using its Kunlun P800 chip,” the report said reuters.com reuters.com. ERNIE is Baidu’s flagship large language model (akin to a Chinese ChatGPT), so Baidu testing it on Kunlun chips is a strong sign of confidence in its own hardware.
These moves by Alibaba and Baidu underscore a significant strategic shift. As the Reuters report put it, “The move is a significant shift in China’s tech and AI landscape — where companies largely rely on Nvidia’s processors — and would further dent Nvidia’s China business.” reuters.com. An Nvidia spokesperson responded to that report saying, “The competition has undeniably arrived… We’ll continue to work to earn the trust and support of mainstream developers everywhere,” acknowledging that domestic competition is emerging in China reuters.com reuters.com. Nvidia knows it cannot take for granted that Chinese tech giants will keep buying its chips in the face of political pressure and improving local options.
It’s important to note that Alibaba and Baidu have not totally weaned themselves off Nvidia – far from it. The same report noted that neither company has “fully abandoned” Nvidia: they still use Nvidia GPUs for “their most cutting-edge models.” reuters.com In practice, that means the largest, most complex AI training jobs (say, training a GPT-4 scale model or a very high-accuracy vision model) likely still run on Nvidia hardware, because Chinese chips aren’t yet as powerful or as mature in software tooling. Also, any switch from Nvidia to another chip involves porting software and fine-tuning, which takes time. So this is an incremental shift – but a crucial one. It starts with smaller models and inference (where perhaps Alibaba’s chip is “good enough to compete with Nvidia’s H20,” as The Information heard from employees who tested it reuters.com) and will progress to bigger challenges as the domestic tech improves.
The performance gap remains a concern. By Nvidia’s own admission, even the H20 (the cut-down chip for China) “still outpaces Chinese alternatives in performance.” reuters.com And the H20 itself is weaker than Nvidia’s global flagship H100. In other words, Alibaba’s latest chip might roughly match a last-generation Nvidia A100 or the constrained H20, but not the bleeding-edge H100/H200 class. This implies China is still perhaps a couple of years behind in raw chip technology. However, Chinese firms can compensate with volume and specialization, as Huawei’s cluster strategy shows. And for inference tasks (where cost per query and power efficiency are key), a well-designed local chip can be quite competitive even if it is somewhat less powerful, especially if it is cheaper and more readily available. We might soon see Alibaba’s cloud offering AI inference instances powered by Hanguang or whatever this new chip is called, for Chinese enterprise customers who prefer a homegrown solution – or who have no choice due to government policy.
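To make the cost-per-query argument concrete, here is a back-of-envelope sketch in Python. Every figure in it (throughput, chip price, power draw, electricity price) is an invented illustrative assumption, not a published spec from any of the article’s sources; the point is only to show how a slower but cheaper chip can still win on inference economics.

```python
# Hypothetical back-of-envelope comparison: amortized hardware plus energy
# cost of serving inference queries. All numbers below are illustrative
# assumptions, not real chip specifications.

def cost_per_million_queries(queries_per_sec, chip_price_usd, amortize_years,
                             power_watts, usd_per_kwh):
    """Amortized hardware + energy cost (USD) to serve one million queries."""
    seconds_needed = 1_000_000 / queries_per_sec
    hw_cost_per_sec = chip_price_usd / (amortize_years * 365 * 24 * 3600)
    energy_cost_per_sec = (power_watts / 1000) * usd_per_kwh / 3600
    return seconds_needed * (hw_cost_per_sec + energy_cost_per_sec)

# Assumed figures: the imported GPU is ~40% faster but twice the price.
imported = cost_per_million_queries(
    queries_per_sec=1400, chip_price_usd=30_000, amortize_years=3,
    power_watts=700, usd_per_kwh=0.10)
domestic = cost_per_million_queries(
    queries_per_sec=1000, chip_price_usd=15_000, amortize_years=3,
    power_watts=550, usd_per_kwh=0.10)

# Under these assumptions the domestic chip comes out cheaper per query
# despite its lower raw throughput.
print(f"imported GPU:  ${imported:.2f} per 1M queries")
print(f"domestic chip: ${domestic:.2f} per 1M queries")
```

The design choice the sketch illustrates: once hardware price and power dominate serving costs, a deficit in raw performance can be offset by a larger discount in acquisition cost, which is exactly the trade-off the article suggests local chips may exploit.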
Another interesting note: Alibaba’s cloud division recently posted strong growth thanks to AI services. In its April–June quarter, Alibaba’s cloud revenue jumped 26%, beating estimates, “on the back of solid demand” for AI-driven services reuters.com. This indicates Chinese companies are investing heavily in AI – which translates into heavy demand for hardware. If Nvidia chips are restricted, that demand will flow to alternatives like Huawei’s and Alibaba’s chips – a dynamic reflected in stock markets when Chinese semiconductor stocks surged 3–4% after news of the Nvidia chip purchase halt reuters.com. The market sees that halt as opening a captive market for local suppliers.
Beyond Alibaba and Baidu, other Chinese players are also active. Tencent has reportedly worked on AI chips (though less public), and ByteDance (TikTok’s owner) has an AI chip initiative too. In fact, the ban on Nvidia was said to include ByteDance as well, since ByteDance was testing Nvidia’s cards for AI – which likely pushes it toward domestic chips or to companies like Huawei for cloud resources reuters.com reuters.com. The Chinese government has made it clear: they want Chinese tech firms to “turn away from American suppliers” and use “home-grown technology.” reuters.com reuters.com There’s a carrot-and-stick approach: stick in the form of purchase bans and regulatory pressure, carrot in the form of massive government funding. Beijing is launching a new state-backed semiconductor fund of $40 billion to boost chip R&D and manufacturing reuters.com reuters.com, and projects like national data center networks and cloud infrastructure upgrades are being tied to using domestic silicon reuters.com. Even local governments and state-owned firms are being encouraged to adopt Chinese chips in their AI projects.
A tangible example of China’s strategy is a major data center project spotlighted in news recently: a computing hub in Ningxia Province using only domestic chips (including Huawei’s Ascend and Alibaba’s Yitian CPUs) to power cloud services reuters.com. This is portrayed as a proof-of-concept that China can build large-scale computing infrastructure without U.S. technology.
Of course, there are challenges. Chinese chips need to catch up on software support – many AI researchers are trained on Nvidia’s CUDA stack, and switching to a new framework (like Huawei’s MindSpore or Baidu’s PaddlePaddle optimized for Kunlun) involves a real learning curve. Producing these chips at scale is also non-trivial. Huawei’s surprise smartphone chip (the Kirin 9000s in the Mate 60 Pro) was a triumph, but reports say its foundry, SMIC, likely has low yields (perhaps below 50%) on 7nm, meaning costs are high and volumes limited reuters.com. If Huawei plans to ship hundreds of thousands of Ascend AI chips, that will test domestic manufacturing capacity. (A report by Mizuho projected Huawei aims to ship about 700,000 Ascend processors in 2025 scmp.com – a big number that will require either very efficient local production or creative workarounds.)
Nevertheless, the momentum in China is unmistakable: companies are not waiting around. They are pouring resources into developing and using Chinese semiconductors. A senior associate at a Beijing-based tech fund recently quipped that for Chinese tech CEOs, 2023 was a “Sputnik moment” – watching U.S. export controls tighten was like seeing the first satellite launch, spurring them to double down on self-reliance. Huawei’s new AI chip push, along with Alibaba’s and Baidu’s efforts, is the fruit of that realization. As one Chinese professor, Alfred Wu, observed, Beijing is keen to showcase such progress: “China is trying to say that they’re doing very well on many fronts… Xi Jinping will be more confident when speaking with Donald Trump,” referring to the political signaling behind these tech feats reuters.com.
In the next 1-2 years, we’ll likely see Alibaba formally launch its AI inference chip (possibly at its Cloud conference), and Baidu potentially using Kunlun chips to power parts of its AI cloud offerings (Baidu already advertises its Kunlun chip for clients in finance and Internet sectors). If these chips prove effective, they could grab significant domestic market share: for instance, if Alibaba’s chip can handle AI inference at lower cost-per-query than Nvidia, Alibaba Cloud would preferentially deploy it for Chinese customers’ AI applications. That bites into Nvidia’s future growth in China. The ripple effect could extend globally if, say, Alibaba’s chip is available through its cloud to international users of Alibaba Cloud – though U.S. or European customers might be wary of switching to a nascent ecosystem.
In summary, Alibaba and Baidu’s embrace of homegrown chips is a key development in China’s tech decoupling. It shows that the country’s top Internet companies are aligning with Huawei’s hardware push, each contributing in their domain (Huawei in hardware systems, Alibaba/Baidu in cloud services and AI frameworks). This multi-front effort improves China’s odds of building a self-sufficient AI technology stack in the long run. For Nvidia, it’s a one-two punch: not only is Huawei launching a direct competitor in hardware, but its biggest customers in China are gradually reducing dependence on Nvidia’s chips. This trend, if it continues, could reshape the AI chip market in China within a few years, creating a parallel ecosystem largely insulated from U.S. technology. It’s essentially the materialization of what U.S. sanctions sought to prevent – albeit born from those very sanctions.
Geopolitical Ramifications and the Road Ahead
Huawei’s new AI chip offensive cannot be viewed in isolation from the wider geopolitical context. It sits at the intersection of technology and international power dynamics, namely the ongoing U.S.-China “tech war.” The developments here carry profound implications for national security, trade policy, and the future of the global tech supply chain.
First and foremost, Huawei’s breakthrough underscores the limits of sanctions as a tool to halt technological progress – and in some cases, their unintended consequences. The U.S. imposed sweeping export controls on Huawei starting in 2019 (and broadened them in 2020 and 2021), aiming to cut off Huawei from advanced chips and manufacturing. For a while, it appeared devastating: Huawei’s revenues plunged, its smartphone business almost died, and its presence in 5G networks abroad was curbed. Yet, fast forward to 2025, and Huawei not only survived but has rebuilt a pipeline of advanced chips. As one analyst put it, “Fast-forward to 2025, and Huawei has remained resilient in the face of U.S. sanctions.” scmp.com In defiance of expectations, Huawei managed a 7nm chip in the Mate 60 Pro smartphone, shocking industry observers by achieving this without access to EUV lithography reuters.com reuters.com. TechInsights analysts lauded it as a sign of “the resilience of [China’s] chip technological ability” without cutting-edge tools reuters.com. This smartphone launch – done quietly while U.S. Commerce Secretary Gina Raimondo was visiting China – was widely seen as a symbolic victory for Huawei and prompted celebratory headlines in Chinese media reuters.com internationalbanker.com.
However, experts caution that these achievements often come at huge cost and inefficiency, essentially revealing a willingness by China to “pay any price” for tech self-sufficiency. Tilly Zhang of Gavekal (a research firm) noted that Huawei’s 7nm phone chip likely had low yields and very high costs: “They have demonstrated they are willing to accept much higher costs than normally considered worthwhile… only Huawei’s large financial resources and government subsidies allow it to sell phones with these chips at normal prices.” reuters.com reuters.com In other words, Huawei (with state backing) is eating the cost to prove a point and to get the technology rolling. The same may be true for its AI chips – producing thousands of large AI chips domestically might be far more expensive than buying from Nvidia in a free market, but if Beijing foots part of the bill (via subsidies, research grants, cheap loans), Huawei can forge ahead and even price its AI solutions competitively to gain adoption. National interest is effectively subsidizing Huawei’s AI climb, because reducing reliance on U.S. tech is a strategic priority for China.
From the U.S. perspective, Huawei’s advances are likely to prompt soul-searching and possibly a redoubling of restrictions. As Jefferies analysts said after the Mate 60 Pro revelation, “TechInsights’ findings could trigger a probe from the U.S. Commerce Department… and prompt Congress to include even harsher tech sanctions in [upcoming legislation]… Overall the US-China tech war is likely to escalate.” reuters.com reuters.com Indeed, U.S. lawmakers have already been vocal. Some in Congress saw Huawei’s 7nm chip as a sign that current measures have loopholes – such as allowing ASML’s older DUV lithography machines to be sold to China, which SMIC apparently used in clever ways to make 7nm chips reuters.com. We may see moves to further tighten exports of chipmaking equipment (e.g., restricting certain DUV tools or the specialized materials used in advanced nodes). The U.S. has also been considering closing loopholes on chips just below the current performance threshold (for instance, Nvidia’s A800 was just under the limit – new rules could lower that threshold). If Huawei is doubling compute every year on Ascend, U.S. officials might attempt to hamper it by, say, barring any U.S.-origin EDA software updates or probing any foreign partners that assist Huawei. However, as this saga shows, it’s a game of whack-a-mole: China is finding paths around restrictions, even if indirect and costly.
On the flip side, China’s government is seizing the narrative. Huawei’s chip announcements were timed right before a scheduled meeting between President Xi and U.S. President Trump (who is back in office by 2025) reuters.com. Analysts saw it as giving Xi leverage in talks, showing that “China is doing very well on many fronts” despite U.S. pressure reuters.com. Xi can point to Huawei’s progress as evidence that China’s technological trajectory cannot be contained. Alfred Wu of NUS interprets it as boosting Xi’s confidence in negotiations, even if in reality “tensions are quietly escalating” beneath polite diplomacy reuters.com. The Chinese side likely hopes that demonstrating self-reliance will compel the U.S. to reconsider or relax some restrictions, realizing that they’re only forcing China to reinvent the wheel. In practice though, such demonstrations might harden the U.S. resolve to maintain its edge – it’s a double-edged sword.
One interesting twist has been the Trump administration’s approach in 2025. In an unusual development in August, President Trump personally struck a deal granting Nvidia an export license for its H20 chips to China in exchange for a 15% cut of those sales going to the U.S. government reuters.com. This highly unorthodox arrangement (essentially a tariff-like kickback on chip sales) was a sign of both how crucial Nvidia’s China business is and how the U.S. is experimenting with ways to balance national security against business interests. Jensen Huang initially said he wouldn’t support such deals, yet one was struck a few days later reuters.com. Even so, as of mid-September, Nvidia hadn’t shipped any H20 under that deal, and the U.S. hadn’t worked out the mechanism for collecting the 15% fee reuters.com. Now, with China halting purchases, even that deal is up in the air. It highlights a point: U.S. policy on AI chips is not monolithic; there is a tug-of-war between hawks who want total bans and those seeking moderated control (sell slightly weaker chips, perhaps profit from it, and keep some influence). But China’s recent ban essentially said: “We won’t even buy your neutered chips; we’ll focus on our own.” This tit-for-tat escalation – the U.S. restricts exports; China retaliates by restricting imports – has the air of an accelerating “tech decoupling.”
For Nvidia, the geopolitical clash is a serious business risk. The Chinese market contributed about 13% of Nvidia’s revenue last year reuters.com, and demand from China’s big internet companies was a major driver of its growth. Losing that market (or a large portion of it) due to political bans could cost Nvidia billions and slow its breathtaking growth. Nvidia’s stock actually dipped ~3% on the reports of China’s ban reuters.com reuters.com, reflecting investor concern. Jensen Huang’s measured comments in London (saying he’s “disappointed” but “patient” and acknowledging larger political agendas at play) show Nvidia is trying to tread carefully, not antagonize Beijing, while lobbying hard in Washington to maximize what it can sell reuters.com reuters.com. Reuters reported Nvidia has dramatically boosted its lobbying in D.C., spending nearly $1.9 million in the first half of 2025 (triple what it spent in all of 2024) reuters.com reuters.com. The company is fighting on both fronts: seeking exceptions or special arrangements from U.S. regulators, and simultaneously trying to appease Chinese officials by saying Nvidia wants to continue serving China “as they wish.” reuters.com reuters.com But Huang’s candid remark that they are “pawns in a digital Cold War” rings true reuters.com. Ultimately, decisions by governments may force the hand of companies and reshape global tech collaboration.
From a broader lens, the progress of Huawei and its peers has geopolitical ramifications beyond just U.S.-China. It could influence other countries’ stances. For instance, U.S. allies in Europe and Asia have mostly aligned with Washington’s restrictions (e.g., Netherlands and Japan limiting lithography equipment exports, UK and others removing Huawei 5G gear). But if China starts demonstrating parity in key tech, some nations might rethink how much they want to curb ties. Developing countries or those under U.S. sanctions (like Russia or Iran) might see Chinese tech as an attractive alternative to Western tech that is off-limits to them. There’s a soft power element: showcasing a supercomputer or advanced AI built on Chinese chips can be used by Beijing to promote its tech standards and win influence in global forums. Conversely, it will alarm security hawks who fear advanced Chinese AI capabilities could have military applications (e.g., training AI for autonomous drones or surveillance). The Pentagon will be watching whether Huawei’s AI clusters end up powering China’s defense or intelligence projects. In fact, U.S. export controls explicitly cite military end-use as a rationale; Huawei insists its work is commercial, but the lines can blur in dual-use technologies.
Another ramification is the potential fragmentation of the tech ecosystem. We could see a world where there are two parallel AI tech stacks: one dominated by U.S. and allied companies (Nvidia, AMD, Google TPUs, etc., with software like PyTorch/TensorFlow) and one dominated by Chinese companies (Huawei Ascends, Alibaba Hanguang, Baidu Kunlun, with frameworks like MindSpore or PaddlePaddle). They might not interoperate much due to sanction compliance and lack of standardization across the divide. This “technological bifurcation” would be inefficient globally – duplicating R&D and splitting markets – but increasingly plausible if trust erodes and trade barriers persist. Geopolitically, that aligns with a broader U.S.-China decoupling in supply chains, and technologically it means AI progress could happen in silos to some extent, with less cross-pollination of ideas and talent.
On the other hand, crises often spur innovation. The AI chip race now includes one of the world’s most resourceful telecom companies (Huawei), two of the biggest consumer internet companies in China (Alibaba, Baidu), and the established Western semiconductor rivals (AMD, Intel) all challenging Nvidia. This can drive rapid advancements in architecture design, memory tech, and system integration. Global AI advancement might actually accelerate as a result of this competition – albeit with the risk of overlapping efforts and reduced cooperation. Policymakers around the world will have to consider how to manage this competition: whether to impose standards, how to prevent dangerous uses, and whether to try and negotiate limits (for instance, some analysts have floated the idea of an “AI chip treaty” or mutual scaling back of certain bans, but that seems distant at the moment).
Looking ahead, a few key things to watch:
- Can Huawei mass-produce its advanced chips? Announcing is one thing; delivering the Ascend 950 at scale by next year will be the real test. If Huawei succeeds, it implies China’s semiconductor manufacturing (via SMIC, or perhaps chiplet assembly across multiple lower-end fabs) has reached a new level.
- Will Chinese AI firms migrate en masse to domestic hardware? If by 2026 companies like Tencent, ByteDance, JD.com, etc. start publicly announcing use of Huawei/Alibaba chips in their data centers (not just experimentally, but full deployment), that will signal a major industry shift internally.
- How will the U.S. respond? There is already talk of a new round of export controls from the Trump administration targeting even 28nm chip tools or AI software. The U.S. might also double down on supporting its own companies (for example, the CHIPS Act is boosting domestic fab capacity to ensure Nvidia and AMD have supply, and perhaps to allow future chips to be made without TSMC, making sales easier to control). The U.S. could also tighten investment bans – it recently moved to restrict U.S. venture capital from funding Chinese AI chip startups, to slow the flow of know-how and money.
- Geopolitical dialogues: The fact that Xi and Trump are meeting suggests there’s negotiation happening. Chips are almost certainly on the agenda. There could be some agreements (even if temporary) – e.g., China might offer to enforce intellectual property or not use chips for military purposes in exchange for some easing, or the U.S. might use chip exports as a bargaining chip (pun intended) in broader trade deals. It’s high-stakes diplomatic poker, and Huawei’s announcement was likely calculated to strengthen China’s hand.
In conclusion, Huawei’s new AI chip and computing power ambitions are far more than an engineering story; they are a geopolitical statement. They demonstrate China’s determination to join – and eventually lead – the ranks of advanced semiconductor powers, despite concerted efforts by the U.S. to bar the door. As a result, we are witnessing a historic reconfiguration of the global tech landscape. Policymakers, industry leaders, and analysts around the world are reacting with a mix of awe, concern, and resolve. “People assume tensions will be eased… but it’s quietly escalating,” as Alfred Wu observed reuters.com. Each new Huawei chip or Nvidia ban seems to ratchet things up another notch.
For the public and tech enthusiasts, the takeaway is twofold: expect faster innovation as competition heats up – but also expect the AI hardware space to become increasingly politicized. In the coming years, the question “Which chip runs your AI?” might carry geopolitical weight alongside technical merit. Huawei’s latest gambit ensures that this dramatic contest – between Silicon Valley and Shenzhen, between closed and open ecosystems, and between global cooperation and tech nationalism – will be one of the defining stories in technology for the rest of the decade.
People visiting Huawei’s booth at the World AI Conference in Shanghai (July 2025). Huawei’s public reveal of its AI chips and supercomputing plans at this event signaled China’s determination to compete head-on with Nvidia in the AI hardware arena reuters.com reuters.com.
Sources:
- Brenda Goh & Che Pan, Reuters – “China’s Huawei hypes up chip and computing power plans in fresh challenge to Nvidia” (Sept 18, 2025) reuters.com reuters.com
- Che Pan & Brenda Goh, Reuters – “Key products in Huawei’s AI chips and computing power roadmap” (Sept 18, 2025) reuters.com reuters.com
- Brenda Goh & Liam Mo, Reuters – “Huawei shows off AI computing system to rival Nvidia’s top product” (July 26, 2025) reuters.com reuters.com
- Associated Press (via ABC News) – “How Huawei plans to outperform global tech leaders with less powerful chips” (Sept 18, 2025) abcnews.go.com abcnews.go.com
- Deborah Sophia, Reuters – “Alibaba, Baidu begin using own chips to train AI models, The Information reports” (Sept 11, 2025) reuters.com reuters.com
- Deborah Sophia, Reuters – “China’s Alibaba develops new AI chip to help fill Nvidia void – WSJ” (Aug 29, 2025) reuters.com reuters.com
- Max A. Cherney & Stephen Nellis, Reuters – “AMD unveils AI server as OpenAI taps its newest chips” (June 12, 2025) reuters.com reuters.com
- Arsheeya Bajwa, Reuters – “Nvidia CEO Huang caught between US, China’s ‘larger agendas’” (Sept 17, 2025) reuters.com reuters.com
- David Kirton & Max Cherney, Reuters – “Huawei’s new chip breakthrough likely to trigger closer US scrutiny” (Sept 5, 2023) reuters.com reuters.com
- CRN News – “Dell first to market with Intel’s Gaudi 3 AI chips in servers” (Aug 2025) crn.com crn.com