NVIDIA Stock Set to Soar? Latest AI Boom & Stock Forecast Revealed

Nvidia’s AI Empire Unleashed: Mini Supercomputers, Mega-Deals and a Global Chip Arms Race

  • Pocket-Sized AI Supercomputers: Nvidia launched DGX Spark, a petaflop-scale AI supercomputer small enough to carry, hand-delivering the first unit to Elon Musk’s SpaceX [1] [2]. The Grace Blackwell-powered device packs 1 quadrillion operations per second into a book-sized chassis, aiming to put data-center-class AI on any desktop.
  • $100 Billion OpenAI Alliance: Nvidia struck a massive deal to invest up to $100 billion in OpenAI, funding 10 gigawatts of new AI supercomputing for ChatGPT’s creator [3] [4]. The partnership guarantees OpenAI priority access to Nvidia’s latest chips – and locks in a massive long-term customer for Nvidia – raising antitrust alarms that this “AI superpower” duo could unfairly dominate the industry [5] [6].
  • Rivals and New Alliances: In a bid to counter Nvidia’s dominance, OpenAI inked a blockbuster deal with AMD to buy 6 GW of AMD chips (Instinct MI450 GPUs) and secure an option for up to a ~10% stake in AMD [7] [8]. “This is a breakthrough achievement for AMD… a strategic supplier for the leading AI company,” said analyst Patrick Moorhead [9]. Yet OpenAI’s Sam Altman insists they still “need much more compute” from Nvidia too [10] – highlighting insatiable AI demand.
  • Nvidia’s Big AI Bets: Nvidia is bankrolling up-and-comers in the AI race. It led a $2 billion funding round for startup Reflection AI, founded by ex-DeepMind researchers and now valued at $8B [11] [12]. And Elon Musk’s new venture xAI is reportedly raising $20 billion with Nvidia contributing chips and about $2B in financing [13] [14] – aligning Nvidia with Musk’s effort to challenge OpenAI.
  • Enterprise Partnerships Boom: Nvidia and Oracle announced a partnership to power “sovereign AI” in government clouds, showcasing Abu Dhabi’s AI-transformed digital government using Nvidia hardware and software [15] [16]. Nvidia also joined Microsoft, BlackRock and others in a $40 billion consortium buyout of Aligned Data Centers to secure critical AI cloud capacity [17] [18] – one of the largest data-center acquisitions ever.
  • Tech Breakthroughs: Nvidia’s next-gen Blackwell GPUs are shattering AI benchmarks, delivering up to 15× performance gains over the previous generation in new inference tests [19] [20]. The company also unveiled a radical 800-volt data center power architecture, using GaN and SiC chips co-developed with Navitas to boost efficiency and cut energy loss in “AI factories” [21]. In robotics, Nvidia open-sourced its Newton physics engine (built with Google DeepMind) and introduced an open AI “brain” for humanoid robots, saying “humanoids are the next frontier of physical AI” and that these tools will help bring robots “from research into everyday life” [22] [23].
  • Regulatory Crossroads: Geopolitical maneuvering around Nvidia is intensifying. In Washington, officials relaxed export curbs to allow $10+ billion worth of Nvidia’s AI chips to the UAE under a new pact – provided they’re used in U.S.-operated cloud centers [24] [25]. Meanwhile, China has reportedly ordered tech firms to halt purchases of certain Nvidia AI chips and is tightening port inspections to enforce its own chip import bans [26] [27]. With Nvidia now at the heart of U.S.–China tech tensions, CEO Jensen Huang finds himself “caught between” dueling national agendas [28] [29].

AI Alliances and Rivalries: OpenAI, xAI, and a $100B Power Play

Nvidia’s dominance in AI computing has led to both blockbuster partnerships and fierce competition in recent weeks. The most dramatic development is Nvidia’s sweeping alliance with OpenAI – a deal valued at up to $100 billion that intertwines the leading AI chip supplier with the maker of ChatGPT [30] [31]. Under the agreement, announced late September, Nvidia will take a sizable (non-voting) stake in OpenAI and in return OpenAI commits to buying Nvidia’s cutting-edge GPUs to build out an unprecedented 10 gigawatts of AI supercomputing capacity [32] [33]. “Everything starts with compute,” OpenAI CEO Sam Altman said, emphasizing that vast processing power is key to future breakthroughs [34]. Industry analysts have noted the circular nature of this pact – essentially Nvidia is funding OpenAI so OpenAI can spend that money on Nvidia hardware, locking in a huge long-term customer [35]. Stacy Rasgon, a veteran chip analyst, called it an unusual strategy that virtually guarantees demand for Nvidia’s products, even as it raises eyebrows among regulators [36].

Indeed, the antitrust concerns were immediate. “This raises significant antitrust concerns,” warned lawyer Andre Barlow, noting that pairing the top AI chip maker with the top AI software player could squeeze out others [37]. U.S. regulators have reportedly opened inquiries into whether Nvidia, OpenAI (and OpenAI’s backer Microsoft) are leveraging their dominance to stifle competition [38]. A Department of Justice official cautioned that authorities won’t allow powerful firms to “foreclose access to key inputs” like AI chips through such alliances [39]. For now, the Nvidia–OpenAI mega-deal is a partnership (not a merger), but it has clearly blurred traditional boundaries and put rivals on notice.

One rival rising to the challenge is AMD. In early October, OpenAI also unveiled a surprise multi-year partnership with AMD, Nvidia’s chief competitor in advanced chips [40] [41]. OpenAI committed to purchase about 6 gigawatts of AMD GPUs over the next few years – starting with a gigawatt of new MI450 accelerators in 2026 – and secured an option to take up to a 10% equity stake in AMD as those chips roll out [42]. “Our view is that we’re nowhere near the top of the demand curve…this is the foundation of a long-term growth cycle for AI,” AMD executive Forrest Norrod said of the deal [43], underscoring that OpenAI’s ravenous need for hardware will support multiple suppliers. Analysts described it as a “breakthrough” that instantly makes AMD a strategic chip provider to a top AI firm [44] – a role Nvidia alone used to enjoy.

At the same time, Elon Musk’s new startup xAI is making waves of its own – with Nvidia’s help. Musk, a vocal critic of OpenAI, has assembled investors (including Nvidia, Microsoft and others) to finance xAI’s ambitious plan to build AI systems rivaling ChatGPT. According to Bloomberg and Reuters, xAI’s fundraising target has swelled to $20 billion, and Nvidia is contributing around $2 billion plus access to its GPUs for xAI’s compute needs [45] [46]. This hardware-backed financing model shows how Nvidia is seeding multiple players in the AI arena. In another major bet, Nvidia’s venture arm led a $2 billion funding round for Reflection AI, a young startup founded by ex-DeepMind researchers Misha Laskin and Ioannis Antonoglou [47]. That round, one of the largest ever for an AI startup, catapulted Reflection AI’s valuation from just ~$500M to $8B in mere months [48] [49]. Reflection’s mission is developing “superintelligent” AI agents (initially to automate coding), with an open-source ethos aimed at keeping pace with Chinese labs [50]. Nvidia reportedly chipped in roughly $250–$500M of the investment [51] – a clear sign it wants to nurture startups that will inevitably require enormous GPU resources.

These high-profile deals illustrate the frenzied race in AI: leading firms are trading giant sums of cash, equity and hardware access to secure future dominance. Over half of all venture capital dollars in 2025 have flowed into AI startups [52], and even OpenAI itself raised an unprecedented $40B private round earlier this year [53]. “The AI industry’s equivalent of the gold rush” is how analyst Holger Mueller describes the current boom, “with companies hunting for computing resources rather than precious metals.” [54] Yet amid the exuberance, there are cautionary voices warning of a bubble. “Any startup with an AI label will be valued right up there at huge multiples…that might be fair for some and probably not for others,” one tech investor noted skeptically [55]. For Nvidia, however, the strategy is clear: bet on every horse in the AI race, so that whoever wins will further fuel demand for Nvidia’s GPUs.

New Hardware: From Mini Supercomputers to 800V “AI Factories”

Even as it inks mega-deals, Nvidia continues to push the envelope on AI hardware and software. This week the company officially rolled out NVIDIA DGX Spark, billing it as “the world’s smallest AI supercomputer.” CEO Jensen Huang literally delivered the first unit by hand to Elon Musk at SpaceX’s Starbase in Texas [56] [57]. Roughly the size of a thin hardcover book and weighing just 1.2 kg, DGX Spark nonetheless contains a full petaflop of AI performance (at FP4 precision) thanks to Nvidia’s new Grace-Blackwell “GB10” Superchip [58]. That chip fuses a 20-core Arm-based Grace CPU with a Blackwell-family GPU via high-speed NVLink, along with 128GB of unified memory – essentially packing a data center’s might into a lunchbox PC [59] [60]. “Imagine delivering the smallest supercomputer next to the biggest rocket,” Huang quipped during the Musk meetup [61]. The DGX Spark is intended for developers, researchers and creators to run advanced AI models at their desk without needing a cloud cluster [62] [63]. Early partners like Dell, HP, Lenovo and ASUS have announced mini-systems based on the Spark design, offering petaflop desktops that can even be clustered together for larger workloads [64] [65]. By putting “AI within arm’s reach” of innovators everywhere [66], Nvidia aims to accelerate AI development at the edge and maintain its grip on the AI computing ecosystem.
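
For a rough sense of what those headline specs imply, the back-of-envelope arithmetic below is a minimal sketch using only the figures quoted above (128GB of unified memory, FP4 precision, one petaflop); the fraction of memory reserved for activations is an assumption for illustration, not an Nvidia specification.

```python
# Back-of-envelope sizing for a DGX Spark-class desktop box (illustrative only).
# Assumptions: model weights stored at 4-bit (FP4) precision (~0.5 bytes per
# parameter) and ~20% of unified memory reserved for activations and KV cache.

UNIFIED_MEMORY_GB = 128      # unified memory quoted for the device
BYTES_PER_PARAM_FP4 = 0.5    # 4 bits per weight
RESERVE_FRACTION = 0.20      # assumed headroom for activations / KV cache

usable_bytes = UNIFIED_MEMORY_GB * 1e9 * (1 - RESERVE_FRACTION)
max_params = usable_bytes / BYTES_PER_PARAM_FP4

print(f"Usable memory for weights: ~{usable_bytes / 1e9:.0f} GB")
print(f"Largest FP4 model that fits: ~{max_params / 1e9:.0f}B parameters")

# One petaflop of FP4 compute is 1e15 operations per second.
print(f"Peak FP4 throughput: {1e15:.0e} ops/sec")
```

Under those assumptions, roughly 200 billion 4-bit weights fit in memory – broadly in line with the ~200-billion-parameter model class Nvidia has promoted for the device, and the basis of its pitch as a desktop alternative to renting cloud GPUs for local model development.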

Nvidia’s latest Blackwell generation of chips is also proving its mettle in industry benchmarks. This month Blackwell-based systems dominated a new InferenceMAX v1 test by SemiAnalysis, outperforming the prior-generation Hopper GPUs by an order of magnitude in efficiency [67] [68]. One report noted Blackwell delivered up to 15× the performance of Hopper in certain AI inference tasks [69], thanks to features like faster memory and new low-precision tensor cores. Nvidia has touted these results as evidence that its pace of innovation isn’t slowing – critical as competitors like Google’s TPUs or startups like Cerebras vie for attention. The Grace Hopper GH200 superchip (which pairs an Arm-based Grace CPU with a Hopper GPU) is also in full production, targeting giant AI training jobs. And on the software side, Nvidia’s latest TensorRT-LLM libraries and open-source tools are helping squeeze more speed from its silicon, ensuring that owning Nvidia hardware comes with a state-of-the-art software stack.

Beyond chips and boards, Nvidia is rethinking the physical infrastructure of AI data centers. On October 13, it revealed a bold new 800-Volt DC architecture for powering large “AI factories” – a major leap from today’s 54V rack power standards [70] [71]. In collaboration with power semiconductor specialist Navitas, Nvidia developed advanced gallium nitride (GaN) and silicon-carbide (SiC) power modules that can directly feed GPUs at 800V. This design eliminates many conversion steps, improving efficiency and allowing much higher energy density in server racks [72] [73]. Navitas’ CEO Chris Allexandre said his company is “proud to support this shift” to 800V with custom GaN/SiC solutions that meet the massive currents and reliability needs of next-gen AI data centers [74]. By moving from legacy 54V distribution to 800V, Nvidia claims it can slash power losses and support megawatt-scale racks for the first time [75] [76]. Given that electricity costs and power delivery are becoming limiting factors in AI supercomputing, this kind of innovation could give Nvidia-equipped facilities a practical edge in performance-per-watt.
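
The efficiency case for 800V is mostly Ohm’s law: for the same delivered power over the same busbar resistance, current falls in proportion to voltage and resistive (I²R) loss falls with the square of voltage. The sketch below illustrates that scaling; the rack load and busbar resistance are arbitrary assumptions for illustration, not figures from Nvidia or Navitas.

```python
# Why higher-voltage DC distribution wastes less power in the busbars.
# The load and resistance values below are illustrative assumptions only.

def bus_loss_watts(load_kw: float, bus_voltage: float, resistance_ohm: float) -> float:
    """Resistive loss in the distribution path: P_loss = (P / V)^2 * R."""
    current_amps = load_kw * 1_000 / bus_voltage
    return current_amps ** 2 * resistance_ohm

RACK_LOAD_KW = 120           # assumed per-rack load for a dense AI rack
BUS_RESISTANCE_OHM = 0.001   # assumed 1 milliohm end-to-end distribution resistance

for volts in (54, 800):
    amps = RACK_LOAD_KW * 1_000 / volts
    loss = bus_loss_watts(RACK_LOAD_KW, volts, BUS_RESISTANCE_OHM)
    print(f"{volts:>4} V bus: {amps:7.0f} A, resistive loss ~ {loss:7.1f} W")

# Loss scales as 1/V^2, so moving from 54 V to 800 V cuts busbar I^2R loss
# to (54/800)^2, roughly 0.5% of its former value at the same delivered power.
```

In practice the claimed gains also come from removing AC-DC and DC-DC conversion stages, but this quadratic reduction in distribution loss is the basic reason megawatt-class racks become more practical at higher voltages.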

Nvidia is also extending its portfolio beyond GPUs into networking and interconnects to tackle bottlenecks in AI clusters. The company’s Spectrum-X Ethernet networking gear – optimized for AI data flow – is now being adopted by cloud giants like Meta and Oracle [77]. Spectrum-X switches and adapters can reduce AI training times by speeding up communication between servers, an area traditionally dominated by firms like Cisco or Mellanox (which Nvidia acquired). By providing end-to-end solutions (compute, networking, and even power), Nvidia is positioning itself as a one-stop platform for AI infrastructure.

On the software and AI frameworks front, Nvidia used its recent developer forums to double down on open-source and community collaboration. It open-sourced the Newton physics engine for robotics – a GPU-accelerated simulator co-developed with Google DeepMind and Disney that can model complex real-world interactions (like a bipedal robot walking on sand) with high fidelity [78] [79]. Newton is now managed by the Linux Foundation, inviting researchers globally to contribute [80]. Alongside that, Nvidia introduced Isaac GR00T (pronounced “Groot”) – a new open foundation AI model that gives robots a form of common-sense reasoning. Integrated with a vision-language model called Cosmos Reason, the system helps humanoid robots break down ambiguous instructions into actionable steps using prior knowledge and physics understanding [81] [82]. “Humanoids are the next frontier of physical AI, requiring the ability to reason, adapt and act safely in an unpredictable world,” said Rev Lebaredian, Nvidia’s VP of simulation technology [83]. “With these latest updates, developers now have the three computers to bring robots from research into everyday life – with Isaac GR00T serving as the robot’s brains, Newton simulating their body, and NVIDIA Omniverse as their training ground.” [84] By giving away more of its AI toolset to researchers, Nvidia hopes to spur advances (and ultimately drive demand for its hardware) across robotics, edge computing, and beyond.

Enterprise Moves: Cloud Deals, Data Centers and New Markets

As AI adoption explodes across industries, Nvidia is cementing strategic partnerships to ensure its technology underpins as many deployments as possible. At Oracle’s recent CloudWorld event, Nvidia and Oracle announced an expanded collaboration to deliver “sovereign cloud” AI solutions for governments [85]. A showcase project is underway in Abu Dhabi, where Oracle’s cloud infrastructure combined with Nvidia’s AI platforms is powering an ambitious plan to make the UAE capital’s government fully AI-native by 2027 [86]. Backed by a 13 billion AED (~$3.5B) investment, Abu Dhabi’s government is rolling out secure OCI Dedicated Regions (essentially local Oracle cloud zones) equipped with Nvidia GPUs and software to automate hundreds of public services [87] [88]. Citizens are beginning to see benefits like AI assistants in 15 languages, instant eligibility checks for benefits, and automated permit approvals, all running on Nvidia-accelerated infrastructure managed entirely within the country’s borders [89] [90]. “The Abu Dhabi Government Digital Strategy reflects our vision of being an AI-native government, seamlessly integrating AI across all systems,” said DGE chairman Ahmed Al Kuttab [91]. For Nvidia, which provides the GPUs and AI software stack, this project serves as a blueprint to entice other governments and enterprises that demand data sovereignty. Oracle, for its part, is using Nvidia’s tech as a selling point for its cloud – a notable win for Nvidia given Oracle’s vast public-sector footprint.

Meanwhile, Nvidia is literally buying into the cloud to secure capacity for itself and its partners. Just this week, a consortium called the AI Infrastructure Partnership (AIP) – led by Microsoft, Nvidia, and BlackRock – announced plans to acquire Aligned Data Centers for a staggering $40 billion [92]. Aligned is a major data center operator with 50 facilities across the U.S., Latin America and beyond, totaling about 5 gigawatts of capacity [93] [94]. The deal, one of the largest ever in the data center industry, is aimed at locking down critical server space and power for AI as demand far outstrips supply. The AIP consortium was formed last year by investors including BlackRock’s infrastructure arm, sovereign funds from Abu Dhabi and Singapore, and tech players like Microsoft; Nvidia and Elon Musk’s xAI joined in March [95] [96]. By owning Aligned’s facilities, these members ensure their AI ventures (be it Microsoft’s Azure, Nvidia’s cloud services, or xAI’s upcoming endeavors) won’t be bottlenecked by data center shortages. “It’s about advancing the infrastructure needed to power the future of AI,” the group said, describing the acquisition as akin to securing the picks and shovels of the AI gold rush [97] [98]. Holger Mueller noted that big tech firms are now pouring billions into the backend plumbing of AI – not unlike railroads or oil pipelines in past eras – to make sure they can train and deploy ever-larger models [99].

Nvidia’s influence is also spreading into new markets through unconventional partnerships. In a surprising crossover between competitors, Nvidia agreed to invest $5 billion in Intel, the struggling CPU giant, as part of a collaboration announced in mid-September [100] [101]. Intel’s new CEO, Lip-Bu Tan, posted a photo with Jensen Huang as the two companies unveiled a plan to connect Intel’s x86 processors with Nvidia’s AI accelerators more tightly using Nvidia’s NVLink interconnect [102]. Specifically, they will co-develop “custom data center and PC chips” that integrate Intel CPUs and Nvidia GPUs on one package [103]. Huang said the partnership will allow Nvidia to scale up its high-end systems (combining 72 GPUs with custom CPUs in a single rack) and also to attack the PC market by creating hybrid CPU-GPU system-on-chips for laptops [104]. “There are 150 million laptops sold per year,” Huang noted, hinting at a huge new opportunity. “We’re now creating a system-on-a-chip that fuses two processors into one giant SoC… a new class of integrated laptops that the world has never seen before.” [105] By teaming up with Intel, Nvidia can broaden its reach beyond GPUs – and interestingly, the deal came right as the U.S. government took a 10% stake in Intel as part of CHIPS Act aid [106]. Analysts like Patrick Moorhead observed that Nvidia’s investment likely scores political points for supporting an American chip peer [107]. But more importantly, it shows Nvidia’s strategic shift: it is no longer content to be “just” a GPU maker, but is evolving into a platform company spanning CPUs, networking, and full-stack systems.

Regulatory and Geopolitical Tensions Escalate

Nvidia’s breakneck expansion is unfolding against a tense geopolitical backdrop. As the indispensable supplier of advanced AI chips, Nvidia has become a focal point in the tech standoff between the U.S. and China. In October, Beijing moved to clamp down further on Nvidia’s products: Chinese regulators reportedly ordered tech giants like Alibaba and ByteDance to stop purchasing certain Nvidia AI chips, according to financial media reports [108]. Authorities have also tightened inspections at ports to enforce earlier bans on exporting high-end U.S. semiconductors to China [109]. Essentially, China wants to close loopholes that allowed prohibited Nvidia GPUs (like the A100/H100 series) to still find their way into the country. The Chinese government, facing its own lagging domestic chip capabilities, framed the move as fighting “discriminatory practices” by the U.S. while urging dialogue to maintain supply chain stability [110]. Nvidia CEO Jensen Huang – whose company historically derived up to a quarter of its data center chip sales from China – has found himself in an uneasy middle ground. “Successive U.S. administrations have restricted China’s access to advanced chips, while Beijing has responded by pressing its tech firms to cut reliance on us,” Huang told an industry forum, describing being “caught between larger agendas.” [111] [112] So far, Nvidia has navigated the situation by producing modified “China-only” chip models (with capped performance) to comply with U.S. export rules, but further decoupling by China could hit future sales.

In Washington, the policy winds have shifted with a new administration this year, leading to some surprising exceptions to the chip export curbs. Earlier this month, the U.S. Commerce Department granted Nvidia a special license to ship tens of billions of dollars’ worth of AI GPUs to the United Arab Emirates (UAE) [113] [114]. This came as part of a broader tech cooperation agreement tied to the UAE’s commitment to invest $1.4 trillion in U.S. projects over the next decade [115]. Notably, the license allows American cloud providers (like Oracle) to operate Nvidia’s top-tier chips within the UAE for local AI initiatives – a workaround to blanket bans, since the hardware will remain under U.S. oversight [116] [117]. However, sales to UAE’s G42 (an Abu Dhabi AI company with ties to China) remain restricted for now [118] [119]. The deal illustrates a new bilateral approach to export controls: instead of forbidding all advanced chip sales to entire regions, the U.S. is striking country-specific deals (UAE, and reportedly others like India and Israel) that permit Nvidia to supply allies in exchange for assurances that the tech won’t be diverted to rivals [120] [121]. Commerce Secretary Howard Lutnick hinted this may be a template for balancing national security with commercial interests, as the U.S. tries to maintain its edge in the global AI race.

At the same time, U.S. lawmakers are debating measures to prioritize domestic needs amid the GPU shortage. In September, a Senate bill was floated that would force Nvidia and peers to give U.S. buyers first priority on new AI chip supplies before selling overseas [122]. Nvidia publicly pushed back, with one executive calling it “‘doomer’ science fiction” to think America lacks GPUs because of exports [123]. The company emphasized it is not depriving U.S. customers to serve the rest of the world – rather, the sheer demand from all sectors is overwhelming supply. Indeed, despite geopolitical limits, Nvidia has been selling every H100 GPU it can produce, and still has months-long backorders from cloud providers and enterprises globally.

Finally, Nvidia’s $100B OpenAI tie-up has invited regulatory scrutiny beyond antitrust. In Europe and the U.K., officials are watching closely given Nvidia’s central role in the AI value chain. The deal’s announcement landed amid ongoing international debate over AI governance, including whether dominant compute providers could become “choke points” for AI development. Whether any formal investigation materializes remains to be seen, but Nvidia’s moves are clearly on the radar of policy makers worried about too much AI power concentrating in a few hands.

Conclusion

From splashy hardware launches to eye-popping investment deals, Nvidia has spent the past few weeks reinforcing why it sits at the center of the AI universe in 2025. The company is simultaneously a supplier, investor, partner – even a kingmaker – in nearly every major AI initiative unfolding today. By seeding startups with cash and GPUs, rolling out new superchips and software, and aligning with cloud and government heavyweights, Nvidia is extending its influence far beyond its Silicon Valley roots.

Yet this rapid expansion comes with high stakes. Nvidia’s strategic maneuvers have set off an international tech arms race, forcing allies and adversaries alike to respond – whether it’s AMD vying for OpenAI’s business, or China scrambling to replace Nvidia’s silicon, or regulators pondering new guardrails. As CEO Jensen Huang often notes, we are in an “era of exponential AI”, and Nvidia intends to both fuel and benefit from that exponential growth. The latest developments show a company at full throttle: breaking new ground technically, chasing ever-larger opportunities, and navigating geopolitical minefields, all at once. How long Nvidia can sustain this breakneck pace remains to be seen, but for now, its momentum in the AI arena appears unstoppable.

Sources: TS2 [124] [125]; Wired [126] [127]; Bloomberg [128] [129]; Reuters [130] [131]; NVIDIA Newsroom [132] [133]; SiliconANGLE [134] [135]; TS2 [136] [137]; NVIDIA Blog [138].

References

1. blogs.nvidia.com, 2. blogs.nvidia.com, 3. ts2.tech, 4. ts2.tech, 5. ts2.tech, 6. ts2.tech, 7. www.wired.com, 8. www.wired.com, 9. www.wired.com, 10. www.wired.com, 11. ts2.tech, 12. ts2.tech, 13. www.reuters.com, 14. finance.yahoo.com, 15. blogs.nvidia.com, 16. blogs.nvidia.com, 17. siliconangle.com, 18. siliconangle.com, 19. blog.vllm.ai, 20. www.ainvest.com, 21. ts2.tech, 22. nvidianews.nvidia.com, 23. nvidianews.nvidia.com, 24. www.tomshardware.com, 25. www.tomshardware.com, 26. www.reuters.com, 27. www.reuters.com, 28. www.tomshardware.com, 29. www.wired.com, 30. ts2.tech, 31. ts2.tech, 32. ts2.tech, 33. ts2.tech, 34. ts2.tech, 35. ts2.tech, 36. ts2.tech, 37. ts2.tech, 38. ts2.tech, 39. ts2.tech, 40. www.wired.com, 41. www.wired.com, 42. www.wired.com, 43. www.wired.com, 44. www.wired.com, 45. www.reuters.com, 46. finance.yahoo.com, 47. ts2.tech, 48. ts2.tech, 49. ts2.tech, 50. ts2.tech, 51. ts2.tech, 52. ts2.tech, 53. ts2.tech, 54. siliconangle.com, 55. ts2.tech, 56. blogs.nvidia.com, 57. blogs.nvidia.com, 58. blogs.nvidia.com, 59. blogs.nvidia.com, 60. blogs.nvidia.com, 61. blogs.nvidia.com, 62. blogs.nvidia.com, 63. blogs.nvidia.com, 64. blogs.nvidia.com, 65. blogs.nvidia.com, 66. blogs.nvidia.com, 67. blog.vllm.ai, 68. www.ainvest.com, 69. www.ainvest.com, 70. ts2.tech, 71. ts2.tech, 72. ts2.tech, 73. ts2.tech, 74. ts2.tech, 75. ts2.tech, 76. ts2.tech, 77. nvidianews.nvidia.com, 78. nvidianews.nvidia.com, 79. nvidianews.nvidia.com, 80. nvidianews.nvidia.com, 81. nvidianews.nvidia.com, 82. nvidianews.nvidia.com, 83. nvidianews.nvidia.com, 84. nvidianews.nvidia.com, 85. blogs.nvidia.com, 86. blogs.nvidia.com, 87. blogs.nvidia.com, 88. blogs.nvidia.com, 89. blogs.nvidia.com, 90. blogs.nvidia.com, 91. blogs.nvidia.com, 92. siliconangle.com, 93. siliconangle.com, 94. siliconangle.com, 95. siliconangle.com, 96. siliconangle.com, 97. siliconangle.com, 98. siliconangle.com, 99. siliconangle.com, 100. www.wired.com, 101. www.wired.com, 102. www.wired.com, 103. www.wired.com, 104. www.wired.com, 105. www.wired.com, 106. www.wired.com, 107. www.wired.com, 108. www.reuters.com, 109. www.reuters.com, 110. www.reuters.com, 111. www.tomshardware.com, 112. www.wired.com, 113. www.tomshardware.com, 114. www.tomshardware.com, 115. www.tomshardware.com, 116. www.tomshardware.com, 117. www.tomshardware.com, 118. www.tomshardware.com, 119. www.tomshardware.com, 120. www.tomshardware.com, 121. www.tomshardware.com, 122. www.tomshardware.com, 123. www.tomshardware.com, 124. ts2.tech, 125. ts2.tech, 126. www.wired.com, 127. www.wired.com, 128. www.tomshardware.com, 129. www.tomshardware.com, 130. www.reuters.com, 131. www.reuters.com, 132. blogs.nvidia.com, 133. blogs.nvidia.com, 134. siliconangle.com, 135. siliconangle.com, 136. ts2.tech, 137. ts2.tech, 138. nvidianews.nvidia.com
