- 6 Gigawatts of AI Chips: OpenAI and AMD announced a multi-year partnership for OpenAI to deploy 6 gigawatts of AMD’s AI GPUs across multiple future chip generations (starting with 1 GW of MI450 chips in 2026) – a colossal scale involving hundreds of thousands of processors [1]. AMD expects the deal to generate “tens of billions of dollars” in annual revenue from selling these chips to OpenAI [2].
- OpenAI’s 10% Stake Option: As part of the agreement, AMD granted OpenAI a warrant to purchase up to ~160 million AMD shares (around 10% of the company) at a mere $0.01 per share, vesting in tranches tied to chip delivery milestones and ambitious stock-price targets (culminating at $600/share) [3]. This unusual equity sweetener means OpenAI could eventually acquire a ~10% stake in AMD for just $1.6 million, if all targets are met [4]. AMD’s CFO noted the structure aligns incentives – OpenAI only gains that stake if the deployments succeed and AMD’s stock soars [5].
- Market Reaction – AMD Soars, Nvidia Slips: Investors cheered the OpenAI deal, sending AMD’s stock price up over 30% (its biggest one-day jump in 9 years) and adding roughly $80 billion to AMD’s market value [6]. In contrast, Nvidia’s stock dipped slightly on fears of a new formidable rival in AI chips [7]. AMD executives called the deal “transformative…for the dynamics of the industry” [8], as it positions AMD to challenge Nvidia’s dominance in artificial intelligence hardware.
- Diversifying Beyond Nvidia: OpenAI’s move to AMD reduces its heavy reliance on Nvidia, which today supplies ~90% of AI accelerator chips [9]. Earlier in 2025, OpenAI had inked a $100 billion deal with Nvidia to secure 10 GW of Nvidia systems (including next-gen “Vera Rubin” GPUs) [10]. By adding AMD as a “core strategic compute partner”, OpenAI is hedging its bets with multiple suppliers – including co-designing custom AI chips with Broadcom – to ensure it can get enough silicon to train ever-larger AI models [11].
- AI “Mega-Blob” of Entangled Alliances: The OpenAI–AMD pact is part of a broader AI industry mega-blob – a tangle of massive partnerships and cross-investments among tech giants, chipmakers, and investors [12]. OpenAI’s compute deals now span over a trillion dollars on paper, including Nvidia’s plan to invest up to $100 billion in OpenAI (with OpenAI buying Nvidia’s chips) [13] and Oracle’s stunning $300 billion cloud pact for OpenAI’s data centers [14]. Microsoft has poured more than $13 billion into OpenAI and integrates its AI into Office and Bing, while OpenAI is also backed by billions from SoftBank and UAE sovereign investors to fund giant “Stargate” AI supercomputers [15]. Critics warn this web of funding may resemble a “shell game” of circular investments [16], but proponents say it’s building an unprecedented foundation for AI advancement.
- High Stakes, Big Risks: This mega-deal gives AMD a foothold in AI, but AMD is effectively betting the house on OpenAI’s success. OpenAI’s warrant could dilute AMD’s shareholders by 10% if AMD’s stock hits stratospheric levels – a “high price” to pay for growth as analysts note [17]. There are execution risks too: AMD must deliver cutting-edge chips on time while Nvidia races ahead with next-gen GPUs that could outclass AMD’s by 2026 [18]. Supply chain bottlenecks (like shortages of advanced HBM memory for these GPUs) and U.S. export curbs on China are looming challenges [19]. Moreover, the energy footprint of a 6 GW AI supercluster – about the output of six nuclear reactors – could draw environmental and regulatory scrutiny [20]. If any part of this interlinked AI “mega-blob” stumbles, the shock could reverberate across the tech sector [21].
A $100 Billion+ Chip Deal: OpenAI’s Bet on 6 GW of AMD GPUs
OpenAI’s new partnership with AMD, announced on October 6, 2025, is staggering in scope – even by the outsized standards of the AI industry. OpenAI will buy up to 6 gigawatts of AMD’s Instinct data center GPUs over the next several years [22]. To put that in perspective, 6 GW of computing is on the order of hundreds of thousands of high-end chips, enough to fill multiple giant data centers devoted entirely to artificial intelligence workloads [23]. Sam Altman, OpenAI’s CEO, said AMD will be a “core strategic compute partner” providing the massive capacity needed to scale future AI models [24].
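To make “gigawatts of compute” slightly more concrete, here is a rough back-of-envelope sketch of what a single gigawatt of facility power might support. The per-accelerator wattage, system overhead, and cooling figures below are illustrative assumptions rather than disclosed deal terms, so the output is an order-of-magnitude estimate, not a count of what OpenAI will actually install.

```python
# Back-of-envelope: how many accelerators might one gigawatt of facility power support?
# Every per-chip figure below is an illustrative assumption, not a disclosed deal term.

facility_power_w = 1e9        # one gigawatt of data-center capacity (the size of the first tranche)
gpu_tdp_w = 1_000             # assumed power draw of one high-end AI accelerator (watts)
system_overhead = 1.5         # assumed multiplier for host CPUs, networking, and memory
pue = 1.2                     # assumed power-usage effectiveness (cooling, conversion losses)

all_in_power_per_gpu = gpu_tdp_w * system_overhead * pue   # ~1.8 kW per accelerator, all-in
accelerators_per_gw = facility_power_w / all_in_power_per_gpu

print(f"Roughly {accelerators_per_gw:,.0f} accelerators per gigawatt "
      f"(~{all_in_power_per_gpu / 1000:.1f} kW each, all-in)")
# => Roughly 555,556 accelerators per gigawatt (~1.8 kW each, all-in)
```

Under these assumptions a single gigawatt already corresponds to several hundred thousand accelerators, which is why the full 6 GW commitment is described as filling multiple dedicated data centers.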
Under the deal, OpenAI committed to a first tranche of 1 GW of AMD’s upcoming MI450 accelerators, slated for deployment in late 2026 [25] [26]. The agreement then spans several GPU generations – likely including AMD’s MI470 and MI500 series later on [27]. In essence, OpenAI is locking in a long-term supply of cutting-edge chips from AMD to power everything from its next GPT models to advanced research like the new “Sora” AI video generator [28]. The goal is to ensure OpenAI isn’t bottlenecked by any single supplier or technology as it pushes toward artificial general intelligence.
Critically, AMD is not just selling chips for cash – it’s also giving OpenAI an equity stake. OpenAI received a warrant (a right to purchase shares in the future) allowing it to buy up to 160 million AMD shares at $0.01 each [29]. Those shares equate to roughly 10% of AMD. The catch: OpenAI can only exercise the warrant in stages if it meets performance milestones and AMD’s stock price reaches lofty targets (escalating up to $600 per share for the final batch) [30] [31]. For OpenAI, this is a potential windfall – if AMD’s stock explodes due to the AI boom, OpenAI would get a multi-billion dollar stake in AMD essentially for free [32]. For AMD, it was the cost of landing a marquee customer: effectively trading a slice of ownership for guaranteed revenue. “It aligns incentives,” argues AMD CFO Jean Hu – OpenAI only gains that stake if AMD’s chip sales and stock price truly skyrocket, which would presumably mean the partnership succeeded for both sides [33].
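As a quick sanity check on the warrant figures above, the arithmetic fits in a few lines of Python. The share count and penny strike price come from the reporting; the final line simply assumes AMD’s stock reaches the $600 milestone, so it is an upper-bound illustration rather than a forecast.

```python
# Warrant arithmetic using the figures reported above.
shares = 160_000_000            # warrant covers up to ~160 million AMD shares (~10% of AMD)
strike_price = 0.01             # exercise price per share, in USD
final_milestone_price = 600     # stock-price target tied to the final vesting tranche, in USD

exercise_cost = shares * strike_price                       # cost for OpenAI to exercise in full
stake_value_at_milestone = shares * final_milestone_price   # value if AMD trades at $600/share

print(f"Cost to exercise in full:  ${exercise_cost:,.0f}")              # $1,600,000
print(f"Stake value at $600/share: ${stake_value_at_milestone:,.0f}")   # $96,000,000,000
```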
AMD expects the revenue from OpenAI’s orders to be enormous – on the order of tens of billions per year, with a total impact of over $100 billion in the coming four years when accounting for follow-on business [34]. AMD’s strategy chief noted OpenAI “has a lot of influence over the broader ecosystem,” so winning this deal could attract many other AI customers to consider AMD [35]. In the words of AMD’s data center head Forrest Norrod, this is a “transformative” agreement that could reshape industry dynamics in AMD’s favor [36]. It positions AMD as a true contender to lead the next phase of the AI revolution – a remarkable turnaround for a company that just a few years ago was an underdog in both CPUs and GPUs.
From OpenAI’s perspective, the AMD deal is about securing as much compute as possible to maintain its edge. “Compute capacity is our limiting factor” has been a common refrain in AI circles, and OpenAI is moving aggressively to remove that limitation. Altman has described the current period as “a phase of the build-out where the entire industry’s got to come together and everybody’s going to do super well” [37] – meaning partnerships like this AMD alliance (or the Nvidia and Oracle deals before it) are necessary to pool resources and build the gargantuan infrastructure AI demands. OpenAI’s aspiration to eventually achieve AGI (artificial general intelligence) drives an almost insatiable appetite for more chips and data centers. This AMD agreement, alongside others, signals that OpenAI sees massive compute as a strategic moat against competitors [38]. The company is even willing to undertake complex financial engineering – like taking equity in suppliers – to secure that moat.
AMD Stock Skyrockets as Investors Bet on an AI Windfall
News of the OpenAI–AMD mega-deal set off euphoria on Wall Street. AMD’s stock price surged 25–35% in a single day, an extraordinary jump for a $200+ billion company [39]. The rally lifted AMD shares to around $164–$165 (near all-time highs) and valued the company at roughly $267 billion [40]. By some measures, it was AMD’s biggest one-day gain in nearly a decade [41], underscoring how much investors believe this deal matters for AMD’s future. The stock was already up ~30% year-to-date amid 2025’s AI frenzy [42], and this news added fuel to that fire.
The market’s reaction wasn’t just knee-jerk hype; it was also grounded in what the deal symbolizes: that AMD might finally break Nvidia’s stranglehold on AI computing. Nvidia – whose GPUs currently account for ~90% of AI workloads [43] – saw its stock dip a few percent on the announcement [44]. To be clear, Nvidia is still an AI behemoth valued at more than $4 trillion and recently reported record sales of its AI chips. But the OpenAI pact effectively anoints AMD as a credible second source for high-end AI silicon. “Investors interpreted the deal as validation of AMD’s competitiveness and evidence that Nvidia’s grip on AI accelerators could loosen,” Reuters reported [45]. For big cloud operators and AI startups alike, AMD is now on the map as an alternative to Nvidia’s GPUs – which could mean more business gravitating AMD’s way if it executes well.
AMD’s leadership is, unsurprisingly, exuberant. CEO Dr. Lisa Su framed the OpenAI partnership as a huge win-win: OpenAI gets the capacity to “push the boundaries of artificial intelligence,” while AMD expects tens of billions in new revenue, putting it “at the centre of the AI revolution” [46]. During a press call, Su emphasized the deal will be “highly accretive” to AMD’s earnings – implying it should significantly boost AMD’s profit in coming years [47]. AMD will also collaborate closely with OpenAI on chip design and software (like optimizing AMD’s ROCm software for OpenAI’s needs), potentially improving its products for other customers too [48]. This kind of deep co-design is something Nvidia has long done with its largest partners, so AMD matching that approach is another sign of its maturation [49].
Notably, this deal arrives just as AMD is rolling out a new generation of AI chips aimed at catching up with Nvidia’s top offerings. In June 2025, Lisa Su unveiled AMD’s latest Instinct accelerators and previewed the forthcoming MI400 series (including the MI450) along with a new “Helios” rack-scale system platform, all designed to compete head-to-head with Nvidia’s highest-end GPUs and upcoming products [50]. Those chips are expected to reach volume production over 2025–2026. Major tech players like Meta, Oracle, and even OpenAI itself have already been testing AMD’s MI300X accelerators in pilot programs [51]. Winning the OpenAI contract strongly suggests AMD’s silicon and software stack proved impressive in those trials. Still, AMD has a long road to truly challenge Nvidia – Jensen Huang’s company enjoys a massive ecosystem lock-in via its CUDA software, years of developer loyalty, and sheer scale. At ~94% market share and with a ~$4 trillion market cap, Nvidia remains the “unquestioned leader” in AI hardware for now [52]. But AMD’s stock surge shows that investors are buying the narrative that Nvidia’s lead is not unassailable.
Wall Street analysts are busily updating their models and price targets for AMD in light of the OpenAI news. HSBC analysts, for example, raised their target price to $200 and predicted a “big reversal” in fortunes as AMD’s AI revenues ramp up fast (projecting over $8 billion in AI-related sales this year) [53]. Multiple investment banks have upgraded AMD or hiked targets – UBS to $210, Northland to $198 – citing the OpenAI deal as a game-changer for AMD’s growth trajectory [54]. There is, however, a note of caution among some analysts: even after the pullback from its intraday peak, AMD’s stock is priced for perfection, trading around 94 times earnings – a steep valuation that assumes explosive profit growth ahead [55]. Skeptics point out that Nvidia still has far higher profit margins and a more proven AI platform, so AMD will need flawless execution to justify the hype [56]. In other words, the OpenAI partnership opens a huge opportunity for AMD – but also sets a high bar that the company must meet in the next 1–2 years.
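To see what a 94x earnings multiple implies in plain numbers, here is a hedged sketch using the roughly $267 billion market value cited above. The 25x “mature” multiple is an assumption chosen purely for illustration, not an analyst target.

```python
# What does a ~94x earnings multiple imply at a ~$267B market value?
market_cap = 267e9           # approximate post-rally valuation of AMD, in USD
pe_current = 94              # trailing price-to-earnings multiple cited by analysts
pe_illustrative = 25         # assumed "mature" multiple, chosen only for illustration

implied_earnings = market_cap / pe_current        # annual net income implied by today's price
earnings_needed = market_cap / pe_illustrative    # earnings needed to justify the price at 25x

print(f"Implied current earnings: ${implied_earnings / 1e9:.1f}B per year")
print(f"Needed at {pe_illustrative}x earnings:   ${earnings_needed / 1e9:.1f}B per year "
      f"(~{earnings_needed / implied_earnings:.1f}x growth)")
# => roughly $2.8B implied today vs. ~$10.7B needed, i.e. profits would have to nearly quadruple
```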
OpenAI’s Insatiable Demand: From “Stargate” Supercomputers to Global AI Clouds
For OpenAI, cutting a mega-deal with AMD is not about saving money – it’s about spending whatever it takes to secure enough computing power. In March 2025, CEO Sam Altman unveiled Project “Stargate,” a plan to build giant AI supercomputing centers (initially in the U.S.) aimed at delivering an unprecedented 1 gigawatt of computing by late 2026 [57]. That alone carries an estimated price tag of ~$100 billion for data centers, power, cooling and hardware. And that was before OpenAI started expanding the scope of its ambitions. Since then, OpenAI has launched a flurry of hardware initiatives: a $100 billion deal with Nvidia in September 2025 to provide 10 GW of Nvidia GPU-based systems [58]; a collaboration with Broadcom to co-design a custom AI chip tailored to OpenAI’s needs [59]; massive cloud-compute agreements with Oracle, such as a reported five-year $300 billion commitment for Oracle to supply 4.5 GW of cloud capacity to OpenAI [60]; and even exploratory projects with Microsoft and Google to tap their advanced chips (like Google’s TPUs) for certain tasks [61].
This multi-supplier strategy shows that OpenAI is desperate to avoid dependency on any single partner – a lesson perhaps learned from the GPU shortages that hit the AI industry in 2023–2024 when Nvidia’s H100 cards were scarce and precious. By signing on AMD, OpenAI now has at least two major GPU vendors in its stable, plus a custom silicon path via Broadcom, plus cloud deals with multiple giants. Altman has explicitly cast this as diversification for resilience: the AMD deal “does not change” OpenAI’s existing plans with Nvidia and Microsoft, a source told Reuters – it simply gives OpenAI more headroom and bargaining power in case of supply constraints or geopolitical export restrictions down the road [62].
OpenAI is also expanding globally, planning “sovereign AI” infrastructure in multiple countries [63]. The company has partnered with investors from different regions – for example, a sovereign fund from the United Arab Emirates is backing a new OpenAI data center in Abu Dhabi [64]. By building data centers overseas, OpenAI can offer AI services locally (complying with data sovereignty rules) and tap into local capital. Indeed, OpenAI’s need for capital is astounding: Reuters estimates the company may have to raise tens of billions of dollars more to finance its rapid hardware build-out [65]. There are reports that OpenAI is considering a corporate restructuring and new share offerings to bring in outside investors without disturbing its hybrid for-profit/non-profit governance model [66]. In October 2025, the company restructured its landmark deal with Microsoft, potentially to free up equity for other backers going forward [67].
All these moves underscore a reality of today’s AI race: the bottleneck isn’t ideas or talent – it’s compute. As OpenAI trains ever-larger models (GPT-5 is rumored to be in the works, as well as multi-modal and video AI systems), it requires exponentially more computing power. That in turn is reshaping the industry’s economics. OpenAI is no longer just a research lab; it’s becoming akin to a “critical infrastructure” provider [68], building and owning AI supercomputers around the world. Some observers have compared this moment to an arms race or the space race – except instead of nukes or rockets, the coveted asset is petaflops of AI processing might. OpenAI’s president Greg Brockman has talked about building out a “global AI infrastructure” in partnership with firms like AMD, emphasizing open standards and broad access to AI capabilities [69]. The upshot: to stay at the forefront of AI, OpenAI is securing every GPU it possibly can, even if that means spending unprecedented sums and entangling itself financially with its suppliers.
The Emerging AI “Mega-Blob”: An Industry Bound by Billions
This OpenAI–AMD alliance is just one strand in a much larger web that journalist Scott Rosenberg has dubbed the AI industry’s “mega-blob” [70]. The term captures how previously distinct entities – AI model developers, chipmakers, cloud providers, investors, even governments – are now locking arms in giant partnerships that blur the lines between customer, supplier, and shareholder. OpenAI sits at the center of this web. Consider the ties now binding it: Microsoft is OpenAI’s longtime backer (more than $13 billion invested) and primary distribution partner (integrating ChatGPT into Windows, Office, Bing). Nvidia, ostensibly a vendor, has become an investor too – in September it agreed to invest up to $100 billion in OpenAI, money that OpenAI in turn will spend on Nvidia hardware [71]. Oracle isn’t just a cloud provider to OpenAI; OpenAI is effectively funding Oracle’s data centers with that $300 billion deal, and Oracle will pay OpenAI back for AI services over time [72]. Now AMD is both a supplier and (potentially) partly owned by OpenAI if the warrant is exercised.
The financial circularity is striking: Company A funds Company B, which then spends that money on A’s products [73]. This kind of arrangement can be seen as a virtuous alliance – or as a precarious house of cards. Gary Marcus, an AI critic, described the current situation as a “shell game” where money is shuffled around to prop up an AI boom without a clear sustainable model [74]. The Axios analysis notes that AI firms are increasingly behaving like one giant, “dollar-eating, energy-sucking entity” that justifies its vast costs by promises of future utopian AI breakthroughs [75]. And it’s not just companies – even the U.S. government is getting involved in the equity side. Through the CHIPS Act funding, for instance, the U.S. demanded an unusual 10% warrant in Intel for grants to build fabs [76], effectively making Uncle Sam a minority shareholder in a private-sector chipmaker. We are witnessing an unprecedented public-private intertwining in the tech sector, all driven by the strategic importance of AI.
This entanglement carries systemic risks. The AI sector is now so concentrated and interdependent that if one major player falters, the pain could spread quickly. As Axios observed, OpenAI – flush with potential yet also spending enormous sums – is “propping up much of the AI industry, and the industry is propping up much of the U.S. economy” [77]. Should OpenAI stumble (say, a major product failure or a funding shortfall), “everyone else will be on the hook too” [78]. The fortunes of chipmakers (Nvidia, AMD, Broadcom), cloud providers (Microsoft, Amazon, Google, Oracle), and myriad AI startups are now deeply tied to OpenAI’s trajectory. This kind of mutual dependence is reminiscent of a bubble, skeptics warn – not unlike how intertwined financing and opaque bets led to the dot-com bust or the 2008 financial crisis [79]. In those cases, feedback loops of optimistic investment obscured underlying weaknesses until things abruptly collapsed.
Optimists counter that what’s different now is the reality of AI demand – ChatGPT’s explosive adoption, enterprise AI projects, and national AI initiatives suggest a more solid foundation than the froth of 1999. Indeed, one could argue these alliances are simply what it takes to build something as profound as superintelligent AI: it’s a moonshot-scale endeavor requiring cooperation among the world’s most valuable companies. Even rival AI labs are cross-pollinating: OpenAI’s competitor Anthropic has both Google and Amazon as major investors/partners, and interestingly Microsoft is using Anthropic’s AI in some products despite backing OpenAI [80]. Such is the weirdly coopetitive nature of the AI boom – everyone both competes and collaborates to avoid being left behind.
For everyday people and policymakers, the emergence of this AI mega-blob raises questions. Will these tech mega-deals lead to monopolies or oligopolies controlling AI? Governments have so far been supportive (Washington treats domestic AI capacity as a strategic asset, funding chip fabs and allowing deals that keep AI leadership in U.S. hands [81]). But there’s a fine line between productive partnership and anti-competitive collusion. If a few giant players collectively control AI compute and talent, it could stifle newcomers and concentrate power further. Regulators may eventually scrutinize arrangements like Nvidia essentially bankrolling its own customers, or OpenAI gaining influence over a chip supplier, for potential conflicts of interest. For now, though, the frenzy continues unchecked – with trillions of dollars of notional commitments being poured into building the future of AI.
Outlook: Golden Opportunities and Giant Challenges Ahead
The OpenAI–AMD pact underscores both the extraordinary promise and the daunting challenges of the AI compute race. For AMD, this is a once-in-a-generation opportunity to establish itself as an equal player to Nvidia in the highest-end market. If all goes well, by 2026–2027 AMD could be supplying a significant share of the world’s top AI data centers. Its revenues and profits would swell accordingly, potentially justifying the investor optimism that has driven AMD’s stock sharply higher in 2025. AMD’s rise – from near-bankruptcy in 2015 to a $270 billion titan in 2025 – is already one of tech’s great comeback stories [82]. Succeeding in AI would crown that comeback, and CEO Lisa Su’s strategy of betting big on GPUs would be vindicated.
However, the execution risk for AMD is immense. The company must deliver new GPU architectures (the MI450 and beyond) that can meet or beat Nvidia’s performance per dollar – and do so on a tight timeline. Nvidia is not standing still; by 2026 it will be moving beyond its “Blackwell” GPUs to the next-gen “Vera Rubin” chips referenced in its OpenAI deal [83], which could leap ahead in speed or efficiency. If AMD’s chips underperform or arrive late, OpenAI could end up under-utilizing that 6 GW commitment – or even press to renegotiate. There’s also a software gap: much of the AI world runs on Nvidia’s CUDA platform, and while AMD’s ROCm is maturing, it still lacks the rich developer ecosystem of CUDA. OpenAI’s support will help, but converting AI researchers and engineers to an AMD-based stack is a gradual process.
Then there’s the supply chain and manufacturing challenge. Ramping up to six gigawatts of GPU compute is not as simple as flipping a switch. It requires huge volumes of cutting-edge silicon fab capacity (likely at TSMC for AMD’s chips) and advanced packaging for AI chips. Already, the industry faces shortages of High Bandwidth Memory (HBM) – the ultra-fast memory these GPUs need – and AMD’s MI300X/MI450 rely heavily on HBM from suppliers like Samsung and SK Hynix [84]. Any hiccup in those supplies could delay deliveries. AMD is reportedly exploring partnerships with Intel’s foundry services to get additional chip production capacity, an almost unthinkable collaboration between long-time rivals driven by the feverish demand for AI chips [85]. The geopolitical climate adds uncertainty: U.S. export controls might limit where OpenAI/AMD can deploy these chips (e.g., restrictions on selling top AI chips to China), and tensions over Taiwan’s semiconductor industry remain a background risk.
For OpenAI, the future will depend on how effectively it can turn all this raw compute power into breakthrough AI products – and revenue. Thus far, ChatGPT and its API services have been wildly popular but also expensive to run, and OpenAI’s profitability remains an open question given its heavy R&D spend. The company’s valuation has been soaring (reportedly $80–90 billion in private markets in 2025), predicated on the idea that it will monetize advanced AI at scale. With each new partnership, OpenAI is also taking on obligations – to use tens of billions of dollars of cloud credits from Oracle, to buy 6 GW of GPUs from AMD, to deliver returns for Microsoft, and so on. If consumer or enterprise demand for AI were to soften, OpenAI could find itself over-extended. But right now, demand shows no sign of slackening; if anything, AI capabilities are accelerating and igniting new applications (from AI copilots to generative media). OpenAI’s gamble is that investing now in maximal compute will cement a lasting leadership in AI that eventually yields AGI and world-changing products – an outcome that would indeed justify the colossal costs.
In the near term, one thing to watch is whether this AI investment boom sustains its momentum or hits turbulence. Some market watchers worry that AI has become a bit of a bubble, with every positive headline (like AMD’s deal) sparking FOMO-driven stock buying [86]. The enthusiasm has pushed tech valuations to extremes; any stumble – say a training failure, a regulatory crackdown, or simply a sales miss – could trigger a sharp correction. The circular funding loop in the AI sector means a negative shock could cascade: for instance, if OpenAI struggled, that could hurt its backers (Microsoft, Nvidia, AMD, etc.), which in turn could spook investors further. On the other hand, there’s a strong argument that AI is a genuine paradigm shift – a new industrial revolution – and those don’t come cheap but do eventually pay off in transformative growth. As an executive at one investment firm put it, “AI remains a great place to invest”, with the OpenAI–AMD deal being just the latest proof that the big players are all-in for the long haul [87].
In summary, OpenAI’s blockbuster partnership with AMD marks a milestone in the AI arms race. It sends a clear signal that the era of multi-billion-dollar bets on AI infrastructure is here, and even industry giants will intertwine their fates to chase the AI opportunity. AMD gets a shot at challenging a rival 20 times its size. OpenAI secures more of the lifeblood (compute) it needs to push AI’s boundaries. The entire tech ecosystem watches on, because so much now rides on the success of these endeavors. As the AI “mega-blob” grows larger and more entangled, the potential rewards – and risks – only amplify. “The more entangled AI firms get with one another, the more likely any setback to one will turn into a calamity for all,” Axios warns [88]. In other words, this grand collaboration could either herald an age of astonishing AI breakthroughs – or, if things go awry, become a cautionary tale of hubris in the hype cycle. For now, though, the participants are forging ahead at full throttle, united by the belief that to make AI big, you have to build big – and this is about as big as it gets.
Sources: OpenAI/AMD press release and analysis [89] [90] [91]; Reuters and Yahoo Finance reporting [92] [93]; Axios “mega-blob” commentary [94] [95]; Tech industry analyses (TS2) [96] [97]; Real Investment Advice [98]; and others.
References
1. ts2.tech, 2. timesofindia.indiatimes.com, 3. ts2.tech, 4. ts2.tech, 5. ts2.tech, 6. timesofindia.indiatimes.com, 7. ts2.tech, 8. ts2.tech, 9. ts2.tech, 10. ts2.tech, 11. ts2.tech, 12. www.axios.com, 13. www.axios.com, 14. realinvestmentadvice.com, 15. www.axios.com, 16. www.axios.com, 17. www.fool.com, 18. ts2.tech, 19. ts2.tech, 20. ts2.tech, 21. www.axios.com, 22. ts2.tech, 23. ts2.tech, 24. ts2.tech, 25. ts2.tech, 26. ts2.tech, 27. ts2.tech, 28. ts2.tech, 29. ts2.tech, 30. ts2.tech, 31. ts2.tech, 32. ts2.tech, 33. ts2.tech, 34. ts2.tech, 35. ts2.tech, 36. ts2.tech, 37. www.axios.com, 38. ts2.tech, 39. timesofindia.indiatimes.com, 40. ts2.tech, 41. timesofindia.indiatimes.com, 42. ts2.tech, 43. ts2.tech, 44. ts2.tech, 45. ts2.tech, 46. ts2.tech, 47. ts2.tech, 48. ts2.tech, 49. ts2.tech, 50. ts2.tech, 51. ts2.tech, 52. ts2.tech, 53. ts2.tech, 54. ts2.tech, 55. ts2.tech, 56. ts2.tech, 57. ts2.tech, 58. ts2.tech, 59. ts2.tech, 60. realinvestmentadvice.com, 61. ts2.tech, 62. ts2.tech, 63. ts2.tech, 64. www.axios.com, 65. ts2.tech, 66. ts2.tech, 67. www.axios.com, 68. ts2.tech, 69. ts2.tech, 70. www.axios.com, 71. www.axios.com, 72. realinvestmentadvice.com, 73. www.axios.com, 74. www.axios.com, 75. www.axios.com, 76. www.axios.com, 77. www.axios.com, 78. www.axios.com, 79. www.axios.com, 80. www.axios.com, 81. www.axios.com, 82. ts2.tech, 83. ts2.tech, 84. ts2.tech, 85. ts2.tech, 86. ts2.tech, 87. x.com, 88. www.axios.com, 89. ts2.tech, 90. ts2.tech, 91. ts2.tech, 92. timesofindia.indiatimes.com, 93. ts2.tech, 94. www.axios.com, 95. www.axios.com, 96. ts2.tech, 97. ts2.tech, 98. realinvestmentadvice.com