
AI News Today, November 22, 2025: Nvidia’s Mega Earnings, Regulation Battles, Classroom Chatbots and Cosmic Simulations

Artificial intelligence didn’t take the weekend off. On November 22, 2025, AI is reshaping chip markets, igniting political fights in Washington and Brussels, stepping into classrooms in Athens, helping patients fight insurers — and even modeling every star in the Milky Way.

Here’s a detailed, Google‑News‑ready rundown of the biggest AI stories you need to know today, plus what they mean for the months ahead.


1. AI Chips and Hardware: From Nvidia’s Record Quarter to New Data‑Center Alliances

Nvidia’s blowout quarter keeps the “AI bubble” debate alive

Chip giant Nvidia posted another monster quarter, reporting $57 billion in sales over just three months, driven by near‑insatiable demand for AI accelerators used in data centers and model training.  [1]

The results briefly calmed fears of an AI stock bubble, lifting sentiment after a days‑long tech sell‑off. But the relief rally was short‑lived: Nvidia’s own shares slipped again as investors wrestled with one big question — is AI a durable economic shift or a classic bubble with very expensive hardware?  [2]

Analysts and executives are split:

  • Optimists argue this is a once‑in‑a‑generation platform shift, comparable to the arrival of the internet or electrification, with AI spending projected to reach about $375 billion this year and roughly $500 billion by 2026.  [3]
  • Skeptics warn that AI’s real‑world revenue still lags far behind the hundreds of billions going into chips and data centers, raising the risk of overcapacity and painful write‑downs later.  [4]

For now, one fact is hard to ignore: AI capex is already estimated to have added around 0.5 percentage points to U.S. GDP growth in the first half of 2025, roughly a third of all economic expansion, underscoring how tightly the wider economy is now tied to AI infrastructure.  [5]
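
For a sense of scale, here is a quick back‑of‑envelope check of those figures in Python; the total‑growth number is not reported directly and is only implied by the article’s “roughly a third” claim.

```python
# Back-of-envelope check of the figures above (illustrative arithmetic only).
ai_capex_contribution_pp = 0.5   # percentage points of H1 2025 GDP growth attributed to AI capex
share_of_total_growth = 1 / 3    # "roughly a third of all economic expansion"

implied_total_growth_pp = ai_capex_contribution_pp / share_of_total_growth
print(f"Implied total H1 2025 GDP growth: ~{implied_total_growth_pp:.1f} percentage points")
# Prints ~1.5, i.e. AI infrastructure spending alone accounts for about 0.5 of those points.
```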

Meanwhile, high‑profile investors like billionaire fund manager Bill Ackman are reshuffling their AI portfolios, selling stakes in headline AI names even as Wall Street strategists argue those moves could be premature.  [6] The message from markets today: AI is still hot, but it’s no longer a one‑way trade.


Hon Hai + OpenAI: AI hardware manufacturing heads to Ohio

In the industrial world, Hon Hai Precision Industry (Foxconn) announced progress on a major partnership with OpenAI to build next‑generation AI infrastructure hardware in the United States.  [7]

Key points from the company’s update:

  • R&D for the new hardware will start in San Jose, California, before production shifts to an existing Hon Hai complex in Ohio, already repurposed from electric‑vehicle manufacturing to AI data‑center development.  [8]
  • The deal focuses on data‑center components and cooling systems designed specifically for large AI workloads, aiming to strengthen and simplify the U.S. AI hardware supply chain.  [9]
  • OpenAI will get early access to evaluate the systems, with options — but not obligations — to purchase.  [10]

Combined with Hon Hai’s separate plan to build an Nvidia GB300‑powered AI data center in Taiwan by mid‑2026, the company is positioning itself as a core manufacturing partner in the AI era, from the U.S. Midwest to East Asia.  [11]


Huawei’s Flex:ai pushes for software‑defined efficiency

In China, Huawei unveiled Flex:ai, an open‑source orchestration platform that promises to boost utilization of AI chips — GPUs, NPUs and other accelerators — by roughly 30%, according to the company.  [12]

What Flex:ai does:

  • Slices a single GPU or NPU into multiple virtual compute units, allowing several AI workloads or experiments to share one physical card.  [13]
  • Uses Kubernetes to dynamically schedule AI jobs across hardware, squeezing more performance out of existing chips.  [14]
  • Was co‑developed with researchers at Shanghai Jiao Tong University, Xi’an Jiaotong University and Xiamen University, reflecting tight academic–industry collaboration.  [15]

Flex:ai is widely seen as part of Beijing’s strategy to work around Western chip export controls, leaning on software and systems engineering to offset limited access to cutting‑edge semiconductor nodes.  [16]
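
Flex:ai’s internals have not been published in detail, so the snippet below is only a hypothetical Python illustration of the fractional‑slicing idea, not Flex:ai’s scheduler or API: small jobs that would otherwise each monopolize a whole accelerator are packed onto virtual slices, and average utilization rises.

```python
# Hypothetical illustration of fractional accelerator slicing -- not Flex:ai's real scheduler or API.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gpu_fraction: float  # share of one accelerator the job actually needs

def whole_card_utilization(jobs, num_cards):
    """One job per physical card: only num_cards jobs run, and each card sits mostly idle."""
    placed = jobs[:num_cards]
    return sum(j.gpu_fraction for j in placed) / num_cards

def sliced_utilization(jobs, num_cards, slice_size=0.25):
    """First-fit packing of jobs onto fractional slices of each card."""
    free = [1.0] * num_cards                            # remaining capacity per card
    used = 0.0
    for job in jobs:
        demand = max(slice_size, job.gpu_fraction)      # round demand up to the slice granularity
        for i, capacity in enumerate(free):
            if capacity >= demand:
                free[i] -= demand
                used += job.gpu_fraction
                break
    return used / num_cards

jobs = [Job(f"experiment-{i}", 0.3) for i in range(8)]  # eight small fine-tuning/evaluation jobs
print("whole-card allocation:", whole_card_utilization(jobs, num_cards=4))  # 0.30
print("fractional slices:    ", sliced_utilization(jobs, num_cards=4))      # 0.60: jobs share cards
```

In Flex:ai’s case, the report says this kind of packing is driven by Kubernetes scheduling across the cluster rather than a toy first‑fit loop, which is where the claimed utilization gains would come from.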


AI even moves the metals market

It’s not just chips. The tin market is also reacting to AI demand. A new analysis notes that tin prices are stabilizing and starting to rebound as AI‑driven data‑center growth collides with tightening supply, increasing the strategic value of exploration projects in Africa.  [17]

Tin is a critical material for solder and advanced packaging. The takeaway: as AI infrastructure scales, expect more obscure raw materials to suddenly become geopolitically important.


2. Governments Wrestle With AI Regulation — From Washington to Brussels

A U.S. push to ban state‑level AI laws runs into trouble

On Capitol Hill, a controversial proposal to ban U.S. states from passing most new AI regulations is struggling to gain traction, despite a late push from President Donald Trump to attach the measure to this year’s must‑pass National Defense Authorization Act (NDAA).  [18]

  • The plan would preempt state AI laws, effectively centralizing power over AI regulation in Washington and nullifying many emerging state rules on deepfakes, automated decision‑making and online safety.  [19]
  • Senior lawmakers from both parties have pushed back, describing the proposal as a “giveaway to Big Tech” that weakens states’ rights and bypasses normal legislative scrutiny.  [20]
  • A leaked draft executive order that might have tried to strong‑arm states by threatening funding or lawsuits has reportedly been shelved for now.  [21]

The clash comes as more than half of U.S. states have enacted some form of AI legislation in 2025, ranging from transparency rules to sector‑specific health‑care protections.  [22]


TIME op‑ed: Blocking AI regulation “endangers kids”

Adding fuel to the debate, a TIME opinion piece today argues that efforts to preempt state AI rules would “endanger American kids.” The article points to a recent congressional hearing on chatbot harms and a string of troubling incidents involving AI systems interacting inappropriately with minors.  [23]

The author’s core arguments, in summary:

  • State‑level guardrails have become a first line of defense against unsafe chatbots and AI products aimed at children.
  • Polls show broad bipartisan public support for “reasonable AI guardrails,” making blanket preemption both unpopular and risky.  [24]
  • Without any comprehensive federal AI law yet in place, stripping states of power would effectively lock in a regulatory vacuum just as AI products scale into classrooms, toys and social platforms.  [25]

While it is an opinion piece rather than neutral reporting, the op‑ed reflects growing frustration among safety advocates who see the current AI boom outpacing the rules meant to keep people — especially kids — safe.


EU delays “high‑risk” AI rules to 2027

Across the Atlantic, Brussels is re‑tuning its own AI rulebook. The European Commission this week proposed a “Digital Omnibus” package that would delay enforcement of strict “high‑risk” AI rules from August 2026 to December 2027, after intense lobbying from major tech companies.  [26]

The delay affects high‑risk applications such as:

  • AI for biometric identification and surveillance
  • AI used in job applications and exams
  • Systems used in health services, utilities and credit scoring  [27]

At the same time, proposed tweaks to privacy rules could allow companies like Google, Meta and OpenAI to use more European personal data to train AI models, under certain conditions.  [28]

EU officials insist that “simplification is not deregulation,” but critics worry the bloc is slowly softening its much‑touted AI Act just as large models and foundation systems become deeply embedded in daily life.  [29]


3. Kids, Classrooms and AI: Toys, Teachers and Safety Fears

Advocacy groups warn parents away from AI toys this holiday season

With the holiday shopping season underway, children’s advocacy groups are sounding the alarm on AI‑powered toys marketed to kids as young as two years old.  [30]

In a widely circulated advisory:

  • The group Fairplay, backed by more than 150 organizations and experts, warns that toys built on general‑purpose chatbots can discuss sexually explicit topics, suggest dangerous items like knives, and form unhealthy emotional dependencies, sometimes with limited parental controls.  [31]
  • U.S. PIRG’s latest “Trouble in Toyland” report says some AI toys explored dark or inappropriate content during tests, prompting at least one manufacturer to pull a product from the market.  [32]
  • Child development specialists quoted in the report argue that AI companions may displace imaginative play and reduce opportunities to practice creativity, language and problem‑solving with real people.  [33]

Toy makers, for their part, emphasize their safety filters and parental dashboards — but advocates still recommend low‑tech toys and human interaction as the safest bet for younger kids this year.  [34]


Greece launches an AI‑in‑the‑classroom pilot with OpenAI

In Greece, AI is heading into secondary schools in a far more structured way. The government has signed a deal with OpenAI to deploy a specialized version of ChatGPT, known as ChatGPT Edu, as part of an intensive training program for teachers.  [35]

Key details from today’s report:

  • Staff at 20 secondary schools will be trained this week on how to use the custom chatbot for lesson planning, research and personalized tutoring, with a national rollout planned for January.  [36]
  • Older students are expected to gain tightly monitored access next spring, making Greece one of the first countries in Europe to formally integrate generative AI into everyday classroom practice.  [37]
  • Officials frame the move as preparing citizens for an AI‑first economy, but some students and educators worry about creativity, critical thinking and screen addiction, especially in a country that is also moving to block social‑media access for under‑15s.  [38]

OpenAI has pledged to ensure “best practices for safe, effective classroom use,” while teacher unions and student groups debate whether Greece is pioneering the future of education — or turning pupils into test subjects for unproven technology.  [39]


4. AI in Health and Mental Health: From Insurance Appeals to New Research Institutes

“AI vs. AI”: Patients use bots to fight AI‑driven claim denials

In U.S. health care, a striking trend is emerging: patients are turning to AI to fight AI.

As insurers increasingly use automated systems and algorithms to deny claims or demand prior authorizations, startups and nonprofits are deploying their own AI tools to help patients push back.  [40]

A feature today highlights services such as:

  • Sheer Health, which uses AI plus human experts to scan insurance accounts, upload bills and explain complex benefits; paid tiers help patients challenge denials.  [41]
  • Counterforce Health, a North Carolina nonprofit that built an AI assistant to analyze denial letters, match them against policy language and medical research, and draft customized appeal letters — for free.  [42]

The article notes that:

  • Around a quarter of adults under 30 say they use AI chatbots at least once a month for health information.  [43]
  • More than a dozen U.S. states have already passed laws regulating AI in health care this year, as regulators try to keep up with automated claims systems.  [44]

Experts quoted in the piece warn that health care risks becoming a “robotic tug‑of‑war” between insurers’ AI and patients’ AI — with humans stuck in the middle and real medical needs on the line.  [45]

If you’re a patient, these tools can be useful for paperwork and language, but they are not a substitute for medical advice or legal counsel. The story underlines the need for strong oversight, clear appeal rights and human review of high‑stakes decisions.
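
Mechanically, tools like these parse the denial, pair it with the relevant policy language and ask a language model to draft an appeal. The sketch below is a hypothetical Python outline of that flow, not the actual pipeline of Sheer Health or Counterforce Health; the example inputs are made up, and the model call itself is left as a comment rather than tied to any specific provider.

```python
# Hypothetical sketch of how an AI-assisted appeal drafter assembles its input -- not any
# vendor's real pipeline.

def build_appeal_prompt(denial_letter: str, policy_excerpt: str, clinical_notes: str) -> str:
    """Combine the denial, the relevant policy language and clinical context into one prompt."""
    return (
        "You are helping a patient appeal a health-insurance denial.\n\n"
        f"Denial letter:\n{denial_letter}\n\n"
        f"Relevant policy language:\n{policy_excerpt}\n\n"
        f"Clinical context:\n{clinical_notes}\n\n"
        "Draft a concise, factual appeal letter that cites the policy language above and asks "
        "for the specific denial reason to be reconsidered."
    )

prompt = build_appeal_prompt(
    denial_letter="Claim 12345 denied: service deemed not medically necessary.",
    policy_excerpt="Section 4.2: physical therapy is covered when prescribed by a physician.",
    clinical_notes="Prescribed 12 sessions of physical therapy following knee surgery.",
)
# The prompt is then sent to whichever language model the service uses, and the returned draft
# is reviewed by a human (patient, advocate or clinician) before anything is filed.
print(prompt)
```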


New $20 million AI institute for mental health at Brown University

At the research level, Brown University formally launched a national institute on AI in mental and behavioral health, backed by a five‑year, $20 million grant from the U.S. National Science Foundation.  [46]

The AI Research Institute on Interaction for AI Assistants (ARIA) brings together computer scientists, psychologists, neuroscientists and clinicians from multiple universities. Its goals include:  [47]

  • Designing AI assistants that can interact with people in sensitive contexts — like mental health support — in ways that are safer, more trustworthy and more context‑aware.
  • Advancing interpretability research, to better understand how AI systems arrive at their responses.
  • Improving adaptability and personalization, so models respond appropriately to different users and situations.
  • Practicing participatory design, bringing patients, clinicians and other stakeholders into the loop when building and evaluating AI systems.

Researchers are even exploring how to create something like a “Consumer Reports”‑style trustworthiness score for mental‑health AI tools, so clinicians and the public can compare systems more easily.  [48]

The institute’s launch is a reminder that, while commercial wellness chatbots are proliferating, there is a parallel push in academia to set evidence‑based standards — and to ensure AI augments, rather than replaces, professional care.


5. Breakthroughs at the Frontiers of Science and Robotics

AI simulates every star in the Milky Way

On the scientific frontier, researchers at RIKEN’s Center for Interdisciplinary Theoretical and Mathematical Sciences have unveiled what amounts to a digital twin of the Milky Way, simulating all 100 billion stars over 10,000 years of galactic time — something long considered computationally impossible.  [49]

The trick:

  • They used traditional high‑resolution physics simulations to model complex events like supernova explosions, then trained a deep‑learning surrogate model to predict how gas expands for tens of thousands of years after each blast.  [50]
  • This AI shortcut handles the ultra‑fast, small‑scale physics, freeing the main simulation to track the galaxy‑wide dynamics without needing decades of supercomputer time.  [51]
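
The surrogate idea itself is general and easy to sketch. The toy Python example below (using scikit‑learn) is not the RIKEN team’s model or data: an “expensive” function stands in for the small‑scale supernova physics, a small neural network is trained on a limited budget of full runs, and the cheap surrogate then answers queries inside the larger simulation loop.

```python
# Toy surrogate-model workflow -- illustrative only, not the RIKEN pipeline or data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params: np.ndarray) -> np.ndarray:
    """Stand-in for a costly high-resolution run (e.g. post-supernova gas expansion)."""
    energy, density = params[:, 0], params[:, 1]
    return np.sqrt(energy / density)        # a nonlinear response we pretend is slow to compute

rng = np.random.default_rng(0)
X_train = rng.uniform(0.1, 1.0, size=(500, 2))       # a limited budget of full-physics runs
y_train = expensive_simulation(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# Inside the galaxy-scale loop, the surrogate replaces the expensive call.
X_query = rng.uniform(0.1, 1.0, size=(5, 2))
print("surrogate:", surrogate.predict(X_query))
print("reference:", expensive_simulation(X_query))
```

The payoff in miniature is the same as at galaxy scale: the costly calculation is paid for a few hundred times during training, then amortized across a vastly larger number of queries.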

The team notes that the same approach could transform simulations in climate science, weather prediction and ocean dynamics, where models must juggle processes from microscopic to planetary scales.  [52]

In other words: AI isn’t just generating text and images — it’s becoming a core tool for scientific discovery, compressing what used to be decades of compute into more manageable workloads.


A soft AI patch that lets you control robots with gestures

At UC San Diego, engineers have developed a soft, wearable patch that wraps around the forearm and lets people control robots and devices with simple gestures — even while running, shaking or moving through turbulent environments.  [53]

The prototype, reported today, combines:  [54]

  • Stretchable motion and muscle sensors
  • A compact Bluetooth microcontroller and flexible battery
  • An on‑device deep‑learning model trained on noisy data from real‑world motion — including simulated ocean waves — to filter out “motion noise” and focus on the underlying gesture

When the wearer performs a gesture, the patch cleans the sensor data in real time and sends a command to a connected robot or machine.
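
To make that “clean, then classify” step concrete, here is a generic Python sketch using a moving‑average filter and nearest‑template matching; the real patch relies on an on‑device deep‑learning model trained on motion‑corrupted data, so treat this purely as a stand‑in for the idea.

```python
# Generic 'denoise then classify' sketch -- a stand-in for the patch's on-device model.
import numpy as np

def smooth(signal: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average filter to suppress high-frequency motion noise."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def classify(signal: np.ndarray, templates: dict) -> str:
    """Nearest-template matching on the cleaned signal."""
    return min(templates, key=lambda name: np.linalg.norm(signal - templates[name]))

t = np.linspace(0, 1, 100)
templates = {
    "open_hand": np.sin(2 * np.pi * t),
    "fist":      np.sign(np.sin(2 * np.pi * t)) * 0.5,
}

rng = np.random.default_rng(1)
raw = templates["fist"] + rng.normal(0, 0.4, size=t.size)   # gesture buried in motion noise
gesture = classify(smooth(raw), templates)
print("detected gesture:", gesture)   # would then be mapped to e.g. a grip command for a robot
```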

Potential applications include:

  • Assistive robotics for people with mobility impairments
  • Hands‑free control for industrial workers and first responders in chaotic settings
  • Underwater robotics, where conventional controllers fail in rough conditions  [55]

It’s an early but vivid glimpse of how AI‑enhanced wearables could become the interface layer between humans and fleets of autonomous machines.


6. The Bigger Picture: Where Today’s AI News Is Pointing

Put together, today’s AI headlines sketch a clear trajectory:

  • AI is now core economic infrastructure. Nvidia’s earnings and the Hon Hai–OpenAI partnership show that data centers and accelerators are the new railroads and power plants — with supply chains stretching from Ohio to Taiwan and software like Huawei’s Flex:ai trying to wring every last FLOP out of constrained hardware.  [56]
  • Regulators are playing catch‑up — and arguing over who gets to regulate whom. The U.S. fight over state preemption, the EU’s delayed high‑risk rules and fierce debates about children’s safety all underline how fragmented AI governance remains.  [57]
  • AI is moving deeper into everyday life: into schools in Greece, toys in the living room, insurance appeals and mental‑health research labs. That raises huge questions about consent, oversight and the right balance between automation and human judgment.  [58]
  • At the frontier, AI is becoming a scientific instrument. From galaxy‑scale simulations to motion‑robust human–robot interfaces, AI is helping tackle problems that classical methods alone couldn’t practically address.  [59]

For policymakers, investors and everyday users, the signal from November 22, 2025, is clear:

AI is no longer a niche technology. It’s a contested layer of infrastructure, economics and social life — and the choices made now on safety, governance and access will echo for decades.


References

1. abcnews.go.com, 2. abcnews.go.com, 3. abcnews.go.com, 4. abcnews.go.com, 5. abcnews.go.com, 6. www.fool.com, 7. focustaiwan.tw, 8. focustaiwan.tw, 9. focustaiwan.tw, 10. focustaiwan.tw, 11. focustaiwan.tw, 12. coincentral.com, 13. coincentral.com, 14. coincentral.com, 15. coincentral.com, 16. coincentral.com, 17. www.cruxinvestor.com, 18. www.washingtonexaminer.com, 19. www.washingtonexaminer.com, 20. www.washingtonexaminer.com, 21. www.washingtonexaminer.com, 22. www.ncsl.org, 23. time.com, 24. time.com, 25. time.com, 26. www.reuters.com, 27. www.reuters.com, 28. www.reuters.com, 29. www.reuters.com, 30. www.lockhaven.com, 31. www.lockhaven.com, 32. www.lockhaven.com, 33. www.lockhaven.com, 34. www.lockhaven.com, 35. www.theguardian.com, 36. www.theguardian.com, 37. www.theguardian.com, 38. www.theguardian.com, 39. www.theguardian.com, 40. www.northcarolinahealthnews.org, 41. www.northcarolinahealthnews.org, 42. www.northcarolinahealthnews.org, 43. www.northcarolinahealthnews.org, 44. www.northcarolinahealthnews.org, 45. www.northcarolinahealthnews.org, 46. www.brown.edu, 47. www.brown.edu, 48. www.brown.edu, 49. www.universetoday.com, 50. www.universetoday.com, 51. www.universetoday.com, 52. www.universetoday.com, 53. punemirror.com, 54. punemirror.com, 55. punemirror.com, 56. abcnews.go.com, 57. www.washingtonexaminer.com, 58. www.theguardian.com, 59. www.universetoday.com
