AI Weekly: OpenAI’s ‘Code Red’, Mistral 3, Runway Gen‑4.5 and New AI Rules – What We Learned About Artificial Intelligence (Dec 1–7, 2025)

From OpenAI sounding a “code red” alarm to governments floating AI Bills of Rights, this week in artificial intelligence was packed with big product launches, regulatory moves, and fresh research on AI’s climate footprint. Here’s a detailed, news‑style rundown of the most important AI developments between 1 and 7 December 2025—and why they matter.


1. OpenAI declares “code red” as Google’s Gemini 3 turns up the heat

OpenAI spent the week in crisis‑mode messaging.

According to an internal memo reported by The Guardian, CEO Sam Altman told staff that ChatGPT is in a “critical time” and declared a “code red” after Google’s new Gemini 3 model outperformed rivals on key benchmarks, threatening OpenAI’s dominance in consumer AI. The Guardian

Key points:

  • Altman warned employees that Gemini 3’s strong performance could create “temporary economic headwinds” for OpenAI and that external “vibes” could be rough as users and enterprises test Google’s latest model. The Guardian
  • OpenAI is reportedly reallocating internal resources to make ChatGPT faster, smarter, and more multimodal, with particular focus on reasoning and enterprise use cases.

At the same time, reporting from The Verge says OpenAI is preparing to ship GPT‑5.2, a major update to the model behind ChatGPT, as soon as next week—framed internally as part of that same “code red” response. The Verge

Why it matters:
This week underscored that the “frontier model race” is now a two‑horse sprint between Google and OpenAI, with each side using both PR and product releases to signal momentum. For businesses, it’s a reminder that:

  • Capabilities and price/performance are changing week to week, not year to year.
  • Model switching costs are falling, especially as APIs converge and tools like vector databases and orchestration frameworks support multiple providers (see the sketch after this list).
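To make the switching-cost point concrete, here is a minimal sketch of calling two providers through one OpenAI-compatible client. The base URLs and model names are illustrative assumptions rather than recommendations; the point is that converging APIs reduce a provider switch to a configuration change.

```python
# Minimal sketch of provider switching via an OpenAI-compatible client.
# Base URLs and model names below are illustrative assumptions.
from openai import OpenAI

PROVIDERS = {
    "openai":  {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "mistral": {"base_url": "https://api.mistral.ai/v1", "model": "mistral-large-latest"},
}

def ask(provider: str, prompt: str, api_key: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching providers is a one-line change:
# print(ask("mistral", "Summarise this week's AI news.", api_key="..."))
```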

2. New frontier models: Mistral 3 and Runway Gen‑4.5 raise the bar

Mistral 3: Open, multimodal and production‑ready

French startup Mistral AI launched Mistral 3, a new family of open‑weight models that aim squarely at both Big Tech’s proprietary models and the wider open‑source ecosystem. Mistral AI

From the company’s technical announcement:

  • Model lineup: three “Ministral 3” dense models (3B, 8B, 14B parameters) plus Mistral Large 3, a sparse Mixture‑of‑Experts model with 41B active parameters out of 675B total, all under Apache 2.0 licensing. Mistral AI
  • Multimodal & multilingual: native support for text + image understanding and strong performance across 40+ languages, with Large 3 debuting near the top of major open‑source leaderboards. Mistral AI
  • Edge and offline focus: TechCrunch reports that Ministral models can run on a single GPU and are explicitly targeted at laptops, robots, drones and other edge devices, giving enterprises more control over data residency and latency. TechCrunch
  • Cloud distribution: The full family is already available via Azure Foundry, Amazon Bedrock, IBM WatsonX, Modal, Hugging Face and others, making it unusually easy to adopt on day one (a minimal loading sketch follows this list). Mistral AI
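As a rough illustration of what open weights on day one means in practice, the sketch below loads a small instruction-tuned checkpoint with Hugging Face transformers. The repository ID is an assumed placeholder based on Mistral’s existing naming conventions, not a confirmed identifier for this release.

```python
# Hypothetical sketch: running an open-weight Ministral-class model locally.
# The repo ID is an assumed placeholder; the actual release name may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mistralai/Ministral-8B-Instruct"  # assumption, not confirmed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```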

Why it matters:
Mistral 3 is one of the strongest arguments yet that open‑weight frontier models can compete with closed models on quality while offering better transparency, auditability and customisation. Expect more governments and regulated industries to treat models like Mistral 3 as viable “default choices” when data governance is non‑negotiable.


Runway Gen‑4.5: AI video leaps ahead of Sora 2 and Veo 3

On the generative media side, Runway rolled out Gen‑4.5, its latest text‑to‑video AI model—and it quickly grabbed the top spot in several independent benchmarks.

Highlights:

  • Runway says Gen‑4.5 delivers “cinematic and highly realistic outputs” and claims “unprecedented physical accuracy and visual precision” in motion, fluids and object interactions. The Verge
  • On the Artificial Analysis / Video Arena leaderboard, Gen‑4.5 currently sits at #1 with 1,247 Elo, beating Google’s Veo 3 and OpenAI’s Sora 2 Pro in blind tests (the rating mechanics are sketched after this list). The Tech Buzz
  • The model is tuned for controllability (camera moves, duration, styles) while maintaining similar latency and cost to its predecessor Gen‑4. The Verge
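For readers unfamiliar with how arena leaderboards produce figures like “1,247 Elo”, the sketch below shows the standard Elo update applied to blind pairwise votes. The K-factor and starting ratings are conventional defaults, not Video Arena’s published parameters.

```python
# Standard Elo update as used, in some variant, by arena-style leaderboards.
# K-factor and starting ratings are conventional defaults, not Video Arena's.
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated ratings after one blind A-vs-B vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    delta = k * (s_a - e_a)
    return r_a + delta, r_b - delta  # zero-sum: winner gains what loser drops

# Example: a 1,200-rated model beats a 1,250-rated one in a blind test.
print(elo_update(1200, 1250, a_won=True))
```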

Why it matters:
AI video is shifting from novelty clips to production‑ready footage. For creators and brands, this week’s results suggest:

  • Short‑form ads, animatics and concept visuals can be done in‑house, fast, with quality good enough for many campaigns.
  • Legal and ethical questions around synthetic video—labelling, consent, and deepfake controls—will get more urgent as “AI vs real” becomes harder to spot.

3. Big Tech power plays: Apple reshuffles AI leadership, Meta returns to news, EU probes WhatsApp

Apple names a new VP of AI

Apple, widely seen as a late mover in AI, named Amar Subramanya as its new Vice President of AI, replacing long‑time AI leader John Giannandrea. Reuters

According to Reuters:

  • Subramanya joins from Microsoft, where he was a corporate VP of AI, and previously spent 16 years at Google leading engineering for the Gemini assistant. Reuters
  • He will oversee foundation models and ML research, reporting to software chief Craig Federighi, while Giannandrea will stay on as an adviser until his planned 2026 retirement. Reuters

This is Apple signalling that its long‑teased next‑generation Siri and on‑device AI features are a strategic priority, even if major upgrades are not expected to land until 2026.


Meta signs AI licensing deals with major news publishers

Meta spent much of 2023–24 backing away from news. This week, it did the opposite.

The company announced new AI data licensing deals with outlets including USA Today, CNN, Fox News, People Inc., The Daily Caller, The Washington Examiner and Le Monde, enabling its Meta AI chatbot to deliver “real‑time” news answers via links to partner content. Reuters

Meta says users asking news‑related questions will now see more diverse, timely sources in AI responses, as it races to keep up with ChatGPT, Gemini and Perplexity. Reuters


At the same time, the EU opens an antitrust investigation into WhatsApp’s AI rules

Just as Meta was touting new media deals, the European Commission launched a formal antitrust inquiry into whether Meta is unfairly restricting third‑party AI services from integrating with the WhatsApp Business Solution. Pymnts

According to Competition Policy International:

  • A new Meta policy reportedly bars AI‑based services from using WhatsApp Business if AI is “central” to their product, while Meta’s own AI assistants remain allowed.
  • EU officials worry this could shut out competing AI chatbots from one of Europe’s most important messaging platforms, potentially breaching EU dominance rules and the Digital Markets Act. Pymnts

Why it matters:
Within one week, Meta both bought legitimacy via licensing news content and invited more scrutiny for allegedly gatekeeping AI access on WhatsApp. The broader trend is clear: regulators increasingly see messaging apps and AI assistants as critical “chokepoints” in digital markets.


4. Governments roll out AI strategies, agentic tools and “Bills of Rights”

US health agencies double down on AI

Two major US health regulators made AI moves this week:

  1. HHS (U.S. Department of Health and Human Services) released a strategy to expand its use of AI across public‑health programs, emphasising responsible deployment, bias mitigation and transparency. AP News
  2. The FDA announced an agency‑wide deployment of an “agentic AI” platform for internal staff, building on its earlier LLM assistant, Elsa. U.S. Food and Drug Administration

The FDA says the new system will let employees build AI workflows to help with multistep tasks: meeting prep, pre‑market reviews, post‑market surveillance, inspections and administrative work. The tools run in a high‑security GovCloud environment and do not train on sensitive regulatory data, addressing concerns from industry about data reuse. U.S. Food and Drug Administration
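The FDA has not published implementation details beyond the announcement, but the general shape of an “agentic” workflow is easy to illustrate: each step’s output becomes the next step’s input, with the model invoked once per step. The sketch below is a conceptual toy, not the FDA system; the step names are invented examples.

```python
# Conceptual toy of an agentic multistep workflow: each step's output feeds the
# next prompt. This illustrates the pattern only; it is not the FDA's platform.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    template: str  # "{context}" is replaced with the previous step's output

def run_workflow(llm: Callable[[str], str], steps: list[Step], context: str) -> str:
    for step in steps:
        context = llm(step.template.format(context=context))
        print(f"[{step.name}] complete")
    return context

# Invented example steps, wired to a stub model for demonstration:
steps = [
    Step("summarise", "Summarise this submission: {context}"),
    Step("flag-issues", "List potential review issues in: {context}"),
    Step("draft-memo", "Draft a meeting-prep memo from: {context}"),
]
stub_llm = lambda prompt: f"<model output for: {prompt[:40]}...>"
print(run_workflow(stub_llm, steps, "Raw submission text..."))
```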


AI Civil Rights Act reintroduced in Congress

On Capitol Hill, Rep. Ayanna Pressley and Sen. Ed Markey, joined by other Democrats, reintroduced the Artificial Intelligence (AI) Civil Rights Act. Ayanna Pressley

The bill would:

  • Require testing and auditing of AI systems used in high‑stakes decisions (jobs, housing, credit, education, health care); one common disparity check is sketched after this list.
  • Prohibit discriminatory or biased automated decision‑making, and increase transparency around algorithmic use. Ayanna Pressley
  • Give regulators and affected individuals clearer tools to hold companies accountable when AI systems “supercharge bias” and civil‑rights violations. Ayanna Pressley
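The bill text does not prescribe specific metrics, but auditing mandates like this are commonly operationalised with disparity measures. As one hedged example, the sketch below computes the disparate‑impact ratio (the “four‑fifths rule” familiar from employment law); the data and the 0.8 threshold are illustrative conventions, not statutory requirements.

```python
# Sketch of one audit metric a testing mandate could cover: the
# disparate-impact ratio over binary decisions (the "four-fifths rule").
# Data and the 0.8 threshold are illustrative conventions, not statutory text.
def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Example: 1 = approved, 0 = denied, for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% approval
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% approval
ratio = disparate_impact_ratio(group_a, group_b)
print(f"ratio = {ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```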

The proposal is strongly backed by civil‑rights groups, labour unions and digital‑rights NGOs, who argue that many AI tools “bake in” historical discrimination found in underlying data. Ayanna Pressley


Florida’s proposed “AI Bill of Rights”

In a parallel state‑level move, Florida Governor Ron DeSantis proposed a state “AI Bill of Rights” package that would: Pymnts

  • Ban using a person’s name, image and likeness in AI systems without consent.
  • Require clear disclosure when people interact with AI (for example, a chatbot).
  • Restrict the sale of personally identifying information used for AI systems.
  • Limit certain uses of AI in insurance and create parental controls for AI chatbots used by minors. Pymnts

It also includes restrictions on data‑center development (costs, land use, foreign ownership), reflecting growing local pushback against AI’s physical and environmental footprint. Pymnts

Why it matters:
This week showed a clear pattern: AI lawmaking is fragmenting across levels of government. Federal regulators are deploying AI internally and pushing sector‑specific rules, while Congress and states propose more general rights‑based frameworks. For organisations deploying AI, compliance will increasingly mean navigating overlapping obligations rather than one unified standard.


5. Safety and risk: AI labs get a failing grade, MIT expands risk mapping

AI Safety Index: Big labs “far short” of global standards

A new edition of the AI Safety Index, produced by the Future of Life Institute, concluded that the safety policies of major AI labs—OpenAI, Anthropic, xAI, Meta and others—remain “far short of emerging global standards.” Reuters

Reuters reports that:

  • No major company was found to have a credible plan for controlling superintelligent systems, despite many actively pursuing such capabilities. Reuters
  • The report highlights gaps in model evaluations, incident reporting, compute governance and commitments to slow or pause scaling if risks rise. Reuters

Companies pushed back, arguing they invest heavily in safety and share research to raise standards, but the index will feed ongoing debates about whether voluntary frameworks are enough.


MIT’s AI Risk Repository gets a major upgrade

Separately, MIT’s AI Risk Repository—a structured database of AI risk frameworks and taxonomies—released Version 4, adding nine new frameworks and about 200 new risk categories, for a total of over 1,700 coded risks (a minimal query sketch follows the list below). AI Risk Repository

The update integrates work on:

  • Risks from agentic and embodied AI (robots, drones, other physical systems).
  • Taxonomies of LLM failures and misuse, including jailbreaks, disinformation, privacy leaks and economic harms. AI Risk Repository
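Because the repository is published as a structured, coded dataset, it lends itself to programmatic filtering. The sketch below is a hypothetical pandas query; the file name and column names are assumptions about the schema, not the repository’s exact field names.

```python
# Hypothetical sketch: filtering a risk-taxonomy export with pandas.
# File name and column names are assumptions, not the repository's exact schema.
import pandas as pd

risks = pd.read_csv("ai_risk_repository_v4.csv")  # assumed local export

# e.g. pull every coded risk relating to agentic or embodied systems
agentic = risks[risks["domain"].str.contains("agentic|embodied", case=False, na=False)]
print(f"{len(agentic)} of {len(risks)} coded risks match")
print(agentic[["risk_id", "domain", "description"]].head())
```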

Why it matters:
Taken together, the Safety Index and MIT repository show two sides of AI governance:

  • Civil society and academics are building increasingly detailed maps of what can go wrong.
  • Major labs still lag in adopting those maps as binding constraints on their own behaviour.

Expect these documents to be cited in future regulation, corporate risk registers and even shareholder resolutions.


6. AI and policing: Canadian city trials facial recognition on body cams

In one of the week’s most controversial deployments, police in Edmonton, Canada began piloting AI‑powered facial recognition on body cameras supplied by Axon. AccessWdun

According to the Associated Press:

  • The system compares faces captured by officers’ cameras against a “watch list” of wanted or high‑risk individuals (the basic matching arithmetic is sketched after this list).
  • The pilot currently runs only in daylight hours and involves around 50 officers; matches are reviewed later at the station rather than in real time. AccessWdun
  • Civil‑liberties advocates warn that body‑cam facial recognition has historically shown racial and gender bias, and worry about “mission creep” from limited pilots to broad public‑space surveillance. AccessWdun
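Axon has not published its matching pipeline, but watchlist systems of this kind typically embed each face as a vector and compare probes by cosine similarity against enrolled embeddings, flagging anything above a tuned threshold. A generic numpy sketch follows, with the embedding size and threshold as common conventions rather than Axon’s parameters.

```python
# Generic sketch of watchlist matching via cosine similarity of face embeddings.
# The 128-d embeddings and 0.6 threshold are common conventions, not Axon's.
import numpy as np

rng = np.random.default_rng(0)
watchlist = rng.normal(size=(50, 128))                    # 50 enrolled faces
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

def best_match(probe: np.ndarray, threshold: float = 0.6):
    """Return (index, similarity) of the closest enrolled face, or None."""
    probe = probe / np.linalg.norm(probe)
    sims = watchlist @ probe                              # cosine similarities
    idx = int(np.argmax(sims))
    return (idx, float(sims[idx])) if sims[idx] >= threshold else None

probe = watchlist[7] + rng.normal(scale=0.1, size=128)    # noisy re-capture
print(best_match(probe))   # likely (7, ~0.9); None means "no hit"
```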

This comes after years in which tech companies paused selling real‑time face‑recognition tools to police due to bias and privacy concerns, and after several US states and cities explicitly restricted such use. AccessWdun

Why it matters:
AI is moving from analysis of stored footage to live operational policing, raising hard questions about due process, consent and error rates. Even if the tech is more accurate now, the governance and oversight structures in many jurisdictions are still playing catch‑up.


7. Copyright and training data: The New York Times sues Perplexity AI

Legal battles over AI training data escalated again this week as The New York Times filed a lawsuit against Perplexity AI in federal court. Reuters

The suit alleges that Perplexity:

  • Copied, distributed and displayed millions of NYT articles without permission to train and operate its AI tools, including paywalled material.
  • Generated fabricated content (“hallucinations”) while using NYT trademarks and styling, potentially misleading users and harming the brand. Reuters

Perplexity says it indexes public web pages rather than scraping content to train foundation models, but it faces similar lawsuits from other publishers and reference works.

At the same time, Meta’s new licensing deals with news outlets, and previous deals between OpenAI and publishers like the Wall Street Journal and Financial Times, highlight an emerging split:

  • Some AI companies are paying to license news and reference content.
  • Others rely on fair‑use arguments and public scraping, inviting litigation.

Why it matters:
The outcome of NYT v. Perplexity will shape how copyright law applies to AI training and AI‑native search products. If courts side strongly with publishers, future foundation models may rely more on licensing, synthetic data or user‑provided corpora, and less on unconsented scraping.


8. Climate impact: New research suggests AI’s footprint is smaller than feared

A widely shared article this week reported that AI’s climate impact may be much smaller than many people assume, based on research from the University of Waterloo and Georgia Tech. ScienceDaily

Key findings from the study and related coverage:

  • AI’s total energy use and greenhouse‑gas emissions are currently a tiny fraction of global totals, roughly comparable to the electricity consumption of a small country. University of Waterloo
  • However, AI can double local energy demand around major data‑center hubs, stressing regional grids and water resources. University of Waterloo
  • The authors argue AI could actually support climate and economic progress if used to optimise energy systems, materials and logistics—provided those local impacts are managed. University of Waterloo

Why it matters:
The research doesn’t give AI a free pass, but it does challenge the narrative that “AI is worse than aviation” for the climate. For policymakers and sustainability teams, the right framing is “local intensity, global modesty”:

  • Focus on where data centres are built, how they’re powered and how they use water.
  • Use AI itself to unlock efficiency and decarbonisation, while setting guardrails on unconstrained compute growth.

9. Business sentiment: Between automation fears and productivity optimism

Two high‑profile CEOs weighed in on AI’s impact on work and productivity:

  • Nvidia CEO Jensen Huang reportedly told employees to use AI “as much as possible,” implying that those who ignore it risk being left behind—a sentiment captured in coverage emphasising how deeply Nvidia is integrating AI into its own workflows. TechRadar
  • In an interview highlighted by Fortune, JPMorgan CEO Jamie Dimon reiterated his view that AI could eventually shorten the workweek, even as he warned that many roles will change or disappear and that retraining is essential. Fortune

Why it matters:
The tone from top executives is converging on two ideas:

  1. Adopt AI internally or fall behind your competitors.
  2. Long‑term, AI may enable less routine work and more leisure—but only if reskilling, social safety nets and productivity gains are shared broadly.

For workers and managers, this translates into very practical advice: build AI literacy now, especially around prompting, data hygiene, and workflow automation, rather than waiting for “fully baked” solutions.


10. What this week tells us about where AI is headed

Taken together, the AI news from 1–7 December 2025 points to a few clear themes:

  1. Competition is intensifying at the top.
    OpenAI’s “code red” and Google’s Gemini 3 rivalry are pushing faster releases, while open‑weight challengers like Mistral 3 show that the future won’t be purely closed‑source. The Guardian
  2. Video and multimodal AI are maturing fast.
    Runway’s Gen‑4.5 and Sora/Veo competition indicate that high‑fidelity, controllable AI video is starting to look like a standard capability, not a niche experiment. The Verge
  3. Regulation is shifting from abstract principles to concrete rules.
    From the AI Civil Rights Act and Florida’s AI Bill of Rights to EU antitrust probes and Google’s antitrust remedies, AI is now central to competition law, data protection and civil‑rights law, not just tech policy. PYMNTS.com
  4. Safety and governance expectations are rising.
    The AI Safety Index, MIT’s expanded risk taxonomy and growing scrutiny of police and surveillance deployments show that “move fast and break things” is no longer acceptable for high‑impact AI systems. AccessWdun
  5. The narrative on AI and climate is becoming more nuanced.
    Research suggests AI’s global climate impact is currently modest but locally intense, shifting the policy conversation from outright bans to better siting, cleaner energy and smarter efficiency gains. ScienceDaily

AI is now touching every layer of the stack: chips, models, media, law, climate, labour and policing. If this week is any indication, the coming months will be less about asking whether AI will transform industries and more about deciding on whose terms that transformation happens.
