
June 2025 AI News Roundup: Breakthroughs, Surprises, and Global Developments

June 2025 was a landmark month for artificial intelligence, with major advances and unexpected events across generative AI, robotics, healthcare, defense, regulation, and business. From next-generation AI models and robots entering factories to new healthcare AI tools and calls for oversight, the AI landscape saw rapid evolution. Below is a comprehensive news-style report summarizing the month’s key AI updates and surprises.

Generative AI Milestones and New Models

OpenAI signaled a new leap in generative AI with CEO Sam Altman announcing that GPT-5 is on the way. On an official podcast, Altman said GPT-5 is expected to launch in summer 2025 (no exact date yet) adweek.com. Early testers report it is “materially better” than GPT-4 adweek.com, suggesting a significant upgrade in capability. OpenAI is also exploring new monetization: Altman revealed he’s “not totally against” ads in ChatGPT, but stressed that altering a model’s answers based on advertiser payments would be “a trust-destroying moment” adweek.com. Any ads would likely appear outside the chatbot’s answers to avoid compromising integrity, he noted.

Meanwhile, competition in generative AI is heating up on all fronts. Creative AI company Midjourney unveiled its first text-to-video generation system, Model V1, allowing users to create short animated clips from prompts crescendo.ai. Early users say the 16-second videos show advanced control over motion and style, putting Midjourney into contention with Runway and OpenAI’s own experimental video model (“Sora”) crescendo.ai. On the open-source front, Chinese startup MiniMax launched its new M1 large model as a challenger to proprietary systems. MiniMax claims M1 achieves cutting-edge performance with far less computing power and has released it under an Apache 2.0 license to encourage broad use in industry winbuzzer.com. And while Meta’s next-gen model LLaMA 4 “Behemoth” has been delayed to late 2025 due to performance issues winbuzzer.com winbuzzer.com, the broader ecosystem is buzzing with new entrants and upgrades.
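For readers who want to experiment with open-weight releases like M1, the typical entry point is the Hugging Face transformers library. The sketch below is a minimal, generic example of that pattern; the repository ID is an assumption for illustration, and a model of this scale would in practice be run through a hosted endpoint or a sharded, quantized deployment rather than on a single machine.

```python
# Minimal sketch: querying an Apache-2.0-licensed open-weight model through the
# Hugging Face `transformers` library. The model ID below is an illustrative
# assumption, not a confirmed repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "MiniMaxAI/MiniMax-M1"  # hypothetical repo name

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",       # spread layers across available accelerators
        torch_dtype="auto",      # use the checkpoint's native precision
        trust_remote_code=True,  # custom architectures often ship their own code
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize the key idea behind mixture-of-experts models."))
```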

Robotics and Automation Breakthroughs

In June, AI took bold steps from the digital realm into the physical world. Google DeepMind introduced Gemini Robotics On-Device, a robotics AI model that runs locally on robots rather than in the cloud deepmind.google. This efficient vision-language-action model demonstrates general-purpose dexterity – following natural language instructions and performing complex tasks like unzipping bags and folding clothes entirely on-board the robot deepmind.google. By operating without an internet connection, it promises low latency and reliability for robots even in environments with poor connectivity. Google is providing a toolkit for developers to fine-tune this model on new tasks with as few as 50 demonstrations, pointing toward more adaptable and autonomous robots deepmind.google deepmind.google.
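Google has not published the internals of this fine-tuning workflow, but adapting a pretrained policy to a new task from roughly 50 demonstrations conceptually resembles supervised behavior cloning. The PyTorch sketch below is a simplified, generic illustration of that idea; the shapes, network, and synthetic data are assumptions, not Google’s API.

```python
# Illustrative behavior-cloning loop: adapt a small policy head to a new
# manipulation task from a handful of (observation, action) demonstrations.
# All shapes, data, and the network are simplified assumptions -- this is NOT
# the Gemini Robotics On-Device toolkit, which has not been published in this form.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, NUM_DEMOS, STEPS_PER_DEMO = 256, 7, 50, 40

# Synthetic stand-in for ~50 recorded demonstrations.
observations = torch.randn(NUM_DEMOS * STEPS_PER_DEMO, OBS_DIM)
actions = torch.randn(NUM_DEMOS * STEPS_PER_DEMO, ACT_DIM)

policy = nn.Sequential(  # small task-specific head on top of frozen features
    nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    pred = policy(observations)    # predict actions for each observation
    loss = loss_fn(pred, actions)  # imitate the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```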

Robots are also entering factory lines. In a notable industry first, Taiwan’s Foxconn and U.S. chipmaker Nvidia announced plans to deploy humanoid robots on an electronics production line reuters.com. Sources say Foxconn’s new Houston plant, slated to start building Nvidia’s AI servers next year, will use human-like robots to assist in assembly and material handling. If finalized, this would mark the first time Nvidia products are built with humanoid robot assistance, and Foxconn’s first use of such robots in manufacturing reuters.com. The companies aim to have the robots working by Q1 2026, after training them to do tasks like picking up parts, inserting cables, and other routine assembly jobs reuters.com reuters.com. Observers call it a milestone that could transform manufacturing, as firms test whether bipedal or wheeled humanoid machines can augment or replace human factory labor.

On a lighter note, AI-driven robots even took to the sports court. Researchers in China unveiled a four-legged robot that can play badminton with humans, using vision and real-time AI decision-making to rally and return shots crescendo.ai. The robot’s ability to anticipate moves and adjust strategy showcases the growing sophistication of AI in physical tasks. While just a research demo, it hints at future human-robot collaboration in recreation, training, and beyond – a surprising glimpse of robots mastering agility and hand-eye coordination in dynamic environments crescendo.ai.

AI in Healthcare and Biotech

Healthcare saw significant AI deployments and guidelines in June. The U.S. Food and Drug Administration (FDA) launched its first in-house AI system, a large language model named “Elsa.” According to the FDA, Elsa is designed to summarize adverse drug event reports, compare product labels, and even generate computer code for drug data analysis – essentially helping regulators sift insights from vast medical datasets healthcare-brew.com. This marks the FDA’s inaugural step into using AI for its own operations, part of a broader strategy to embrace new technology for public health oversight.
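The FDA has not described Elsa’s interface publicly, but the underlying pattern, prompting a large language model to summarize free-text adverse event reports, is straightforward to illustrate. The sketch below uses the generic OpenAI Python client purely as an example; the model name, prompt, and report text are placeholders, and this is not the FDA’s system.

```python
# Rough illustration of the pattern behind a tool like "Elsa": prompting an LLM
# to summarize a free-text adverse drug event report for a reviewer.
# This is NOT the FDA's system; client, model, and report are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = (
    "Patient, 64F, started Drug X 10 mg daily. Day 3: dizziness and nausea. "
    "Day 5: transient elevated liver enzymes; drug discontinued, symptoms resolved."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You summarize adverse drug event reports for regulatory reviewers. "
                    "List suspected drug, reactions, severity, and outcome."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```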

Medical authorities also pushed for safer, more transparent AI. On June 11, the American Medical Association (AMA) adopted a new policy urging that clinical AI tools be “explainable.” The AMA wants developers to provide clinicians with clear information on an AI’s safety, efficacy, and workings healthcare-brew.com. The policy reflects growing concern that doctors need to understand AI recommendations – not just take algorithmic output at face value – to ensure patient safety. It’s a call for transparency as AI diagnostic and decision tools spread in hospitals.

In the private sector, healthcare companies expanded AI use for patient services. For example, insurance giant Cigna rolled out a virtual assistant chatbot on its app to help patients navigate insurance coverage and care options healthcare-brew.com. Digital health startups are also integrating AI: Hinge Health introduced an AI-powered platform to match patients with orthopedic specialists, and mental health app Wysa launched an “AI Gateway” chatbot to facilitate conversations between therapy providers and insurers healthcare-brew.com. These tools aim to streamline admin tasks and personalize care, illustrating AI’s growing role in patient experience.

AI is also accelerating drug discovery. In early June, MIT and biotech firm Recursion released Boltz-2, an open-source AI model to aid pharmaceutical research healthcare-brew.com. Boltz-2 predicts how well a proposed small-molecule drug will work, helping scientists design more effective medicines. By open-sourcing this tool, the researchers hope to speed up drug development across the industry. It’s one of several efforts where AI is being leveraged to analyze biochemical data and discover new treatments faster and at lower cost.

Defense, Security and AI Misuse

AI’s influence on national security and crime made headlines – in ways both strategic and troubling. In Eastern Europe’s conflict zone, reports emerged that Ukraine deployed AI-enhanced drone swarms in a covert operation against a high-value military target. The mission, dubbed “Operation Spider Web,” used semi-autonomous drones swarming a Russian long-range bomber, with each drone reportedly costing only about as much as an iPhone crescendo.ai. If confirmed, this indicates a new era of low-cost, AI-driven warfare, where relatively cheap autonomous weapons can challenge expensive military assets. Defense analysts noted this could reshape military tactics, as AI-guided drones enable precision strikes without risking pilots, potentially giving smaller forces an asymmetric advantage.

Western tech firms are also increasingly involved in defense. In fact, Meta (Facebook’s parent) is reportedly developing AI-powered augmented reality combat goggles for the U.S. military crescendo.ai. These specialized AR headsets would provide soldiers with live battlefield data and decision support via AI, merging consumer AR tech with military needs. The project – if it proceeds – underscores how Big Tech is venturing into defense tech, a move that blurs lines between Silicon Valley and the Pentagon. It also reflects the military’s growing interest in leveraging advanced AI (like computer vision and AR) to enhance soldiers’ situational awareness and communication in the field.

On the domestic security front, AI is being used to combat crime – and to commit new crimes. In Canada, a hospital in Nova Scotia installed an AI-powered weapon detection system at its entrances to prevent violence crescendo.ai. The system uses computer vision to spot firearms or knives on visitors in real time, enabling non-intrusive screening and quick alerts to security. This trial aims to boost hospital safety amid rising concerns about violence in healthcare settings, and it shows how AI can augment public safety without traditional metal detectors.
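The vendor behind the hospital system was not named, but the general technique is well established: run an object detector over camera frames and raise an alert when a weapon class is recognized. The sketch below illustrates that idea with the open-source ultralytics YOLO API; the fine-tuned weights file and class names are hypothetical, and this is not the hospital’s actual product.

```python
# Generic sketch of camera-based weapon screening: run an object detector over
# video frames and alert on weapon-class detections. An illustration of the
# technique only, not the Nova Scotia hospital's vendor system.
# "weapons.pt" is a placeholder for a detector fine-tuned on weapon imagery.
import cv2
from ultralytics import YOLO

model = YOLO("weapons.pt")        # hypothetical fine-tuned detector
WEAPON_CLASSES = {"handgun", "rifle", "knife"}

cap = cv2.VideoCapture(0)         # entrance camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        if label in WEAPON_CLASSES and float(box.conf) > 0.6:
            print(f"ALERT: possible {label} detected (confidence {float(box.conf):.2f})")
cap.release()
```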

Conversely, a disturbing case in the U.S. illustrated AI’s dark side in crime: A 17-year-old boy became a victim of an “AI sextortion” scam. Criminals used AI to generate fake nude images of him and blackmailed him via text – a psychological attack that tragically led to the teen’s suicide in February. In June, following public outcry, U.S. lawmakers seized on the case to advance the “Take It Down Act,” a bill targeting the misuse of generative AI in blackmail and explicit content creation crescendo.ai. The proposed law would strengthen penalties for AI-assisted sexual extortion and improve processes for taking down AI-generated illicit images. This heartbreaking incident and the ensuing bipartisan action underscore the urgent need for safeguards against AI abuse in cybercrime.

Major Business Moves and Talent Wars in AI

The business of AI in June 2025 was defined by huge investments, talent shuffles, and strategic alliances, as companies jockey for leadership in the booming AI economy. Perhaps most eye-opening was Sam Altman’s accusation that Meta is trying to poach OpenAI’s top engineers with $100 million signing bonuses winbuzzer.com. In a mid-June interview, the OpenAI CEO claimed Meta (Facebook’s parent) has offered astronomical compensation packages to lure away his talent, following Meta’s well-publicized struggles to keep up in AI. “Meta thinks of us as their biggest competitor,” Altman remarked, noting that none of his “best people” have accepted such offers to his knowledge winbuzzer.com winbuzzer.com. The allegation lays bare the high-stakes talent war in AI, where experienced researchers are now arguably as valuable as star athletes, commanding contracts in the nine figures.

Meta’s aggressive recruitment coincides with its big spending to catch up technologically. In mid-June the company invested $14 billion for a 49% stake in Scale AI, a leading data-labeling firm winbuzzer.com. The deal, essentially buying half of a crucial AI data pipeline, was paired with installing Scale’s founder as head of a new Meta “superintelligence” team winbuzzer.com. It’s a bold bid to secure more training data and talent. However, this move had ripple effects – within a day, reports surfaced that Google (one of Scale’s largest customers) planned to cut ties, worried that Meta’s ownership would compromise Scale’s neutrality winbuzzer.com. The episode highlights how major AI labs now tussle over data and talent as strategic assets, even if it strains industry partnerships. It also underlines the skyrocketing costs: recent analyses estimate training frontier AI models now costs on the order of $100–200 million each winbuzzer.com. In this context, spending billions – or offering $100M to a single researcher – starts to look like a calculated (if breathtaking) bet on future dominance.
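The $100–200 million figure is consistent with a simple back-of-envelope calculation: accelerator count times run length times an effective hourly rate, adjusted for cluster utilization. Every number in the sketch below is an illustrative assumption, not a disclosed figure.

```python
# Back-of-envelope estimate of a frontier training run's compute bill.
# All inputs are illustrative assumptions, not disclosed figures.
gpus = 16_000          # accelerators reserved for the run
training_days = 90     # wall-clock duration of the run
hourly_rate = 2.50     # effective $/GPU-hour (amortized hardware or cloud pricing)
utilization = 0.60     # realistic cluster efficiency

gpu_hours = gpus * training_days * 24
compute_cost = gpu_hours * hourly_rate / utilization
print(f"{gpu_hours:,.0f} GPU-hours -> ~${compute_cost/1e6:,.0f}M for compute alone")
# ~34.6M GPU-hours -> roughly $144M under these assumptions, squarely in the
# commonly cited $100-200M range before counting staff, data, and experiments.
```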

Amid this arms race, new players and partnerships emerged. Former OpenAI CTO Mira Murati made waves by raising a $2 billion funding round for her new venture, Thinking Machines Lab, at a whopping $10 billion valuation crescendo.ai. Backed by top VCs, the startup aims to build advanced agentic AI systems for autonomous reasoning and decision-making, indicating investor confidence in next-generation AI beyond chatbots. In the enterprise arena, India’s IT giant TCS partnered with Microsoft to train 100,000 employees in Azure OpenAI services and develop AI-first business solutions crescendo.ai. And in education tech, British publisher Pearson teamed up with Google to bring AI tutoring features into classrooms. The multi-year deal will use Google’s AI models to create personalized learning tools that adapt to each student’s needs, while helping teachers track progress reuters.com reuters.com. Pearson’s CEO said AI can replace one-size-fits-all lessons with tailored learning paths for every child reuters.com. These collaborations show how traditional industries are embracing AI through alliances, and how AI initiatives are attracting record funding across the board.
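Upskilling programs like the TCS–Microsoft effort center on the Azure OpenAI service. As a minimal hedged example of what trainees work with, the snippet below calls a deployed chat model through the official openai Python SDK; the endpoint, deployment name, and API version are placeholders to be replaced with a real resource’s values.

```python
# Minimal example of calling a chat model deployed in Azure OpenAI via the
# official `openai` Python SDK. Endpoint, deployment name, and API version
# are placeholders -- substitute your own Azure resource's values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",                               # placeholder API version
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # your Azure deployment name, not the raw model ID
    messages=[
        {"role": "system", "content": "You draft concise summaries of customer tickets."},
        {"role": "user", "content": "Summarize: order #1234 arrived late and the box was damaged."},
    ],
)
print(response.choices[0].message.content)
```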

Even the fundamental hardware alliances are shifting. In a surprising twist, OpenAI has begun renting AI chip capacity from rival Google to power ChatGPT and other services reuters.com. A Reuters source confirmed that OpenAI, known for using primarily Microsoft’s Azure cloud and Nvidia GPUs, is now also utilizing Google’s advanced TPU (Tensor Processing Unit) chips via Google Cloud reuters.com reuters.com. This marks the first meaningful use of non-Nvidia processors by OpenAI and a rare collaboration between two AI competitors. The arrangement is seen as mutually beneficial: OpenAI gains additional computing power (potentially at lower cost) to meet surging demand, while Google wins a high-profile customer for its once-internal TPUs (which it has also leased to Apple, Anthropic, and others) reuters.com reuters.com. It also signals that even AI’s biggest players must sometimes cooperate behind the scenes to overcome chip shortages and infrastructure needs in the race to scale AI products.

The impact of AI on jobs became a focal point as well. Amazon made headlines by openly acknowledging that generative AI will eliminate some white-collar jobs at the company. CEO Andy Jassy announced that advances in AI and automation are expected to reduce Amazon’s corporate headcount over the coming years crescendo.ai. He noted that while many roles will be replaced by AI-driven systems, other roles will evolve and require employees to reskill and work alongside AI crescendo.ai. Amazon urged its workforce to embrace AI tools and training, echoing a broader trend in which companies restructure and workers retrain due to AI. This frank admission follows similar moves in media (for instance, Insider recently cut 21% of its staff while investing in AI content generation), and underscores how AI is reshaping labor markets. As Nvidia’s CEO Jensen Huang put it bluntly at a late-May event, “You’re going to lose your job to someone who uses AI” – a warning that those who fail to adapt will be left behind crescendo.ai.

AI Ethics, Policy and Oversight

With AI advancing so rapidly, ethics and regulation were hot topics in June. A coalition of tech accountability groups launched “The OpenAI Files,” an initiative to shine light on the secretive startup that sparked the generative AI boom. This project, led by the Midas Project and Tech Oversight Project, is compiling documentation of governance and ethical concerns at OpenAI – from its shift from non-profit roots to for-profit, to its handling of safety measures and investor influence techcrunch.com techcrunch.com. The OpenAI Files call for greater transparency and accountability in how companies pursue artificial general intelligence. They highlight worries that profit pressure has led to “rushed safety evaluation” and a “culture of recklessness” at OpenAI techcrunch.com. Notably, they even point to past internal turmoil – such as an attempted staff ouster of CEO Sam Altman in 2023 – as evidence that all is not well with AI leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” former OpenAI research chief Ilya Sutskever was quoted as saying techcrunch.com. While OpenAI disputes many of these characterizations, the fact that watchdogs are publishing “dossiers” on AI firms signals the growing demand for oversight in the AI race.

In a similar vein, a group of ex-OpenAI employees went public with a letter accusing the company of sacrificing safety for speed and profit crescendo.ai. They allege that leadership ignored internal ethical concerns and even retaliated against those who raised red flags. The whistleblowers are urging stronger protections for AI researchers who speak up, and calling on regulators to hold AI companies accountable. This adds fuel to ongoing debates in Washington and Brussels over how to regulate AI development without stifling innovation. Indeed, in the U.S., voices across the political spectrum are emphasizing AI leadership and safety. One of former President Trump’s tech advisors warned that America could lose its edge in AI to China within a decade if it doesn’t move more aggressively crescendo.ai. And in Congress, committees have been holding hearings on AI risks ranging from deepfakes to job displacement, exploring new laws to govern AI use.

Europe, meanwhile, is on the cusp of enforcing the world’s first broad AI law – but not without last-minute controversy. The EU’s landmark AI Act, a comprehensive regulatory framework for artificial intelligence, is scheduled to begin taking effect in August 2025. As the deadline nears, industry groups are raising alarms that companies and regulators aren’t ready. In late June, the Computer & Communications Industry Association (CCIA) Europe urged EU leaders to pause the implementation of the AI Act ccianet.org. “Europe cannot lead on AI with one foot on the brake,” said CCIA’s Daniel Friedlaender, warning that with key guidance still unwritten just weeks before the rules kick in, rushing enforcement could “stall innovation altogether” ccianet.org. EU officials have acknowledged some challenges – for example, detailed standards for certain high-risk AI systems are delayed – but so far there is no indication the timeline will shift. The AI Act will impose requirements on AI deemed “high-risk” (like in healthcare or transport), and even restrict some practices like real-time biometric surveillance. European regulators are also establishing an AI Office and expert panel to oversee enforcement. How strictly the EU will enforce the new rules, and how businesses will comply, is a space to watch closely in the coming months.

Legal battles over AI also intensified. In the U.S., OpenAI is facing a courtroom fight with The New York Times over copyright and privacy. In June a federal judge ordered OpenAI to preserve all ChatGPT output logs relevant to the case – even logs that users had requested to delete, and even if data protection laws might normally mandate deletion adweek.com. This unusual order highlights the legal tensions around AI models that were trained on vast internet text (including news content). The Times lawsuit claims ChatGPT outputs unlawfully infringe copyrighted articles; the data preservation demand suggests the court sees the chatbot’s memory as key evidence. OpenAI argued that forcing it to retain users’ deleted chats for litigation is an overreach and plans to appeal adweek.com adweek.com. Beyond copyright, the case raises privacy questions, since ChatGPT interactions often contain sensitive user data. Altman himself commented that complying with the order undermines user privacy and called it a “crazy overreach” by the newspaper adweek.com. The outcome of this battle could set important precedents for how AI companies handle user data and intellectual property.

AI in Society: Empathy and Unexpected Applications

Beyond the headline-grabbing deals and innovations, June 2025 also brought insight into AI’s impact on society and some niche uses. In one surprising study, researchers found that AI might outperform humans in perceived empathy. A psychology experiment reported that participants actually rated AI-generated responses to personal confessions as more caring and empathetic than responses from other humans crescendo.ai. While an AI obviously doesn’t feel emotions, it appears advanced language models can mimic the language of empathy effectively – so much so that people sometimes prefer the chatbot’s comforting words. This finding raises fascinating questions about how humans perceive empathy and whether AI could play a role in counseling or emotional support. It also cautions that people might form attachments to AI “friends” or therapists, underlining the need to ensure these systems are used ethically in sensitive settings.

In the realm of wildlife conservation, AI showed its promise as a force for good. Tech giant Microsoft announced that its AI tools are aiding efforts to save endangered giraffes in Africa crescendo.ai. Conservationists are using Microsoft’s AI vision models to analyze drone and camera footage, automatically identifying and tracking giraffes across large reserves. This provides accurate population counts and movement patterns, helping wildlife organizations protect these animals from poaching and habitat loss. It’s a heartening example of how the same AI technologies driving business can be repurposed to tackle environmental challenges. From monitoring rainforest health to cleaning oceans, “AI for Good” projects like this are proliferating, offering a counterpoint to the doomsday narratives by showing how AI can help solve human and ecological problems.


In summary, June 2025 underscored how deeply and broadly AI is transforming the world. We saw cutting-edge AI models pushing creative and technical boundaries, robots getting smarter and more useful, and AI infiltrating critical domains from healthcare to warfare. Industry heavyweights poured billions into AI, even as concerns about ethics, safety, and regulation mounted. The month’s surprising stories – whether an AI drone swarm in battle, a tragic AI-fueled scam, or a chatbot showing “heart” – highlight that AI’s impacts are no longer theoretical or confined to tech circles; they are here, global, and very real. Policymakers and the public are scrambling to keep up with the pace of change. As the AI revolution charges forward into the second half of 2025, the world is watching to see whether the benefits can be harnessed and the risks managed in a way that will define the future for the better.
