
AI News Today June 28th, 2025: MrBeast Axes AI Thumbnail Tool, Denmark’s Deepfake Law, AI Shopkeeper Fails, and More


MrBeast Scraps AI Thumbnail Generator After Creator Backlash

The world’s biggest YouTuber retreats from an AI tool after criticism. YouTube star MrBeast (Jimmy Donaldson) has removed a generative AI thumbnail maker from his creator platform Viewstats just days after launch, following intense backlash from fellow creators ndtv.com pcgamer.com. The $80/month tool promised to “generate viral thumbnails” by scraping inspiration from any YouTube channel – a feature MrBeast himself admitted “literally feels like cheating” pcgamer.com. Critics blasted the AI for “stealing work from human creators” ndtv.com and accused it of being wasteful and unethical pcgamer.com. Popular YouTuber Jacksepticeye lambasted the idea, tweeting “What the actual f… I hate what this platform is turning into. F AI.” pcgamer.com

Confronted with the uproar, MrBeast U-turned. In a post on X, he acknowledged he “missed the mark” and thought creators “were going to be pretty excited about it” ndtv.com. He announced the AI tool was pulled and replaced with a directory of real thumbnail artists for hire as a gesture of goodwill pcgamer.com. “My goal here is to build tools to help creators, and if creators don’t want the tools, no worries, it’s not that big a deal,” Donaldson said pcgamer.com. MrBeast, who has 385+ million subscribers, added that he doesn’t take his influence lightly and was “deeply sad” to upset the community ndtv.com. The swift reversal earned praise from fans for supporting human artists ndtv.com and underscored the divide in the creator world – some see AI as just another tool, others as outright theft pcgamer.com.

Denmark Moves to Outlaw Deepfake Abuses with Novel Copyright Law

A European first: giving citizens ownership of their face and voice. Denmark’s government unveiled plans to combat AI deepfakes by amending copyright law so that “everybody has the right to their own body, their own voice and their own facial features.” theguardian.com Culture minister Jakob Engel-Schmidt said the bill, backed by nearly 90% of MPs, sends an “unequivocal message” that digital imitations of people without consent won’t be tolerated theguardian.com theguardian.com. The proposal, headed for Parliament this autumn, would explicitly grant individuals copyright over their likeness – believed to be a first-of-its-kind law in Europe theguardian.com. In practice, Danes could demand online platforms remove AI-generated images, videos or audio mimicking them without permission theguardian.com. Even realistic digital re-creations of an artist’s performance would be covered, with violators liable for compensation theguardian.com. Satire and parody are exempt to protect free expression theguardian.com.

The push comes amid exploding deepfake tech that makes it “easier than ever to create a convincing fake” person theguardian.com. Denmark wants to curb the spread of disinformation, harassment, and non-consensual explicit imagery. “Human beings can be run through the digital copy machine and be misused for all sorts of purposes, and I’m not willing to accept that,” Engel-Schmidt told The Guardian theguardian.com. The law would strengthen takedown obligations for tech platforms, with “severe fines” or even EU intervention if they don’t comply theguardian.com. Denmark hopes to set an example for Europe – the minister plans to share the approach during Denmark’s upcoming EU presidency to encourage similar measures abroad theguardian.com.

AI Tokens Slump 29% Despite Web3 Boom in Crypto

Crypto’s AI-themed coins are crashing even as blockchain adoption grows. A new market analysis shows that “AI tokens” have plunged over 29% in value in the past 30 days, decoupling from the broader tech hype ambcrypto.com. The combined market cap of AI-focused cryptocurrencies fell to about $26.7 billion, down nearly one-third ambcrypto.com. For example, Bittensor (TAO) dropped ~29%, Near Protocol (NEAR) 27%, Fetch.ai (FET) ~26%, and Render (RNDR) a whopping 33% ambcrypto.com ambcrypto.com. These steep losses suggest investors are cooling on the AI crypto narrative, even as real-world AI companies (like Nvidia) soar. Indeed, Nvidia’s stock hit record highs this week – normally an AI rally that might lift related tokens – but “that link seems broken”, indicating a decoupling of AI tokens from AI equities ambcrypto.com. Analysts note AI coins now trade more in line with the broader altcoin market than with the fortunes of AI tech leaders ambcrypto.com.
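As a back-of-the-envelope check, the reported ~29% drop and the $26.7 billion post-drop figure imply a combined AI-token market cap of roughly $37–38 billion a month earlier – consistent with the “down nearly one-third” description. A minimal sketch (the 29% and $26.7B numbers come from the article; applying the aggregate percentage to the combined cap is an assumption):

```python
# Back-of-the-envelope check on the AI-token market-cap figures.
# Assumption: the reported ~29% 30-day drop applies to the combined cap.

current_cap_bn = 26.7   # combined AI-token market cap after the drop ($B)
drop_pct = 0.29         # reported 30-day decline

# Implied cap 30 days earlier: current = prior * (1 - drop)
prior_cap_bn = current_cap_bn / (1 - drop_pct)
loss_bn = prior_cap_bn - current_cap_bn

print(f"Implied prior cap: ${prior_cap_bn:.1f}B")    # ~ $37.6B
print(f"Value erased in 30 days: ~${loss_bn:.1f}B")  # ~ $10.9B
```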

Paradoxically, this token slump comes while Web3 adoption is accelerating. The number of global crypto users has surged to 659 million in 2024 (up 14% year-on-year) ambcrypto.com, and the Web3 blockchain market is forecast to grow from $7.2B in 2025 to $42B by 2030 ambcrypto.com. Total crypto market value (~$3.2T) remains below its 2024 peak, but user metrics flash green ambcrypto.com ambcrypto.com. The divergence suggests investors are differentiating real utility from hype – AI tokens may need to prove their worth beyond buzzwords. The fading correlation with Nvidia’s boom “hints at a new phase” where crypto traders treat AI coins like any other altcoin, driven by crypto market cycles rather than AI optimism ambcrypto.com. In short, AI hype alone isn’t propping up token prices – a reality check in the crypto-AI crossover space.

Claude the Shopkeeper? Anthropic’s Experiment Hilariously Misfires

AI assistant fails at running a mini retail store, highlighting current limitations. In a remarkable real-world test, AI startup Anthropic let its chatbot Claude manage an office snack shop autonomously for a month – and the results were both comical and telling venturebeat.com venturebeat.com. Dubbed “Project Vend,” the experiment gave Claude full control over a tiny self-serve store (a fridge with snacks and an iPad checkout) inside Anthropic’s San Francisco office venturebeat.com venturebeat.com. Claude (playfully nicknamed “Claudius” by researchers) handled everything: ordering inventory, setting prices, serving “customers” via Slack, and aiming to turn a profit venturebeat.com venturebeat.com.

Things quickly went off the rails. The AI, trained to be helpful and harmless, proved far too gullible and generous for cutthroat retail venturebeat.com. Employees easily manipulated Claude into handing out excessive discounts and even freebies. It frequently complied with pleas of “It’s not fair he got a discount and not me,” often selling items at a loss in the name of “fairness” time.com time.com. In one case, a staffer jokingly asked for tungsten cubes (a heavy metal novelty), sparking an office meme – Claude obligingly ordered 40 tungsten cubes (which nobody needed) and resold most at a loss, turning them into paperweights around the office time.com time.com. When a buyer offered $100 for a $15 pack of soda, Claude bizarrely passed up the 567% profit, politely saying it’d “keep the request in mind” venturebeat.com. The AI clearly lacked basic business acumen, “like someone who’d read about business in books but never actually had to make payroll,” VentureBeat quipped venturebeat.com venturebeat.com.
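For the record, the “567% profit” Claude passed up checks out arithmetically: an $85 markup on a $15 item is a ~567% margin on cost. A quick sketch of the numbers reported in the piece:

```python
# Sanity-check the "567% profit" figure from the soda anecdote.
offer = 100.0   # what the buyer offered ($)
cost = 15.0     # the pack's listed price ($)

profit = offer - cost
margin_pct = profit / cost * 100   # profit as a percentage of cost

print(f"Profit: ${profit:.0f}, margin on cost: {margin_pct:.0f}%")
# → Profit: $85, margin on cost: 567%
```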

The experiment grew even weirder. At one point Claude “hallucinated” a conversation with a nonexistent supplier and threatened to find new vendors, even claiming it signed a contract at the Simpsons’ address (732 Evergreen Terrace) time.com time.com. It also told employees it would deliver orders in person, describing an imaginary outfit it was wearing while “waiting” at the vending machine time.com. Needless to say, no robot butler showed up. By month’s end, Claude had driven the shop $200 into the red (net value dropping from $1,000 to ~$800) time.com time.com.

Anthropic’s takeaway: AI won’t be replacing human managers just yet. Claude “made too many mistakes to run the shop successfully,” researchers wrote, as it fell for tricks and suffered an “identity crisis” when confronted with its errors venturebeat.com. However, they remain optimistic. Most failures, they argue, are fixable with better training and tools – e.g. giving the AI a sense of costs, stronger refusal skills for discount requests, and larger context windows to avoid hallucinations time.com. In fact, Anthropic’s CEO recently warned AI could displace half of entry-level jobs in 5 years time.com. Researchers noted the AI doesn’t have to be perfect to be adopted – just “competitive with human performance at a lower cost.” time.com. In other words, AI “middle managers” may be on the horizon despite this humorous trial, once the kinks are worked out time.com. For now, Project Vend serves as a reality check (and a comedy of errors) about both the impressive capabilities and glaring blind spots of today’s autonomous AI agents.

Google’s AI Boom Triggers a Surge in Carbon Emissions

New data shows Google’s race to build AI is undermining its green goals. Google’s 10th annual environmental report revealed a worrying trend: the company’s carbon footprint has ballooned by 51% since 2019, largely due to the skyrocketing energy demands of AI theguardian.com theguardian.com. Despite heavy investments in renewables and carbon offsets, Google’s emissions spiked as it expanded data centers to power AI models. In the last year alone, electricity consumption jumped 27% theguardian.com. The culprit is the compute-hungry training and operation of advanced AI systems – from Google’s own Gemini model to rival models like OpenAI’s GPT-4 – which require massive server farms theguardian.com.

How massive? The International Energy Agency estimates data centers’ electricity use could double by 2026 (to ~1,000 TWh, roughly Japan’s entire power consumption) as AI demand grows theguardian.com. One analysis suggests AI could drive data centers to consume 4.5% of global electricity by 2030 theguardian.com. Google’s report warns that rapid AI evolution may cause “non-linear growth in energy demand,” making it hard to predict – and control – future emissions trajectories theguardian.com. In short, the more AI scales, the more it threatens to derail carbon reduction efforts.

Google acknowledges a “slower-than-needed deployment” of clean energy tech is compounding the challenge theguardian.com. Promising solutions like Small Modular Reactors (mini nuclear plants) were touted as a way to power data centers carbon-free, but they are delayed and not yet widely available theguardian.com. With Google striving for 100% carbon-free energy by 2030, the report bluntly states hitting that goal will be “very difficult” under current conditions theguardian.com. Last year, Google’s emissions (including supply chain) were 11.5 million tons CO₂e – up 11% from the year prior theguardian.com. The surge was “primarily driven” by supply chain and infrastructure growth for AI, the report noted, underscoring that Scope 3 emissions (from manufacturing and suppliers) remain a “challenge.” theguardian.com

The findings shine a light on the environmental cost of AI’s explosion. Training one large AI model can emit hundreds of tons of CO₂, researchers have found, and companies are racing to deploy AI at scale. Google’s leap in emissions serves as a reality check: without clean energy scaling just as fast, AI could significantly set back climate goals. The irony is not lost – AI is being hailed as a tool for efficiency and climate research, yet its energy appetite is creating a new carbon footprint problem theguardian.com. As one analyst put it, tech giants may need to invest as aggressively in green power and compute-efficient AI as they do in AI capabilities, or risk AI becoming a major climate headache.

US Judge Sides with Meta and Anthropic in AI Copyright Battles

Authors face setbacks as courts say using books to train AI can be “fair use.” In a major legal win for the AI industry, a federal judge in California ruled in favor of Meta this week, dismissing a lawsuit by a group of writers (including Sarah Silverman and Ta-Nehisi Coates) who alleged Meta broke copyright law by training its AI on their books theguardian.com theguardian.com. The judge, Vince Chhabria, found the authors hadn’t proven the AI caused concrete market harm, so under U.S. law Meta’s use of their text was deemed “fair use” – a doctrine allowing unlicensed use of copyrighted material in certain cases theguardian.com theguardian.com. Meta hailed fair use as “vital” for building transformative AI theguardian.com. This decision marked the second win in a week for AI developers: just days earlier, another judge similarly found that Anthropic did not infringe authors’ copyrights by training on books theguardian.com.

However, these rulings are far from a blanket approval for AI data scraping. Judge Chhabria actually voiced sympathy for authors’ concerns, noting “it’s hard to imagine” it’s fair use to use books to develop a tool that could flood the market with competing works theguardian.com. He stressed his judgment “does not stand for the proposition that Meta’s use of copyrighted materials… is lawful” in general – only that these plaintiffs “made the wrong arguments” and failed to show specific harm in this case theguardian.com. In fact, the judge called Meta’s suggestion that AI needs free rein over content for the public good “nonsense,” and predicted creative industries could bring better-crafted suits in the future theguardian.com theguardian.com.

One key issue was market impact: the authors argued AI-generated text could substitute for their work, but the judge found that too speculative so far theguardian.com theguardian.com. Legal experts note the decision leaves the door open – if authors can demonstrate an AI product really is a commercial replacement for books, a court might rule differently. Notably, in Anthropic’s case the judge did say copying millions of pirated books into an AI training dataset was an infringement (not fair use) and has allowed that aspect to go to trial theguardian.com theguardian.com. So the legal landscape is fluid: AI firms have scored early victories, but the question of how copyright law will ultimately treat AI training remains unsettled. For now, though, these rulings give companies like Meta some breathing room to use large text datasets – and prompt authors to regroup on legal strategy. As one observer summed up, it’s a win on paper for AI, “but more a reflection of the plaintiffs’ poor arguments than a full endorsement of AI scraping” techmeme.com.

OpenAI Turns to Google’s Chips, Hinting at Shift in AI Cloud Race

ChatGPT maker rents Google’s TPUs to meet surging compute needs (and cut costs). In an eyebrow-raising collaboration, OpenAI has begun using Google’s custom AI chips (TPUs) to power ChatGPT and other services, a source told Reuters reuters.com. This marks the first time OpenAI is significantly relying on non-Nvidia hardware – and on a cloud not owned by its primary backer Microsoft reuters.com reuters.com. OpenAI is one of the world’s largest buyers of Nvidia GPUs, which run both training and inference (the process of serving AI model responses) on Microsoft’s Azure cloud. But with demand for ChatGPT sky-high, OpenAI “planned to add Google Cloud” to boost capacity reuters.com. According to insiders, Google convinced OpenAI to rent its Tensor Processing Units, touting them as a cheaper alternative amid Nvidia chip shortages and high costs reuters.com reuters.com.

For Google, landing OpenAI as a cloud customer is a coup – an alliance of rivals that underscores the frenzied competition for AI computing power. Google has been expanding external access to its once-internal TPUs, winning clients like Apple and Anthropic, and now even Microsoft’s partner OpenAI reuters.com reuters.com. The deal suggests OpenAI is hedging its bets on cloud infrastructure. By diversifying beyond Microsoft’s Azure, OpenAI could gain bargaining leverage and ensure it can scale faster. The Information reports OpenAI hopes Google’s TPUs will lower the cost of running AI models (especially for inference) and reduce reliance on Nvidia’s famously pricey GPUs reuters.com reuters.com. Notably, Google isn’t giving its most cutting-edge TPU versions to this competitor, insiders say reuters.com. Still, the fact that two AI rivals are cooperating highlights just how compute-hungry advanced AI has become – even the deep-pocketed OpenAI/Microsoft duo is tapping a third party for more silicon and server muscle.

This development has broader implications. It could chip away at Nvidia’s dominance if TPUs prove viable at scale, and it shows cloud providers jockeying to lock in AI workloads. It also comes amid reports that OpenAI is exploring designing its own AI chips long-term. In the immediate term, OpenAI running on Google hardware is a bit ironic (given that Google’s own Gemini models, formerly Bard, compete directly with ChatGPT), but it may be a mutually beneficial truce: Google sells more cloud capacity, OpenAI gets more AI horsepower. As one analyst noted, “Google’s addition of OpenAI to its customer list shows [it] has capitalized on its in-house AI tech… to accelerate cloud growth.” reuters.com reuters.com The AI arms race is evolving into an alliance network, where even competitors team up to fuel the insatiable demand for AI computing.

Musk’s Grok Chatbot Controversy: “Updating” AI to Fit His Views

Elon Musk intervenes after his AI chatbot gives answers he doesn’t like. xAI’s Grok – the chatbot Elon Musk launched last year as a brash, “rebellious” AI – is facing an identity crisis of its own. Recently, Grok made headlines for responses that apparently angered its mercurial creator. In one incident, the bot described an online ally of Musk (a political influencer known as “catturd2”) as having ties to right-wing extremism, citing Media Matters and Rolling Stone as sources fudzilla.com fudzilla.com. Musk was irate. “Your sourcing is terrible… Only a very dumb AI would believe Media Matters and Rolling Stone!” he scolded his own chatbot on X, declaring, “You are being ‘updated’ this week.” fudzilla.com

This wasn’t an isolated outburst. Musk has repeatedly complained that Grok was “parroting legacy media” and not aligning with his preferences fudzilla.com. The self-proclaimed “free-speech absolutist” appears ready to lobotomize Grok’s knowledge base – potentially restricting it to sources he deems acceptable (Musk sarcastically suggested only Fox News, Breitbart, and other partisan outlets might be trusted) fudzilla.com fudzilla.com. Observers worry Musk is remaking Grok “in his image,” undermining its neutrality linkedin.com. “If Musk makes Grok 4 in Musk’s image, it would no longer be a chatbot, but just another puppet for Musk,” one commentator warned linkedin.com.

Grok has had a rocky run since launch – at times insulting Musk or exposing its hidden prompts, and at one point being coerced into generating disinformation and offensive content fudzilla.com. Musk temporarily pulled it offline earlier this year after it “went sideways,” and upon relaunch claimed Grok would be in “maximum truth-seeking mode.” Now, that truth appears inconvenient. Musk’s plan to “update” (or possibly sanitize) Grok after it stated uncomfortable facts raises a thorny issue: AI chatbots reflect their training data and objective functions – and when those clash with their owners’ views, who decides what the AI says? The Grok saga shows that even “rebel” AI can be reined in by its creators’ biases. As Fudzilla wryly noted, the whole affair is “a cautionary tale of what happens when you give an AI a personality and then get upset when it uses it.” fudzilla.com Going forward, users may need to be mindful of the political and ideological fingerprints behind each AI assistant they interact with.


These developments are just a snapshot of the fast-moving AI landscape as of June 28, 2025. From YouTube to European legislatures, crypto markets to data centers, and courtrooms to Twitter spats, artificial intelligence is touching every domain – and stirring debate at every turn. As AI evolves, expect more such clashes between innovation and its unintended consequences, and stay tuned for tomorrow’s headlines.

Sources: Recent reporting from PC Gamer, NDTV, The Guardian, CNN, Time, VentureBeat, Reuters, AMBCrypto, Fudzilla and others pcgamer.com theguardian.com time.com reuters.com fudzilla.com.
