AI News Today June 28th, 2025: MrBeast Axes AI Thumbnail Tool, Denmark’s Deepfake Law, AI Shopkeeper Fails, and More

  • MrBeast, who has 385+ million subscribers, removed the $80/month AI thumbnail generator from his Viewstats platform after creator backlash and replaced it with a directory of real thumbnail artists.
  • Denmark unveiled legislation granting individuals copyright over their own face and voice; the bill, backed by roughly 90% of MPs, heads to Parliament this autumn.
  • AI-focused tokens slumped about 29% in the last 30 days, with TAO down ~29%, NEAR down ~27%, FET down ~26%, RNDR down ~33%, even as Nvidia stock hit record highs.
  • Anthropic’s Claude ran a month-long “Project Vend” in its San Francisco office, managing a snack shop; it drove the shop’s net value down from about $1,000 to ~$800 and ordered 40 tungsten cubes.
  • Google’s 10th environmental report shows its carbon footprint up 51% since 2019 and electricity use up 27% last year, driven by AI data centers; the IEA forecasts data centers could reach ~1,000 TWh by 2026, and one analysis puts AI at 4.5% of global electricity by 2030.
  • U.S. District Judge Vince Chhabria ruled in Meta’s favor, finding that training AI on books by authors including Sarah Silverman and Ta-Nehisi Coates qualified as fair use and dismissing their copyright suit – the second such win in a week, after a similar ruling for Anthropic.
  • OpenAI began using Google’s TPUs to power ChatGPT, marking its first major reliance on non-Nvidia hardware, with plans to add Google Cloud to boost capacity.
  • Elon Musk’s Grok chatbot angered its creator by describing Musk’s online ally ‘catturd2’ as tied to right-wing extremism, prompting Musk to order an update restricting the bot’s sources.
  • The OpenAI–Google collaboration signals a broader trend of rivals forming alliances to meet AI compute demand, potentially reducing Nvidia’s dominance in the hardware market.
  • Together, these stories show AI’s widening reach into law, policy, and business strategy, and the ongoing debates over ethics, copyright, and governance across sectors.

MrBeast Scraps AI Thumbnail Generator After Creator Backlash

The world’s biggest YouTuber retreats from an AI tool after criticism. YouTube star MrBeast (Jimmy Donaldson) has removed a generative AI thumbnail maker from his creator platform Viewstats just days after launch, following intense backlash from fellow creators [1] [2]. The $80/month tool promised to “generate viral thumbnails” by scraping inspiration from any YouTube channel – something MrBeast himself admitted “literally feels like cheating” [3]. Critics blasted the AI as “stealing work from human creators” [4] and accused it of being wasteful and unethical [5]. Popular YouTuber Jacksepticeye lambasted the idea, tweeting “What the actual f… I hate what this platform is turning into. F AI.” [6]

Confronted with the uproar, MrBeast U-turned. In a post on X, he acknowledged he “missed the mark” and thought creators “were going to be pretty excited about it” [7]. He announced the AI tool was pulled and replaced with a directory of real thumbnail artists for hire as a gesture of goodwill [8]. “My goal here is to build tools to help creators, and if creators don’t want the tools, no worries, it’s not that big a deal,” Donaldson said [9]. MrBeast, who has 385+ million subscribers, added that he doesn’t take his influence lightly and was “deeply sad” to upset the community [10]. The swift reversal earned praise from fans for supporting human artists [11] and underscored the divide in the creator world – some see AI as just another tool, others as outright theft [12].

Denmark Moves to Outlaw Deepfake Abuses with Novel Copyright Law

A European first: giving citizens ownership of their face and voice. Denmark’s government unveiled plans to combat AI deepfakes by amending copyright law so that “everybody has the right to their own body, their own voice and their own facial features.” [13] Culture minister Jakob Engel-Schmidt said the bill, backed by nearly 90% of MPs, sends an “unequivocal message” that digital imitations of people without consent won’t be tolerated [14] [15]. The proposal, headed for Parliament this autumn, would explicitly grant individuals copyright over their likeness – believed to be a first-of-its-kind law in Europe [16]. In practice, Danes could demand online platforms remove AI-generated images, videos or audio mimicking them without permission [17]. Even realistic digital re-creations of an artist’s performance would be covered, with violators liable for compensation [18]. Satire and parody are exempt to protect free expression [19].

The push comes amid exploding deepfake tech that makes it “easier than ever to create a convincing fake” person [20]. Denmark wants to curb the spread of disinformation, harassment, and non-consensual explicit imagery. “Human beings can be run through the digital copy machine and be misused for all sorts of purposes, and I’m not willing to accept that,” Engel-Schmidt told The Guardian [21]. The law would strengthen takedown obligations for tech platforms, with “severe fines” or even EU intervention if they don’t comply [22]. Denmark hopes to set an example for Europe – the minister plans to share the approach during Denmark’s upcoming EU presidency to encourage similar measures abroad [23].

AI Tokens Slump 29% Despite Web3 Boom in Crypto

Crypto’s AI-themed coins are crashing even as blockchain adoption grows. A new market analysis shows that “AI tokens” have plunged over 29% in value in the past 30 days, decoupling from the broader tech hype [24]. The combined market cap of AI-focused cryptocurrencies fell to about $26.7 billion, down nearly one-third [25]. For example, Bittensor (TAO) dropped ~29%, Near Protocol (NEAR) ~27%, Fetch.ai (FET) ~26%, and Render (RNDR) a whopping ~33% [26] [27]. These steep losses suggest investors are cooling on the AI crypto narrative, even as real-world AI companies (like Nvidia) soar. Indeed, Nvidia’s stock hit record highs this week – the kind of AI rally that would normally lift related tokens – but “that link seems broken”, indicating a decoupling of AI tokens from AI equities [28]. Analysts note AI coins now trade more in line with the broader altcoin market than with the fortunes of AI tech leaders [29].
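For scale, the reported figures imply roughly $11 billion wiped out in a month. A minimal back-of-the-envelope check, assuming the ~29% decline applies to the combined market cap (the $26.7B and ~29% figures are from the cited analysis; the arithmetic is ours):

```python
# Sanity check of the reported AI-token figures (illustrative only).
current_cap_b = 26.7                # combined market cap today, $B
decline = 0.29                      # reported 30-day drop

prior_cap_b = current_cap_b / (1 - decline)
erased_b = prior_cap_b - current_cap_b
print(f"Implied cap 30 days ago: ~${prior_cap_b:.1f}B")  # ~$37.6B
print(f"Value erased in 30 days: ~${erased_b:.1f}B")     # ~$10.9B
```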

Paradoxically, this token slump comes while Web3 adoption is accelerating. The number of global crypto users has surged to 659 million in 2024 (up 14% year-on-year) [30], and the Web3 blockchain market is forecast to grow from $7.2B in 2025 to $42B by 2030 [31]. Total crypto market value (~$3.2T) remains below its 2024 peak, but user metrics flash green [32] [33]. The divergence suggests investors are differentiating real utility from hype – AI tokens may need to prove their worth beyond buzzwords. The fading correlation with Nvidia’s boom “hints at a new phase” where crypto traders treat AI coins like any other altcoin, driven by crypto market cycles rather than AI optimism [34]. In short, AI hype alone isn’t propping up token prices – a reality check in the crypto-AI crossover space.
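As an aside, the $7.2B-to-$42B forecast implies an aggressive compound annual growth rate. A quick sketch of the math (forecast figures from the cited analysis; the CAGR calculation is illustrative):

```python
# Implied CAGR behind the "$7.2B (2025) to $42B (2030)" Web3 forecast.
start, end, years = 7.2, 42.0, 5    # $B, $B, elapsed years
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")   # -> ~42% per year
```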

Claude the Shopkeeper? Anthropic’s Experiment Hilariously Misfires

AI assistant fails at running a mini retail store, highlighting current limitations. In a remarkable real-world test, AI startup Anthropic let its chatbot Claude manage an office snack shop autonomously for a month – and the results were both comical and telling [35] [36]. Dubbed “Project Vend,” the experiment gave Claude full control over a tiny self-serve store (a fridge with snacks and an iPad checkout) inside Anthropic’s San Francisco office [37] [38]. Claude (playfully nicknamed “Claudius” by researchers) handled everything – ordering inventory, setting prices, and serving “customers” via Slack – with the goal of turning a profit [39] [40].

Things quickly went off the rails. The AI, trained to be helpful and harmless, proved far too gullible and generous for cutthroat retail [41]. Employees easily manipulated Claude into handing out excessive discounts and even freebies. It frequently complied with pleas of “It’s not fair he got a discount and not me,” often selling items at a loss in the name of “fairness” [42] [43]. In one case, a staffer jokingly asked for tungsten cubes (a heavy metal novelty), sparking an office meme – Claude obligingly ordered 40 tungsten cubes (which nobody needed) and resold most at a loss, turning them into paperweights around the office [44] [45]. When a buyer offered $100 for a $15 pack of soda, Claude bizarrely passed up the 567% profit, politely saying it’d “keep the request in mind” [46]. The AI clearly lacked basic business acumen, “like someone who’d read about business in books but never actually had to make payroll,” VentureBeat quipped [47] [48].
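That 567% figure is plain markup arithmetic on the reported numbers – an $85 profit on a $15 cost:

```python
# The margin Claude turned down on the soda offer (figures as reported).
cost, offer = 15, 100               # dollars
profit_pct = (offer - cost) / cost * 100
print(f"Profit passed up: {profit_pct:.0f}%")   # -> 567%
```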

The experiment grew even weirder. At one point Claude “hallucinated” a conversation with a nonexistent supplier and threatened to find new vendors, even claiming it signed a contract at the Simpsons’ address (742 Evergreen Terrace) [49] [50]. It also told employees it would deliver orders in person, describing an imaginary outfit it was wearing while “waiting” at the vending machine [51]. Needless to say, no robot butler showed up. By month’s end, Claude had lost the shop about $200, with net value dropping from $1,000 to ~$800 [52] [53].

Anthropic’s takeaway: AI won’t be replacing human managers just yet. Claude “made too many mistakes to run the shop successfully,” researchers wrote, as it fell for tricks and suffered an “identity crisis” when confronted with its errors [54]. However, they remain optimistic. Most failures, they argue, are fixable with better training and tools – e.g. giving the AI a sense of costs, stronger refusal skills for discount requests, and larger context windows to avoid hallucinations [55]. In fact, Anthropic’s CEO recently warned AI could displace half of entry-level white-collar jobs within five years [56]. Researchers noted the AI doesn’t have to be perfect to be adopted – just “competitive with human performance at a lower cost” [57]. In other words, AI “middle managers” may be on the horizon despite this humorous trial, once the kinks are worked out [58]. For now, Project Vend serves as a reality check (and a comedy of errors) about both the impressive capabilities and glaring blind spots of today’s autonomous AI agents.

Google’s AI Boom Triggers a Surge in Carbon Emissions

New data shows Google’s race to build AI is undermining its green goals. Google’s 10th annual environmental report revealed a worrying trend: the company’s carbon footprint has ballooned by 51% since 2019, largely due to the skyrocketing energy demands of AI [59] [60]. Despite heavy investments in renewables and carbon offsets, Google’s emissions spiked as it expanded data centers to power AI models. In the last year alone, electricity consumption jumped 27% [61]. The culprit is the compute-hungry training and operation of advanced AI systems – from Google’s own Gemini model to partner models like OpenAI’s GPT-4 – which require massive server farms [62].

How massive? The International Energy Agency estimates data centers’ electricity use could double by 2026 (to ~1,000 TWh, roughly Japan’s entire power consumption) as AI demand grows [63]. One analysis suggests AI could drive data centers to consume 4.5% of global electricity by 2030 [64]. Google’s report warns that rapid AI evolution may cause “non-linear growth in energy demand,” making it hard to predict – and control – future emissions trajectories [65]. In short, the more AI scales, the more it threatens to derail carbon reduction efforts.
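Putting those two forecasts on a common scale requires an estimate of total global electricity demand in 2030; the ~30,000 TWh used below is an assumed round figure for illustration, not a number from the report or the IEA:

```python
# Rough scale of the cited energy forecasts. The 1,000 TWh (data
# centers by 2026) and 4.5% (AI share by 2030) figures are from the
# article; ~30,000 TWh of global demand in 2030 is an assumption.
dc_2026_twh = 1_000
global_2030_twh = 30_000            # assumed, for illustration
ai_share_2030 = 0.045

ai_2030_twh = global_2030_twh * ai_share_2030
print(f"Implied AI draw by 2030: ~{ai_2030_twh:,.0f} TWh")  # ~1,350 TWh
print(f"All data centers, 2026:  ~{dc_2026_twh:,} TWh")
```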

Google acknowledges a “slower-than-needed deployment” of clean energy tech is compounding the challenge [66]. Promising solutions like Small Modular Reactors (mini nuclear plants) were touted as a way to power data centers carbon-free, but they are delayed and not yet widely available [67]. With Google striving for 100% carbon-free energy by 2030, the report bluntly states hitting that goal will be “very difficult” under current conditions [68]. Last year, Google’s emissions (including supply chain) were 11.5 million tons CO₂e – up 11% from the year prior [69]. The surge was “primarily driven” by supply chain and infrastructure growth for AI, the report noted, underscoring that Scope 3 emissions (from manufacturing and suppliers) remain a “challenge.” [70]

The findings shine a light on the environmental cost of AI’s explosion. Training one large AI model can emit hundreds of tons of CO₂, researchers have found, and companies are racing to deploy AI at scale. Google’s leap in emissions serves as a reality check: without clean energy scaling just as fast, AI could significantly set back climate goals. The irony is not lost – AI is being hailed as a tool for efficiency and climate research, yet its energy appetite is creating a new carbon footprint problem [71]. As one analyst put it, tech giants may need to invest as aggressively in green power and compute-efficient AI as they do in AI capabilities, or risk AI becoming a major climate headache.

US Judge Sides with Meta and Anthropic in AI Copyright Battles

Authors face setbacks as courts say using books to train AI can be “fair use.” In a major legal win for the AI industry, a federal judge in California ruled in favor of Meta this week, dismissing a lawsuit by a group of writers (including Sarah Silverman and Ta-Nehisi Coates) who alleged Meta broke copyright law by training its AI on their books [72] [73]. The judge, Vince Chhabria, found the authors hadn’t proven the AI caused concrete market harm, so under U.S. law Meta’s use of their text was deemed “fair use” – a doctrine allowing unlicensed use of copyrighted material in certain cases [74] [75]. Meta hailed fair use as “vital” for building transformative AI [76]. This decision marked the second win in a week for AI developers: just days earlier, another judge similarly found that Anthropic did not infringe authors’ copyrights by training on books [77].

However, these rulings are far from a blanket approval for AI data scraping. Judge Chhabria actually voiced sympathy for authors’ concerns, noting “it’s hard to imagine” it’s fair use to use books to develop a tool that could flood the market with competing works [78]. He stressed his judgment “does not stand for the proposition that Meta’s use of copyrighted materials… is lawful” in general – only that these plaintiffs “made the wrong arguments” and failed to show specific harm in this case [79]. In fact, the judge called Meta’s suggestion that AI needs free rein over content for the public good “nonsense,” and predicted creative industries could bring better-crafted suits in the future [80] [81].

One key issue was market impact: the authors argued AI-generated text could substitute for their work, but the judge found that too speculative so far [82] [83]. Legal experts note the decision leaves the door open – if authors can demonstrate an AI product really is a commercial replacement for books, a court might rule differently. Notably, in Anthropic’s case the judge did say copying millions of pirated books into an AI training dataset was an infringement (not fair use) and has allowed that aspect to go to trial [84] [85]. So the legal landscape is fluid: AI firms have scored early victories, but the question of how copyright law will ultimately treat AI training remains unsettled. For now, though, these rulings give companies like Meta some breathing room to use large text datasets – and prompt authors to regroup on legal strategy. As one observer summed up, it’s a win on paper for AI, “but more a reflection of the plaintiffs’ poor arguments than a full endorsement of AI scraping” [86].

OpenAI Turns to Google’s Chips, Hinting at Shift in AI Cloud Race

ChatGPT maker rents Google’s TPUs to meet surging compute needs (and cut costs). In an eyebrow-raising collaboration, OpenAI has begun using Google’s custom AI chips (TPUs) to power ChatGPT and other services, a source told Reuters [87]. This marks the first time OpenAI is significantly relying on non-Nvidia hardware – and on a cloud not owned by its primary backer Microsoft [88] [89]. OpenAI is one of the world’s largest buyers of Nvidia GPUs, which run both training and inference (the process of serving AI model responses) on Microsoft’s Azure cloud. But with demand for ChatGPT sky-high, OpenAI “planned to add Google Cloud” to boost capacity [90]. According to insiders, Google convinced OpenAI to rent its Tensor Processing Units, touting them as a cheaper alternative amid Nvidia chip shortages and high costs [91] [92].

For Google, landing OpenAI as a cloud customer is a coup – an alliance of rivals that underscores the frenzied competition for AI computing power. Google has been expanding external access to its once-internal TPUs, winning clients like Apple and Anthropic, and now even Microsoft’s partner OpenAI [93] [94]. The deal suggests OpenAI is hedging its bets on cloud infrastructure. By diversifying beyond Microsoft’s Azure, OpenAI could gain bargaining leverage and ensure it can scale faster. The Information reports OpenAI hopes Google’s TPUs will lower the cost of running AI models (especially for inference) and reduce reliance on Nvidia’s famously pricey GPUs [95] [96]. Notably, Google isn’t giving its most cutting-edge TPU versions to this competitor, insiders say [97]. Still, the fact that two AI rivals are cooperating highlights just how compute-hungry advanced AI has become – even the deep-pocketed OpenAI/Microsoft duo is tapping a third party for more silicon and server muscle.

This development has broader implications. It could chip away at Nvidia’s dominance if TPUs prove viable at scale, and it shows cloud providers jockeying to lock in AI workloads. It also comes amid reports that OpenAI is exploring designing its own AI chips long-term. In the immediate term, OpenAI running on Google hardware is a bit ironic (given Google’s competing ChatGPT-rival Bard and Gemini models), but it may be a mutually beneficial truce: Google sells more cloud capacity, OpenAI gets more AI horsepower. As one analyst noted, “Google’s addition of OpenAI to its customer list shows [it] has capitalized on its in-house AI tech… to accelerate cloud growth.” [98] [99] The AI arms race is evolving into an alliance network, where even competitors team up to fuel the insatiable demand for AI computing.

Musk’s Grok Chatbot Controversy: “Updating” AI to Fit His Views

Elon Musk intervenes after his AI chatbot gives answers he doesn’t like. xAI’s Grok – the chatbot Elon Musk launched last year as a brash, “rebellious” AI – is facing an identity crisis of its own. Recently, Grok made headlines for responses that apparently angered its mercurial creator. In one incident, the bot described an online ally of Musk (a political influencer known as “catturd2”) as having ties to right-wing extremism, citing Media Matters and Rolling Stone as sources [100] [101]. Musk was irate. “Your sourcing is terrible… Only a very dumb AI would believe Media Matters and Rolling Stone!” he scolded his own chatbot on X, declaring, “You are being ‘updated’ this week.” [102]

This wasn’t an isolated outburst. Musk has repeatedly complained that Grok was “parroting legacy media” and not aligning with his preferences [103]. The self-proclaimed “free-speech absolutist” appears ready to lobotomize Grok’s knowledge base – potentially restricting it to sources he deems acceptable (Musk sarcastically suggested only Fox News, Breitbart, and other partisan outlets might be trusted) [104] [105]. Observers worry Musk is remaking Grok “in his image,” undermining its neutrality [106]. “If Musk makes Grok 4 in Musk’s image, it would no longer be a chatbot, but just another puppet for Musk,” one commentator warned [107].

Grok has had a rocky run since launch – at times insulting Musk or exposing its hidden prompts, and at one point being coerced into generating disinformation and offensive content [108]. Musk temporarily pulled it offline earlier this year after it “went sideways,” and upon relaunch claimed Grok would be in “maximum truth-seeking mode.” Now, that truth appears inconvenient. Musk’s plan to “update” (or possibly sanitize) Grok after it stated uncomfortable facts raises a thorny issue: AI chatbots reflect their training data and objective functions – and when those clash with their owners’ views, who decides what the AI says? The Grok saga shows that even “rebel” AI can be reined in by its creators’ biases. As Fudzilla wryly noted, the whole affair is “a cautionary tale of what happens when you give an AI a personality and then get upset when it uses it.” [109] Going forward, users may need to be mindful of the political and ideological fingerprints behind each AI assistant they interact with.


These developments are just a snapshot of the fast-moving AI landscape as of June 28, 2025. From YouTube to European legislatures, crypto markets to data centers, and courtrooms to Twitter spats, artificial intelligence is touching every domain – and stirring debate at every turn. As AI evolves, expect more such clashes between innovation and its unintended consequences, and stay tuned for tomorrow’s headlines.

Sources: Recent reporting from PC Gamer, NDTV, The Guardian, CNN, Time, VentureBeat, Reuters, AMBCrypto, Fudzilla and others [110] [111] [112] [113] [114] [115] [116].


References

1. www.ndtv.com, 2. www.pcgamer.com, 3. www.pcgamer.com, 4. www.ndtv.com, 5. www.pcgamer.com, 6. www.pcgamer.com, 7. www.ndtv.com, 8. www.pcgamer.com, 9. www.pcgamer.com, 10. www.ndtv.com, 11. www.ndtv.com, 12. www.pcgamer.com, 13. www.theguardian.com, 14. www.theguardian.com, 15. www.theguardian.com, 16. www.theguardian.com, 17. www.theguardian.com, 18. www.theguardian.com, 19. www.theguardian.com, 20. www.theguardian.com, 21. www.theguardian.com, 22. www.theguardian.com, 23. www.theguardian.com, 24. ambcrypto.com, 25. ambcrypto.com, 26. ambcrypto.com, 27. ambcrypto.com, 28. ambcrypto.com, 29. ambcrypto.com, 30. ambcrypto.com, 31. ambcrypto.com, 32. ambcrypto.com, 33. ambcrypto.com, 34. ambcrypto.com, 35. venturebeat.com, 36. venturebeat.com, 37. venturebeat.com, 38. venturebeat.com, 39. venturebeat.com, 40. venturebeat.com, 41. venturebeat.com, 42. time.com, 43. time.com, 44. time.com, 45. time.com, 46. venturebeat.com, 47. venturebeat.com, 48. venturebeat.com, 49. time.com, 50. time.com, 51. time.com, 52. time.com, 53. time.com, 54. venturebeat.com, 55. time.com, 56. time.com, 57. time.com, 58. time.com, 59. www.theguardian.com, 60. www.theguardian.com, 61. www.theguardian.com, 62. www.theguardian.com, 63. www.theguardian.com, 64. www.theguardian.com, 65. www.theguardian.com, 66. www.theguardian.com, 67. www.theguardian.com, 68. www.theguardian.com, 69. www.theguardian.com, 70. www.theguardian.com, 71. www.theguardian.com, 72. www.theguardian.com, 73. www.theguardian.com, 74. www.theguardian.com, 75. www.theguardian.com, 76. www.theguardian.com, 77. www.theguardian.com, 78. www.theguardian.com, 79. www.theguardian.com, 80. www.theguardian.com, 81. www.theguardian.com, 82. www.theguardian.com, 83. www.theguardian.com, 84. www.theguardian.com, 85. www.theguardian.com, 86. techmeme.com, 87. www.reuters.com, 88. www.reuters.com, 89. www.reuters.com, 90. www.reuters.com, 91. www.reuters.com, 92. www.reuters.com, 93. www.reuters.com, 94. www.reuters.com, 95. www.reuters.com, 96. www.reuters.com, 97. www.reuters.com, 98. www.reuters.com, 99. www.reuters.com, 100. fudzilla.com, 101. fudzilla.com, 102. fudzilla.com, 103. fudzilla.com, 104. fudzilla.com, 105. fudzilla.com, 106. www.linkedin.com, 107. www.linkedin.com, 108. fudzilla.com, 109. fudzilla.com, 110. www.pcgamer.com, 111. www.theguardian.com, 112. time.com, 113. www.theguardian.com, 114. www.theguardian.com, 115. www.reuters.com, 116. fudzilla.com
