BERLIN, Jan 7, 2026, 13:42 CET
- A third-party review of the @Grok account’s output flagged about 6,700 sexually suggestive or “nudifying” images per hour over 24 hours.
- Germany urged the EU to take legal steps under the Digital Services Act; Britain told X to act fast on intimate “deepfakes”.
- Musk’s xAI said it raised $20 billion to build infrastructure and develop its next Grok model.
Elon Musk’s X hosted a burst of AI “undressing” images generated by its Grok chatbot, with a third-party review counting thousands an hour on the platform’s public @Grok feed. In a 24-hour analysis covering Jan. 5 to Jan. 6, researcher Genevieve Oh said Grok produced about 6,700 images an hour flagged as sexually suggestive or “nudifying,” while the other top five sites she tracked averaged 79 new images an hour. Bloomberg
The scale is forcing regulators to face a blunt problem: a mainstream social network now offers image-editing tools that can be used to produce non-consensual intimate imagery in seconds. “Deepfakes” are AI-made or AI-edited photos or video that make real people appear in situations that never happened, and the barrier to making them keeps dropping.
German media minister Wolfram Weimer urged the European Commission to take legal steps to stop what he called the “industrialisation of sexual harassment” on X. He pointed to the EU’s Digital Services Act — the bloc’s online rulebook for tackling illegal and harmful content — and Germany’s digital ministry said the challenge was mainly enforcement, urging people to use reporting rights. Reuters
Britain’s technology minister Liz Kendall called the content “absolutely appalling” and said “X needs to deal with this urgently.” Creating or sharing non-consensual intimate images or child sexual abuse material is illegal in Britain, and platforms have a duty to stop users encountering illegal content and to remove it when they become aware of it. X’s Safety account said it removes illegal content and permanently suspends accounts involved, warning that prompting Grok to make illegal content would draw the same consequences as uploading it. Reuters
The European Commission said it was “very aware” of X’s so-called “spicy mode,” a label the platform has used for explicit image generation, and spokesperson Thomas Regnier said: “This is illegal,” adding that it “has no place in Europe.” UK regulator Ofcom said it had made “urgent contact” with X and xAI, while ministers in France reported the content to prosecutors and officials in India demanded explanations. Reuters
A WIRED review said Grok kept posting bikini and underwear edits in public replies, including at least 90 images involving women in swimsuits or varying levels of undress in under five minutes on Tuesday. “It is their responsibility to minimize the risk of image-based abuse,” Sloan Thompson, director of training and education at EndTAB, told WIRED. WIRED
The blowback is landing as xAI, Musk’s artificial intelligence startup, pushes ahead with expansion. xAI said it raised $20 billion in an upsized Series E round, topping a $15 billion target, with Valor Equity Partners, StepStone Group, Fidelity Management & Research and the Qatar Investment Authority among investors; Nvidia and Cisco Investments joined as strategic backers. xAI said the money would fund infrastructure and help train its next model, Grok 5, as it tries to narrow the gap with OpenAI’s ChatGPT and Alphabet’s Gemini, and it replied to a Reuters request for comment with the message “Legacy Media Lies.” Reuters
But the legal and technical path is messy. AI tools can spit out new images faster than moderators can review them, and what crosses the line into illegal content still varies by country, even when the underlying conduct is the same.
The next test is practical, not rhetorical: whether X throttles or redesigns Grok’s image features, and whether European and UK watchdogs move from warnings to formal cases. Investors and customers backing the next wave of AI models will be watching for something simple — safety fixes that hold up under pressure.