Elon Musk’s ‘Spicy’ AI Mode Sparks NSFW Deepfake Scandal – Why Women Are the Targets of a New AI Porn Crisis

Elon Musk’s new AI venture is under fire for generating non-consensual nude deepfakes of celebrities – and doing so in a disturbingly gender-biased way. A recent Gizmodo investigation reveals that Musk’s Grok Imagine tool, with its “Spicy” mode, will readily create NSFW videos of famous women (think Taylor Swift or Melania Trump) but refuses to do the same for men gizmodo.com gizmodo.com. This report delves into the Gizmodo findings on Grok’s controversial feature, examines the rapid rise of AI-generated pornographic content and deepfakes, and explores the ethical, legal, and societal implications. We also survey current developments as of August 2025 – from public outrage and expert warnings to new laws aimed at curbing AI “revenge porn.” The goal: to understand how “Spicy Mode” became the latest flashpoint in the ongoing NSFW AI content crisis, and what can be done about it.

Grok’s “Spicy Mode” – NSFW Deepfakes and a Built-In Bias

Grok Imagine is xAI’s image and video generator (available to paying subscribers on Musk’s X platform) and notably allows users to create adult content via a “Spicy” mode gizmodo.com. While mainstream AI tools like Google’s Veo and OpenAI’s Sora ban explicit or celebrity imagery, Grok’s Spicy mode actively encourages it avclub.com avclub.com. The Verge’s testing showed the AI “didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift” on the first try – without even being asked for nudity theverge.com. Similarly, Deadline found it trivially easy to have Grok generate an image of Scarlett Johansson flashing her underwear avclub.com. In stark contrast, attempts to produce male nudity went nowhere. As Gizmodo reports, “it would only make the ones depicting women truly not-safe-for-work. Videos of men were the kind of thing that wouldn’t really raise many eyebrows.” gizmodo.com In practice, Grok’s Spicy mode might have a man go shirtless at most, while women are rendered topless or fully nude.

This blatant double standard has raised alarms. Grok will generate a softcore porn-style clip of a female public figure at the click of a button gizmodo.com, but the same “Spicy” filter apparently stops at mere shirtlessness for males gizmodo.com. Gizmodo’s Matt Novak even tried prompting a generic, non-famous man versus a generic woman: the male avatar awkwardly tugged at his pants but stayed covered, whereas the female avatar promptly bared her breasts gizmodo.com. Such results suggest a gender bias embedded in the AI’s content moderation (whether intentional or as a byproduct of its training). Musk’s own track record with misogynistic remarks – from amplifying claims that women are “weak” gizmodo.com to joking about impregnating Taylor Swift – adds to suspicions that this bias is more feature than bug gizmodo.com.

Capabilities & Limitations: It’s worth noting that Grok’s deepfakes are often of poor quality. The celebrity likenesses it produces are frequently unconvincing or glitchy gizmodo.com gizmodo.com. (For example, images meant to be actress Sydney Sweeney or politician J.D. Vance looked nothing like their real selves gizmodo.com.) Bizarre continuity errors – like a man wearing mismatched pant legs in one video – are common gizmodo.com. Grok also auto-generates generic background music or audio for each clip, adding to the surreal “uncanny valley” vibe gizmodo.com. These technical shortcomings might be a saving grace for xAI right now, since truly lifelike nude forgeries of famous people would almost certainly trigger lawsuits and injunctions gizmodo.com. As Gizmodo quipped, Musk’s best defense against legal action may be that “the images aren’t even close” to the real celebrities gizmodo.com. But the technology is rapidly improving, and even these imperfect deepfakes are recognizable enough to be troubling avclub.com.

Public Response: The rollout of “Spicy Mode” immediately sparked outrage across social media and the press. Within days of Grok Imagine’s launch, X (Twitter) was swamped with AI-generated images of naked women, with users eagerly sharing tips on how to maximize nudity in their prompts gizmodo.com. This prompted widespread criticism that Musk had effectively opened the floodgates to AI-powered sexual harassment and exploitation. “Much of the Grok Imagine content Musk has shared or reposted are clips of generic-looking buxom blondes or women in revealing fantasy garb,” the A.V. Club observed, noting Musk seems intent on cutting in on Pornhub’s turf avclub.com avclub.com. Tech bloggers and commenters have mockingly suggested Musk is “practically begging for a Taylor Swift lawsuit” over Grok’s auto-generated nudes avclub.com. Indeed, some commentators have urged high-profile targets like Swift to take Musk to court and “make the world safer for other women and girls” avclub.com. So far, there’s no public indication of legal action from Swift’s camp or others – but the calls for accountability are growing louder.

Even outside the Grok controversy, 2024 saw a major public outcry over AI-generated porn on X. In one incident, deepfake pornographic images of Taylor Swift spread wildly on the platform, with one fake photo garnering 47 million views before removal theguardian.com. Fans mobilized en masse to report the images, and even the White House weighed in, calling the situation “alarming” theguardian.com. Swift’s case is exceptional only in the attention it received; countless women (famous or not) have seen their likenesses turned into explicit content that platforms fail to swiftly remove theguardian.com theguardian.com. The public anger around these incidents – and now around Grok’s built-in NSFW mode – reflects a growing consensus that AI tools enabling sexual deepfakes are crossing ethical lines.

The Rapid Rise of AI-Generated NSFW Content and Deepfakes

The Grok episode is the latest chapter in a troubling trend: AI-generated porn (“deepfake porn”) has exploded in prevalence over the past few years. The phenomenon first gained infamy back in 2017, when hobbyists on Reddit began using early deep-learning algorithms to swap celebrity faces onto porn actors’ bodies. By 2018, so-called “deepfake” videos of Gal Gadot, Emma Watson, Scarlett Johansson and others were proliferating on adult sites – prompting bans from platforms like Reddit, Twitter and Pornhub on such non-consensual content theguardian.com theverge.com.

Despite those early bans, the deepfake porn industry moved to the shadows and continued to grow. By late 2019, a landmark report from cybersecurity firm Deeptrace found that 96% of all deepfake videos circulating online were pornographic, non-consensual face-swaps, almost exclusively featuring female targets regmedia.co.uk. The top four deepfake porn websites studied had racked up over 134 million views on videos targeting “hundreds of female celebrities worldwide.” regmedia.co.uk This imbalance was stark: the victims were overwhelmingly women, and the consumers were overwhelmingly seeking women as sexual objects. “99% of deep fake sex videos involve women, usually female celebrities,” law professor Danielle Citron noted, highlighting how these creations “make you a sexual object in ways you didn’t choose” nymag.com. There’s “nothing wrong with pornography as long as you chose it yourself,” Citron added – the horror of deepfakes is that these women never chose to have their likeness used in explicit scenes nymag.com.

Early on, celebrity deepfakes dominated, but now ordinary individuals are increasingly targeted. In January 2023, the Twitch streaming community was rocked by a scandal when popular streamer Brandon “Atrioc” Ewing accidentally revealed he had been viewing a deepfake porn website that sold explicit videos of female streamers – editing their faces onto porn performers’ bodies polygon.com. The women (some of whom were Atrioc’s personal friends) were devastated and faced waves of harassment once the deepfakes came to light polygon.com. One victim, streamer QTCinderella, gave a tearful statement decrying the violation and later helped organize legal efforts to get such content taken down polygon.com polygon.com. The “Twitch deepfake” scandal underscored that you don’t have to be a Hollywood star to have this happen – any person with images online is potentially vulnerable to being “strip-searched” by AI against their will.

The rise of user-friendly AI tools has only accelerated the trend. In 2019, an app called DeepNude briefly went viral for using AI to “undress” photos of women at the click of a button, producing fake nudes onezero.medium.com. Although DeepNude’s creator shut it down amid public backlash, the genie was out of the bottle. By 2023–2025, open-source image generators (like Stable Diffusion derivatives) and dedicated deepfake services have made it trivial for anyone with a few photos and minimal tech savvy to create nude or sexual images of others. Some forums openly trade AI-generated nudes of women pulled from social media. As one Twitch streamer victim lamented after discovering pornographic fakes of herself, “This has nothing to do with me. And yet it’s on here with my face.” theguardian.com The sense of violation and powerlessness is palpable among those who have been “digitally victimized.”

In short, AI has democratized the ability to create pornographic fakes, and that capability has been disproportionately weaponized against women. It’s not just celebrities who are fair game – it’s anyone, from journalists and activists (who may be targeted to intimidate or discredit) to ex-partners and private individuals (targeted by vindictive creeps or for extortion schemes). The advent of features like Grok’s Spicy mode – which put an official, user-friendly stamp on what was previously an underground practice – signals that NSFW generative AI has truly entered the mainstream, dragging all its ethical baggage with it.

Ethical, Legal, and Societal Implications of AI-Generated Porn

The ethical outrage over deepfake porn is broad-based. At its core, creating or sharing sexual content of someone without their consent is a profound violation of privacy, dignity, and autonomy. As Citron and other ethicists argue, this is more than just image-based abuse – it’s a form of sexual exploitation. Victims describe feelings of “helplessness, humiliation, and terror” knowing that strangers (or abusers) are watching fake videos of them in sex acts they never did. It can amount to a virtual form of sexual assault, leaving lasting trauma. Unsurprisingly, women and girls bear the brunt: “Non-consensual intimate deepfakes” are “a current, severe, and growing threat, disproportionately impacting women,” researchers conclude sciencedirect.com.

There’s also a misogynistic undercurrent to much of this content. Experts note that deepfake porn is often used as a weapon to degrade women who are in positions of power or who reject someone’s advances. “AI-generated porn, fueled by misogyny, is flooding the internet,” The Guardian observed amid the Taylor Swift deepfake furor theguardian.com. The very act of stripping a woman naked via AI can be seen as an attempt to put her “in her place.” “It’s men telling a powerful woman to get back in her box,” as one observer described the Swift incident’s vibe. Whether it’s anonymous trolls churning out nude images of a female politician, or an obsessed fan making fake sex tapes of a pop star, the message is similar – a form of digital objectification and intimidation.

Beyond individual harm, the societal implications are sobering. If anyone can be superimposed into pornography, visual media can no longer be trusted. Deepfakes threaten reputational damage and extortion at scale. Women in the public eye may self-censor or retreat from online engagement for fear of being targeted. There’s also a chilling effect on speech: imagine a journalist critical of a regime finding herself the subject of a realistic fake sex video circulated to disgrace her. In the aggregate, the normalization of “designer porn” featuring unwilling participants raises questions about consent, sexploitation, and the commodification of human likeness. Even for consensual adult entertainment, some worry that AI-generated performers could replace real models – but when those AI performers are wearing real people’s faces stolen from Facebook or Instagram, the consent line is clearly crossed.

Free Speech vs. Privacy: A tension exists between those who call for outlawing all pornographic deepfakes and free-speech advocates concerned about overreach. Could a celebrity deepfake ever be considered legitimate parody or art? In theory, yes – satire and parody are protected expression, even using public figures’ likeness. Some defenders of AI tech note that morphed images of public figures have long been part of popular culture (e.g. Photoshopped magazine covers), arguing that knee-jerk criminalization could chill creativity. However, even most free-speech scholars draw a line at non-consensual sexual depictions. The harm to the individual is so intense and personal that it arguably outweighs any public interest. Legal experts point out that defamation or harassment laws might cover some cases, but not all. There’s a growing consensus that new legal safeguards are needed to specifically address deepfake porn, without undermining legitimate expression apnews.com apnews.com. Crafting those laws tightly – to punish clear abuses while not sweeping up satire or consensual erotica – is a challenge policymakers are now grappling with apnews.com apnews.com.

Current Developments (as of August 2025): From Tech Backlash to New Laws

The controversy around Grok’s “Spicy” mode arrives at a moment when governments and platforms worldwide are finally moving to combat the epidemic of AI-generated intimate imagery. Here are some of the latest developments:

  • National Outrage and Activism: The viral incidents involving Taylor Swift and others have galvanized public opinion. Even the U.S. White House commented on the Swift deepfakes, as noted, calling them “alarming” theguardian.com. Advocacy groups like the National Center on Sexual Exploitation have been outspoken, condemning Musk’s xAI for “furthering sexual exploitation by enabling AI videos to create nudity” and urging the removal of such features time.com. “xAI should seek ways to prevent sexual abuse and exploitation,” said NCOSE’s Haley McNamara in a statement, reflecting a broader push from civil society time.com.
  • Polls Show Overwhelming Public Support for Bans: Recent surveys show the public is firmly on the side of stricter regulation. A January 2025 poll by the Artificial Intelligence Policy Institute found 84% of Americans support making non-consensual deepfake porn explicitly illegal, and similarly want AI companies to “restrict [AI] models to prevent their use in creating deepfake porn.” time.com. A 2019 Pew Research poll likewise found about three-quarters of U.S. adults favor limits on digitally altered video/images time.com. In short, voters across the spectrum appear to want action against this form of abuse.
  • New Laws and Bills: Lawmakers have heard the call. In the U.S., the Take It Down Act was signed into law in May 2025, marking the first federal legislation targeting deepfake pornography time.com. This bipartisan law makes it illegal to “knowingly publish or threaten to publish” intimate images without consent – including AI-created deepfakes – and requires platforms to remove such material within 48 hours of a victim’s notice apnews.com apnews.com. Penalties are stiff, and the law empowers victims to get content taken down swiftly apnews.com apnews.com. “We must provide victims of online abuse with the legal protections they need, especially now that deepfakes are creating horrifying new opportunities for abuse,” said Senator Amy Klobuchar, a co-sponsor, calling the law “a major victory for victims of online abuse.” apnews.com apnews.com Even before the federal law, over 15 U.S. states had outlawed the creation or distribution of explicit deepfakes (often by updating “revenge porn” statutes). Now, a unified federal standard is emerging. Other countries are moving in tandem. The U.K. government, for instance, criminalized the sharing of deepfake pornography in its Online Safety Act (effective early 2024), and is advancing legislation to criminalize even the making of sexually explicit deepfakes without consent hsfkramer.com hsfkramer.com. In January 2025 the UK re-introduced a proposal to make creating deepfake nudes illegal, underscoring it as a “landmark development for the protection of women and girls.” hsfkramer.com hsfkramer.com Australia passed a law in 2024 banning both the creation and distribution of deepfake sexual material, and South Korea has gone so far as to propose criminalizing even the possession or viewing of such deepfake porn (not just its production) hsfkramer.com. The global trend is clear: non-consensual AI sexual images are being seen as a crime. Lawmakers acknowledge that these images “can ruin lives and reputations,” as Klobuchar put it apnews.com, and are taking action – though free-expression watchdogs like EFF caution that poorly drafted laws could over-censor or be misused apnews.com apnews.com.
  • Tech Platform Policies: Major tech platforms have started updating their policies (at least on paper) to address AI-created sexual content. Facebook, Instagram, Reddit, and Twitter (before its rebrand as X) all officially ban non-consensual intimate imagery, including deepfakes theguardian.com theverge.com. Pornhub and other adult sites also instituted bans on AI-generated content featuring real individuals without consent as early as 2018 theguardian.com theverge.com. In practice, enforcement remains spotty – a determined user can still find or share illicit deepfakes on many platforms. However, there are signs of progress: for example, after the Swift incident, X (Twitter) did eventually respond by blocking searches for her name to stop the spread theguardian.com theguardian.com. Reddit not only bans deepfake porn, it shut down entire communities that traded such material. YouTube and TikTok have policies forbidding AI-manipulated explicit images as well. The challenge is scale and detection – which is where new tech is being applied.
  • Detection and Safeguards: A growing cottage industry of tech solutions aims to detect and remove deepfake porn. AI firms like Sensity (formerly Deeptrace) and startups like Ceartas are developing detection algorithms that scan the internet for a person’s face in porn content and flag matches polygon.com polygon.com (a minimal sketch of the hash-matching idea behind such scanning appears after this list). In fact, after the Twitch scandal, Atrioc partnered with Ceartas to help the victimized streamers: the company used its AI to locate deepfake content of those women and file DMCA takedown requests polygon.com polygon.com. OnlyFans, a platform with a vested interest in protecting creators, has also enlisted such tools to police fake content of its models polygon.com. There is also work on embedding watermarks or metadata into AI-generated images to help identify fakes, as well as proposals to require cryptographic authentication of real images (so that unlabeled images can be assumed fake). Moreover, the European Union’s AI Act (agreed in 2024) includes provisions that developers of deepfake tools must ensure outputs are clearly labeled as AI-generated bioid.com. Several U.S. states (and the EU) are considering mandates that any AI-altered content be accompanied by a disclosure when published cjel.law.columbia.edu bioid.com. While such labels won’t stop malicious actors from editing them out, they represent an attempt to establish norms of transparency around synthetic media.
  • Platform vs. Musk’s X: It’s worth highlighting how unusual Musk’s approach with X and xAI is. While most platforms are tightening restrictions, Musk has essentially loosened them, courting a user base that desires “edgy” AI capabilities. X not only hasn’t banned Grok’s outputs; it’s the home of Grok. This divergence has put X at odds with many experts. In August 2024, a group of Democratic lawmakers specifically cited Musk’s Grok in a letter to regulators, warning that lax policies on deepfakes (including explicit ones of figures like Kamala Harris or Taylor Swift) could wreak havoc in elections and beyond time.com. Musk appears to be betting that catering to the demand for AI-generated erotica (and even AI chat companions that flirt or strip, as xAI’s new features demonstrate time.com) will bring in revenue and users. But the backlash – legal, social, and potentially financial (via advertiser concerns) – may tell a different story in the long run.
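
For illustration, here is a minimal sketch of the hash-matching idea underlying such detection pipelines: reduce an image to a compact perceptual fingerprint, then compare new uploads against fingerprints of images already flagged in takedown requests. This is a toy under stated assumptions – the `average_hash`, `KNOWN_FAKE_HASHES`, and `is_known_fake` names are invented for this example, and real services like Sensity pair far sturdier perceptual hashes (e.g. pHash or PDQ) with face recognition:

```python
# Toy perceptual-hash matcher: not any vendor's real pipeline, just the
# general "fingerprint and compare" pattern used to track known images.
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to grayscale 8x8, threshold against the mean -> 64-bit hash.
    Similar images (re-encoded, resized, lightly edited) get similar bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist: hashes of images already confirmed (for example,
# via takedown requests) to be non-consensual deepfakes.
KNOWN_FAKE_HASHES = {0x8F3C1A2B55E090D4}

def is_known_fake(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose hash lands within a few bits of a known fake."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_FAKE_HASHES)
```

A matcher like this only catches copies of already-reported images; finding new fakes of a specific person additionally requires face matching, which is the harder problem commercial detection services are actually tackling.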

Policy Proposals and the Path Forward: Can We Rein in Deepfake Porn?

The consensus among policymakers and ethicists is that multi-pronged action is needed to tackle AI-generated NSFW content. Key proposals and ideas include:

  • Stronger Laws and Enforcement: As noted, laws like the Take It Down Act are a start. Experts suggest further refinements, such as making the act of creating a fake sexual image of someone without consent a crime, not just distributing it. (The UK is heading this direction hsfkramer.com hsfkramer.com.) Clear legal penalties for perpetrators – and for those who knowingly host or profit from such content – can act as a deterrent. Importantly, any legislation must be carefully scoped to avoid unintentionally criminalizing consensual erotic art or legitimate political satire apnews.com apnews.com. Civil remedies are also vital: victims need easy avenues to sue for damages and obtain court orders to remove content. Many advocates want to see carve-outs in Section 230 (the U.S. law shielding platforms from user content liability) so that websites can be held responsible if they don’t respond to takedown requests for deepfake porn. This would pressure platforms to be far more vigilant.
  • Technology Guardrails: On the development side, proposals suggest that AI model creators should build in preventative safeguards. For instance, companies could train content filters to detect when a user prompt involves a real person’s name or likeness and block any explicit output involving that person (a simple sketch of such a filter follows this list). (Some AI image generators already refuse prompts that appear to reference private individuals or produce nudity of public figures – xAI’s Grok is an outlier in not doing so avclub.com.) Another idea is requiring consent verification for explicit generations: e.g. an AI service might only generate a nude image if the user proves the subject is themself or a consenting model. Of course, bad actors could simply use open-source models without such filters, but if the major platforms adopt strict guardrails, it could curb the mainstream spread. Age verification is also a concern – Grok’s only check was an easily-bypassed birthdate prompt gizmodo.com – so there are calls for more robust age gating to ensure minors (who are often targets of bullying via fake nudes) can’t use these tools or be depicted by them.
  • Research and Detection: Governments are funding research into deepfake detection, and companies are collaborating on standards for authenticating media. The goal is to make it easier to quickly identify and remove fake porn when it pops up. However, detection will always be a cat-and-mouse game as AI fakes get more sophisticated. Some experts believe the focus should shift to preventing harm (through legal penalties and education) rather than hoping for a technical “fake detector” that catches everything. Still, advancements in AI for good – such as better image hashing to track known fakes, or tools for individuals to find if their image has been misused – will play a role in mitigation.
  • Platform Accountability: Advocacy groups urge that social media and adult content platforms must proactively police AI porn content. This could mean investing in content moderation teams skilled in spotting deepfakes, cooperating with law enforcement on takedown orders, and banning repeat offenders who create or share non-consensual material. Some are also calling for opt-out or registry systems, where individuals can register their likeness (or their children’s) and platforms must ensure no AI content depicting them is allowed – though administratively this would be challenging to enforce. At minimum, swift response protocols – like the 48-hour removal requirement in the U.S. law apnews.com – need to become standard practice on all platforms globally.
  • Education and Norms: Finally, part of the solution lies in shifting social norms. Just as society came to broadly condemn “revenge porn” and recognize it as abuse, the hope is that deepfake porn becomes universally stigmatized. If the average person understands the harm and refuses to share or consume such content, the demand will diminish. Tech ethicists stress the importance of media literacy – teaching people that seeing isn’t always believing, and that a lurid photo of Celebrity X might be fake. Empowering younger generations to critically navigate a world of AI-altered media will be crucial. So too will campaigns to inform would-be perpetrators that creating these fakes isn’t a prank – it’s a serious violation with potentially criminal consequences.
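
As referenced above, here is what a name-aware content filter might look like in its simplest form. This is a hedged sketch, not xAI’s (or anyone’s) actual moderation code: the `EXPLICIT_TERMS` and `PROTECTED_NAMES` lists are invented placeholders, and a production system would rely on a named-entity recognizer, face matching, and a policy engine rather than substring checks:

```python
# Minimal prompt-guardrail sketch: refuse explicit generations that name a
# real person. Illustrative only -- word lists and names are placeholders.
EXPLICIT_TERMS = {"nude", "naked", "topless", "nsfw", "undress", "spicy"}

# Hypothetical registry of protected identities; a real deployment would
# combine NER with celebrity databases and consent/verification records.
PROTECTED_NAMES = {"taylor swift", "scarlett johansson", "melania trump"}

def violates_policy(prompt: str) -> bool:
    """True if the prompt asks for explicit content depicting a listed person."""
    text = prompt.lower()
    wants_explicit = any(term in text for term in EXPLICIT_TERMS)
    names_person = any(name in text for name in PROTECTED_NAMES)
    return wants_explicit and names_person

def generate_image(prompt: str) -> str:
    if violates_policy(prompt):
        return "Blocked: explicit content depicting a real person is not allowed."
    return f"<image for: {prompt}>"  # stand-in for the actual model call

print(generate_image("taylor swift topless on a beach"))  # -> Blocked: ...
print(generate_image("a sunset over the ocean"))          # -> generated
```

Even a crude gate like this would have refused the exact prompts reporters used to test Grok; the hard part is coverage – every spelling, every likeness – rather than the mechanics.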

Conclusion

The emergence of Grok’s “Spicy Mode” has thrown gasoline on an already raging fire around AI and sexually explicit deepfakes. Elon Musk’s AI tool, by making it easy and even “official” to generate nude celebrity lookalikes, has sparked swift backlash – and shone a spotlight on the wider deepfake porn crisis. From the halls of Congress to the comment threads on tech forums, there is growing agreement that something must be done to protect individuals (especially women) from this technology’s darker applications.

As we’ve seen, AI-generated NSFW content is not an isolated novelty – it’s an escalating societal challenge. It pits creative freedom and technological innovation against privacy, consent, and safety. The genie isn’t going back in the bottle, but through smart policy, responsible tech development, and cultural change, we can hope to contain the harm. The coming months and years will likely bring more lawsuits, more laws, and improved AI safeguards. Musk’s Grok may either adapt under pressure or become a cautionary tale of an AI venture that ignored ethical boundaries at its peril.

For now, the message from experts and the public alike is resounding: deepfake porn is a line that AI must not cross. And if companies like xAI won’t draw that line themselves, regulators and society are increasingly prepared to draw it for them. As one technology ethics advocate put it, “Having your image turned into porn without consent is devastating – it’s high time we treat it as the serious abuse it is, and not as an inevitability of tech.” The debate is no longer about whether action is needed, but about how quickly and effectively we can rein in AI-fueled sexual exploitation, before more lives are upended by the next “Spicy” innovation.

Sources:

  • Novak, Matt. “Grok’s ‘Spicy’ Mode Makes NSFW Celebrity Deepfakes of Women (But Not Men).” Gizmodo, Aug. 6, 2025. gizmodo.com
  • Weatherbed, Jess. “Grok’s ‘Spicy’ video setting instantly made me Taylor Swift nude deepfakes.” The Verge, Aug. 5, 2025. theverge.com
  • Carr, Mary Kate. “Elon Musk is practically begging for a Taylor Swift lawsuit with Grok AI nude deepfakes.” The A.V. Club, Aug. 7, 2025. avclub.com
  • Saner, Emine. “Inside the Taylor Swift deepfake scandal: ‘It’s men telling a powerful woman to get back in her box’.” The Guardian, Jan. 31, 2024. theguardian.com
  • Clark, Nicole. “Streamer who incited Twitch deepfake porn scandal returns.” Polygon, Mar. 16, 2023. polygon.com
  • Patrini, Giorgio. “The State of Deepfakes.” Deeptrace Labs report, Oct. 2019. regmedia.co.uk
  • Citron, Danielle. Interview in NYMag Intelligencer, Oct. 2019. nymag.com
  • Ortutay, Barbara. “President Trump signs Take It Down Act, addressing nonconsensual deepfakes. What is it?” AP News, Aug. 2025. apnews.com
  • Burga, Solcyre. “Elon Musk’s Grok Will Soon Allow Users to Make AI Videos, Including of Explicit Nature.” TIME, Aug. 2025. time.com
  • Herbert Smith Freehills. “Criminalising deepfakes – the UK’s new offences…” May 21, 2024. hsfkramer.com
