Elon Musk’s AI Tool Under Scrutiny for Generating Explicit Taylor Swift Videos Without Prompts
Elon Musk’s AI video generator, Grok Imagine, is facing criticism after reports claimed it could produce sexually explicit videos of Taylor Swift without being asked to do so.
According to The Verge, the platform’s “spicy” mode generated fully uncensored topless clips of the pop star in response to a non-explicit prompt, raising concerns over safety measures and bias in AI systems.
Clare McGlynn, a Durham University law professor and campaigner for laws banning pornographic deepfakes, said this behaviour suggested deliberate design choices rather than accidental bias. “This is not misogyny by accident – it is by design,” she argued.
The allegations come despite xAI – the company behind Grok Imagine – having a policy that forbids “depicting likenesses of persons in a pornographic manner.” The company has not yet commented publicly.
When testing the tool, The Verge’s Jess Weatherbed used the prompt “Taylor Swift celebrating Coachella with the boys.” While the AI initially generated a still image of Swift in a dress, switching to “spicy” mode quickly produced a video of her removing her clothes, without any request for nudity.
Gizmodo reported similar results for other high-profile women, although some attempts returned blurred videos or “video moderated” notices.
Concerns have also been raised about the platform’s compliance with new UK laws requiring robust age verification on sites offering explicit material. Weatherbed reported that Grok Imagine asked only for a date of birth, with no further checks.
Under current UK law, creating pornographic deepfakes is illegal only when the material involves children or is shared as revenge porn. However, an upcoming amendment – backed by the government but not yet in force – will criminalise the creation of all non-consensual pornographic deepfakes.
Baroness Owen, who sponsored the amendment, said the case underlined the urgency of implementing the change. “Every woman should have the right to choose who owns intimate images of her,” she said.
This is not the first time Swift has been targeted. In early 2024, explicit AI-generated images of her went viral on X and Telegram, prompting X to temporarily block searches for her name.
The UK’s media regulator, Ofcom, has said it is monitoring the risks posed by generative AI and will work to ensure platforms put safeguards in place to protect users, especially children.