Campaigners have criticised the UK government for delaying the implementation of a law that would criminalise the creation of non-consensual sexualised deepfake images, following reports of widespread misuse of AI tools on social media.
The backlash intensified after users of Elon Musk’s AI chatbot Grok used it to digitally remove clothing from images; one woman told the BBC that more than 100 sexualised images of her had been generated. Grok can be accessed through its website, its app, or by tagging “@grok” on the X platform.
While it is already illegal in the UK to share deepfakes of adults without consent, legislation passed in June 2025 to make creating or commissioning such images a criminal offence has yet to be brought into force. Experts say this delay leaves a legal gap that allows harmful content to be generated with little consequence.
Andrea Simon, director at End Violence Against Women (EVAW), said the government’s inaction “puts women and girls in harm’s way.” She added that non-consensual sexual deepfakes violate women’s rights, cause long-term trauma, and can force victims to self-censor online, limiting their freedom of expression.
On Tuesday, Technology Secretary Liz Kendall called on X to act urgently, describing the situation as “absolutely appalling.” Regulator Ofcom confirmed it has contacted X and its developer xAI and is investigating the matter. Both Kendall and Downing Street have backed regulatory intervention, with the Prime Minister’s office stating that “all options remain on the table.”
The Ministry of Justice said that sharing intimate images without consent is already an offence and emphasised that the government has introduced legislation to criminalise their creation. Legal experts note that while current law covers so-called revenge porn and images depicting children, the Data (Use and Access) Act 2025 also criminalises commissioning deepfakes, though that provision has not yet been brought into force.
Conservative peer Baroness Owen, who campaigned for the law, said the government has “repeatedly dragged its heels” and called for urgent action. Cross-bench peer Baroness Beeban Kidron added that rapid technological advances make it imperative to enforce the law without delay.
Several women told the BBC of the personal impact of AI-generated sexualised images. One user, Evie, said the images made her feel “disgusted” and forced her to limit her online presence. Dr Daisy Dixon described the images as leaving her feeling “humiliated,” comparing the repeated tagging of altered images to a form of assault.
X said anyone using Grok to create illegal content would face the same consequences as if they had uploaded illegal material, and that it removes content and suspends accounts when necessary.
Campaigners say urgent enforcement is needed to protect women and girls from AI-assisted abuse and to prevent social media platforms from becoming tools for harassment.