
‘I feel violated and dehumanised after X’s Grok AI stripped me naked’

Women say ‘objectifying’ pictures are being created of them without their consent – and accuse tech companies like Elon Musk’s as well as the government of not doing enough to protect them


Women say they have been left feeling “violated and humiliated” after “dehumanising” images of them have been created by users of Grok AI without their consent.

The pictures, created by X’s in-built artificial intelligence, show women with their clothes digitally removed, either entirely or partially, such as being put in a bikini; with their body parts digitally altered, including their breasts being enlarged; and placed in sexualised contexts by Grok.

Evie, 22, who did not wish to share her surname, said she has been bombarded with more than 100 sexualised images of herself in less than a week, including one that digitally stripped her naked.

“I just feel like I’ve been violated,” she told The Independent. “I’m just so shocked there are people out there who can do this – and that there are so many people who will defend it and come up with excuses for this when it’s blatantly a huge violation.”

Among the digitally altered pictures of herself that Dr Daisy Dixon said she has been confronted with is an especially “frightening” one in which the Grok user “increased my breast size by like 1,000 per cent”.

Dr Dixon, a lecturer, said: “To even just be put in a bikini and having my image manipulated like that feels like an attack on your sense of self, there’s a kind of violence to it. You don’t have ownership over your body – it’s objectifying.”

She highlighted the additional “power move” being deployed, which is that Grok users are not only digitally altering images but then posting them back to their victims.

Technology secretary Liz Kendall has now demanded that Elon Musk’s xAI urgently deal with its chatbot being used to create the “absolutely appalling” sexualised deepfake images. Her calls come after media watchdog Ofcom said on Monday that it had made “urgent contact” with the technology company after serious concerns were raised over Grok producing undressed images of people and sexualised images of children.

The technology secretary has demanded that Elon Musk’s xAI urgently deal with its chatbot being used to create the ‘absolutely appalling’ sexualised deepfake images (AFP/Getty)

Musk has been accused of mocking the issue after he reposted an image of a toaster with a digitally added bikini, captioned “Grok can put a bikini on everything”, replying: “Not sure why, but I couldn’t stop laughing about this one,” alongside laughing emojis.

A day later, on Saturday, Musk said: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” X reposted this and added: “We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”

The company referred The Independent to this post in response to our request for comment.

In responses from X seen by The Independent, women have been told “there were no violations of the X rules in the content you reported” after they complained about degrading images of themselves that have been created by the company’s AI. Some of this content has since been removed for unclear reasons.

Dr Daisy Dixon called this yet ‘another issue of misogyny’ as she joined calls for immediate action to be taken to tackle it (Dr Daisy Dixon)

As of Tuesday morning, Jessaline Caine, 25, reported that Grok was still complying with requests to digitally alter a photo of her, fully dressed as a three-year-old girl, putting her in a string bikini and appearing to give her breasts. “It just disgusted me so much... This is capable of being used as a tool of destruction,” said Ms Caine, who is a child sexual abuse survivor. “My heart really reaches out to those poor girls and boys who are in the situation I was in, and how a new layer of evil has been added to their lives.”

Ms Caine, who works in planning, said she has also faced countless versions of her social media profile picture that have been digitally altered by Grok users to put her in a string bikini. “It’s so dehumanising,” she said. “Women are being objectified and sexualised... You’re being humiliated online.” Speaking of the UK’s Online Safety Act, which pledges to provide protections for women and girls online, she said: “I don’t feel protected, I feel embarrassed.”

When asked if she feels at all protected by technology companies or the authorities, Evie, who is a photographer, replied, “Not at all,” as she accused firms of “prioritising themselves over the safety of women and users”. Dr Dixon called this yet “another issue of misogyny” as she joined calls for immediate action to be taken to tackle it.

Evie, 22, said she has been bombarded with more than 100 sexualised images of herself in less than a week (Evie)

Women’s rights campaigners, including Refuge, Women’s Aid and Womankind Worldwide, have said they are “deeply concerned” by the reports, warning that the “disturbing” rise in AI intimate image abuse, facilitated by platforms such as Grok, has “dangerous” consequences for women and girls, including to their safety and mental health. They are consequently calling for technology companies to implement effective safeguards and prevent perpetrators from causing harm – and for the government to hold them to account.

Emma Pickering, head of technology-facilitated abuse and economic empowerment at charity Refuge, said: “As technology evolves, women and girls’ safety depends on tighter regulation around image-based abuse, whether real or deepfake, as well as specialist training for prosecutors and police. Women have the right to use technology without fear of abuse, and when that right is violated, survivors must be able to access swift justice and robust protections.”

Under the Online Safety Act, social media firms must prevent and remove child sexual abuse material when they become aware of it. However, the law surrounding deepfakes of adults is more complicated. Legislation to criminalise the sharing of non-consensual deepfake images has progressed through parliament but not yet come into effect, according to Refuge.

Meanwhile, the charity says the sharing of real intimate images without consent is already illegal, but in practice, this law is not being effectively enforced.

Jessaline Caine, 25, says ‘women are being objectified and sexualised’ (Jessaline Caine)

A government spokesperson said: “Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society. That is why we have legislated to ban their non-consensual creation, ensuring that offenders face the appropriate punishments for this atrocious harm.

“We have also made intimate image abuse and cyberflashing priority offences under the Online Safety Act, including where images are AI-generated. This means platforms must prevent this content from appearing online in the first place and swiftly remove it if it does.”

An Ofcom spokesperson said: “We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.”
