Elon Musk’s Grok keeps creating violent and abusive images. Why can’t we stop it?
Artificial intelligence has enabled entirely new kinds of abuse – but regulators, governments and even the companies behind the new tools might be powerless to end it, writes Anthony Cuthbertson


When Ofcom posted a statement on X this week about Grok being used to undress people and sexualise children, Elon Musk’s AI chatbot responded with an image of the UK regulator’s logo in a bikini.
It was a stark demonstration of how regulatory oversight is failing to keep pace with the technology, with some fearing that artificial intelligence tools are now operating beyond meaningful control.
Grok is currently generating non-consensual sexualised images at a rate of one per minute, according to a review by AI content analysis firm Copyleaks. Separate research from the non-profit AI Forensics suggests that more than half of all AI-generated content on X now depicts adults and children with their clothes digitally removed.
Sexualised imagery is not the only harmful content produced by the AI tool, which Musk launched in 2023 in response to “woke” AI models from competitors like Google and OpenAI.
Last year, Grok caused controversy by praising Adolf Hitler, sharing antisemitic tropes and calling for a second Holocaust. Musk’s xAI – the subsidiary that operates Grok – introduced an update to prevent such extremist content, though such content continues to plague the platform.
“Non-consensual sexual imagery of women, sometimes appearing very young, is widespread rather than exceptional, alongside other prohibited content such as ISIS and Nazi propaganda – all demonstrating a lack of meaningful safety mechanisms,” Dr Paul Bouchaud, a researcher at AI Forensics, said in a statement shared with The Independent.
Musk has pledged to crack down on the abusive trend, posting on X: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” Grok itself has also been limited to subscribers of X’s paid-for premium tier.
A spokesperson for X said that the company will take action against illegal content on the platform, “including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
But many believe this approach is simply reactive, and does not address the causes of the issue. Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, has called for stricter safety guardrails to be integrated into AI tools like Grok before they launch.
“What we’re seeing with Grok is a clear example of how powerful AI image-editing tools can be misused when safety and consent aren’t built in from the start,” he said. “Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm.
“AI systems like Grok should enforce strict prohibitions on sexualized transformations, automatically block attempts involving minors, and require explicit consent before any image of a real person can be edited. Platforms must also invest in real-time detection of manipulated content, clear labeling of AI-generated images, and fast, transparent takedown processes when abuse occurs.”
This approach would see social media companies treat AI misuse as a core trust and safety issue, rather than a content moderation challenge.
Ofcom said it will launch an official investigation based on X’s response to its “urgent” request for details about “what steps they have taken to comply with their legal duties to protect users in the UK.” The European Commission has also said that it is “very seriously” looking into complaints about explicit and non-consensual images on X.
A spokesperson for Ofcom told The Independent that anyone creating or sharing non-consensual intimate images with AI could face prosecution under the Online Safety Act, which came into force last year.
Since the OSA passed into law in July, the UK regulator said it has launched investigations into more than 90 platforms, and issued a fine to an “AI nudification site” for non-compliance.
No action has yet been taken against AI tools that remove a person’s clothes and replace them with a bikini, though this could fall under the intimate image abuse section in the OSA, according to Ofcom.
The legislation states that these images include those where “all or part of the person’s genitals, buttocks or breasts would be exposed but for the fact that they are covered only with underwear.”
Musk claims the next version of Grok will achieve artificial general intelligence (AGI), meaning it would match or exceed human intelligence. That could make it even harder to govern if safety rules and regulations are not implemented and enforced effectively now.
“From [OpenAI’s] Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media,” Copyleaks CEO Alon Yamin told The Independent. “To that end, detection and governance are needed now more than ever to help prevent misuse.”