
IN FOCUS

How AI deepfakes are humiliating teachers and pupils in British schools every day

Beyond Grok, nudifying technology and pornographic deepfakes are ripping through classrooms, leaving teachers and children traumatised and schools unsure how to respond. With suspensions rising and prosecutions rare, Chloe Combi asks: who is protecting them in the age of AI?


Twelve years ago, a friend of mine had just started her first teaching post at a well-respected grammar school. A huge perk of the job was that the school owned several houses that staff could rent at heavily subsidised rates, so that young teachers could afford to live in the South East and get to school easily.

One Saturday morning during a heatwave, there was a knock at the door. The house was inhabited mostly by young to mid-twentysomethings and had a work-hard/play-hard atmosphere, so an early knock on a Saturday was both unwelcome and out of the ordinary.

At the door stood the deputy head and the chair of governors. It was immediately obvious this was not a cheerful breakfast visit. A pornographic video had been circulating among students, who were claiming that my friend was the woman in it. She was asked to sit through a few minutes of the incriminating footage and, to her horror, the adult film star bore a striking resemblance to her.

It wasn’t her, and she was able to prove this beyond reasonable – though humiliating – doubt. But the damage was done. The shame was too much, and the burden of it fell heavily on her. She handed in her resignation and left a job she loved and was good at, despite having done absolutely nothing wrong.

Today, she works in recruitment in Australia. Reflecting on the experience, she says: “I don’t think I’ll ever quite get over it. Ultimately, the school was very apologetic and supportive, but how can you go back and work in a place where kids and staff have been discussing whether that was you in Gangbangs 3?”

Fast forward to 2026, and AI deepfake technology and “nudifying” apps are ripping through schools like a virus, making it ever harder to distinguish what is real from what is fake. The technology that makes this possible has existed for some time, but this week the AI feature Grok, on Elon Musk’s platform X (formerly Twitter), was used by millions to generate non-consensual pornographic images and videos – predominantly of women and girls.

The prompts being fed to Grok were not along the lines of “make her bikini top fall off Carry On-style”, but commands such as “chain her to the bed”, “make her look scared”, “make her beg and whimper”, and far worse. At one point, Bloomberg analysis found that Grok was generating around 6,700 “sexually suggestive and nudifying” images per hour.

Millions have generated non-consensual pornographic images and videos on Elon Musk’s X AI feature Grok (PA Archive)

In response, Ofcom has written to Elon Musk on an emergency basis in an attempt to stem the flow of indecent images and videos. Musk later posted a statement saying that anyone who asks the AI to generate illegal content would “suffer the same consequences” as if they had uploaded it themselves, but this came after he had publicly joked about some of the outputs.

A spokesperson for X said the company would take action against illegal content on the platform, “including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary”.

But many believe this approach is purely reactive. Critics have also pointed out that restricting access to the nudifying feature to paying X subscribers meant Musk was, in effect, profiting from other people’s – mostly women’s – pain and humiliation.

As Laura Bates, the author of The New Age of Sexism: How the AI Revolution Is Reinventing Misogyny, has been warning for years, the technology to create uncannily realistic pornographic images and videos capable of causing profound personal and professional harm is now widely available and poorly regulated.

Casey*, 29, teaches at a private school in the West Country. On returning from the summer break, she discovered that a student had digitally inserted her into a video with the porn performer Riley Reid. The result was convincing, and deeply traumatising.

She says: “The two governors were older men who clearly had no understanding of what AI is, let alone what it can do. The tone they took was that they thought I’d done it. I had to spend my own time and money proving the video was fake. The boys who did it got a three-day suspension, the same punishment as vaping. I got no apology, and was told the video had spread across multiple school group chats.”

Still carrying the shame and trauma of the ordeal, like my friend 12 years ago, Casey is now considering leaving teaching altogether.

Posed by a model: British school girls are being inserted into pornographic images and videos by their classmates (Getty Images)

Hannah*, 27, a teacher in London, had a similar experience. Students used deepfake technology to create explicit images of her, which were circulated around the school. In her case, senior management acted decisively: the police were called, one Year 11 pupil was permanently expelled, and criminal proceedings may follow. With Hannah’s consent, the school used the incident as a teachable moment, holding assemblies on online ethics and the law.

Thousands of children in the UK have now been inserted into pornographic images and videos, amounting to an outbreak of AI-generated child sexual abuse material – a nightmare for children, parents and teachers alike. While around 96 per cent of deepfake sexual content targets women and girls, boys are not immune either. One friend’s 12-year-old son was placed into a gay pornographic video by peers who had been bullying him for more than a year.

“It was such a new situation that the school had no idea what to do,” his father says. “I insisted the school call the police, even though they were reluctant. My son is so traumatised he’s refused to go back to school, and my wife is now considering leaving her job to homeschool him. The law just isn’t clear or swift enough, and people don’t understand how devastating deepfakes are for victims.”

Charlotte*, 15, agrees. She was placed into an extremely violent pornographic video by her classmates. As in Casey’s case, the perpetrators received only a three-day suspension, and the school discouraged her from pursuing legal action. Living in foster care with limited adult support, she found the experience only intensified her sense of isolation.

“I had to go back to class with the boys who did it, and everyone laughed at me,” she says. “I felt dirty, even though I hadn’t done anything wrong. Some days I don’t want to get out of bed.”

Data released by the Internet Watch Foundation in November 2025 shows reports of AI-generated child sexual abuse material more than doubled year-on-year, rising from 199 reports in 2024 to 426 in 2025, including a nine-fold increase in images depicting infants aged 0–2.

In October 2024, Hugh Nelson, 27, of Bolton, was sentenced to 18 years in prison after using AI and image-manipulation software to create sexual abuse images of real children. The court heard he exchanged and sold the images in encrypted online chatrooms, accepted commissions, and made around £5,000 from doing so.

Hugh Nelson was sentenced to 18 years in prison after using AI and image-manipulation software to create sexual abuse images of real children

This was the UK’s first major conviction for generating AI or computer-generated child sexual abuse content, and those on the frontline of child protection services believe millions of such images could already exist without their subjects ever knowing. Constantly playing catch-up with the developing technology, governments are working with the AI industry and child protection organisations to ensure AI models cannot be misused to create synthetic child sexual abuse images. Possessing and generating child sexual abuse material, whether real or synthetically produced by AI, is already illegal under UK law, but ever-improving AI image and video capabilities present a growing challenge.

On 16 April 2024, the Ministry of Justice announced that individuals who create sexually explicit deepfakes will face prosecution under a new criminal offence. However, there is a major loophole. The organisation End Violence Against Women explains their reservations: “We are concerned that the threshold for this new law rests on the intentions of the perpetrator, rather than whether or not the victim consented to their images being used in this way.

“The government’s announcement indicates that creating a sexually explicit deepfake will be a criminal offence even if they have no intent to share it but purely want to cause alarm, humiliation or distress to the victim.” This grey area is already being exploited, with perpetrators claiming they did it ‘accidentally’, that they thought the victim would find it ‘funny’ or ‘a bit of a joke’, or even that they themselves were victims of peer or outside pressure.

We live in an age where dehumanisation, exploitation and victimisation are treated as par for the course, or simply the cost of doing business, by many industry and world leaders. And if Musk views all this with a giggle and a shrug, it hardly sets a good example to teenagers who think making explicit AI pictures of classmates is also a bit of a laugh.

Vic Goddard, the chief executive officer of the Passmores Cooperative Learning Community, who shot to fame in the hit series Educating Essex, made a fascinating observation. His school, like many others, has gone completely phone-free, and the benefits have been pronounced: digital bullying fell by 85 per cent in the first week.

However, in making schools phone-free he also believes “schools have essentially outsourced digital problems kids are having because we’re no longer dealing with them in the school day. But this doesn’t mean they’ve gone away. AI problems and pornographic content and addiction remain, and many parents aren’t aware or struggle to deal with it.”

He adds: “We better hope the government comes up with some legal guardrails quickly, because if we’re relying on Big Tech to do the right thing and protect kids, it simply isn’t going to happen.”

*Some names have been changed
