Elon Musk’s inhuman AI is stripping women of their most fundamental rights
Editorial: It is entirely right that Liz Kendall, the technology secretary, should put the safety of women and girls first and protect them from the ‘hyper-pornographied slop’ pumped out by the Silicon Valley giants
Despite some typically overwrought, misguided and, quite possibly, non-human discussions on X about whether the British government is about to follow Malaysia and Indonesia by banning Grok, the platform’s AI-powered assistant, such a ban feels an inherently unlikely outcome.
The present row about sexualised deepfake images is degenerating, unnecessarily, into a paranoid one about free speech. It need not be thus.
To be clear, since Elon Musk took over Twitter, renamed it X and added the Grok chatbot function, the once relatively anodyne social media platform has descended into a cesspit of every kind of racism, misogyny, social and sexual hate, misinformation, disinformation, political manipulation, election interference, pornography, conspiracy theories and much else.
It is now effectively an unmoderated and deeply depressing showcase of the worst of humanity and, indeed, the troubling extension of humanity that is artificial intelligence. It was surely only a matter of time before AI functionality would be perverted to produce images – overwhelmingly of women and children – “undressed”, or worse. It is distressing to those who find themselves the non-consensual victims of this phenomenon. They feel violated and humiliated, and no one should deride them for that.
However, “free speech absolutist” as Mr Musk claims to be, even he has his limits, and surely isn’t in favour of indecent images of children on demand. To be fair, his company has shown some willingness to face the issue. X has said that: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
The anonymity of people using Grok and X to make and publish such images is to be removed, so that they can, in principle, be pursued by the authorities and dealt with by what remains of X’s staff. It seems a stronger deterrent than the present feeble system of self-regulation via “community notes”.
If Mr Musk keeps his word, then this tide of bogus, intimate and humiliating images may start to be pushed back – as much of it is already illegal. The very creation of such images is already banned in the UK. Under the Data (Use and Access) Act 2025, it is a criminal offence to create sexually explicit images of another person without their consent, even if the image has not been shared.
The spreading of vile images was rendered unlawful by the Online Safety Act, passed in 2023. Soon, as part of the government’s strategy on violence against women and girls, the AI programs that can produce sexualised images through “nudification” will themselves be outlawed. That, in theory, would include Grok, if it were left to its current devices. It is difficult to believe that Grok would be less of a boon to mankind if it stopped stripping women of their clothes and self-esteem.
It is entirely right, therefore, that Liz Kendall, the technology secretary, should put the safety of women and girls first, and bring in new laws as soon as possible to protect them, without losing the astonishing potential for good that AI can bring. It is up to Ofcom to determine if British laws are being broken and to take the appropriate action.
However, Mr Musk, true to form, has also claimed that such moves to control obscene images are being made because “they just want to suppress free speech”. This is as absurd as it is unfair, and Mr Musk must surely see, as the X statement indicates, that CSAM should have no place on X or xAI. Beneath the outrage and overheated commentary about freedoms, action against such material commands wide support. Even JD Vance, hardly woke, is reportedly sympathetic to getting rid of “hyper-pornographied slop” online. Family values can trump free speech absolutism. Thus, there’s no need to ban Grok or X – merely to fix them and, in the interests of fairness, to treat all social media channels and AI the same way.
There is, though, a growing unease in the West, as the pace of technological change picks up, about the way children are exposed to X and other social media platforms. When infants can, in principle, access disturbing images online before they can walk, and soon enough find themselves bullied and attacked online, it seems reasonable to question the impact on the development of young minds.
Kemi Badenoch, normally a keen libertarian, is the latest mainstream leader to suggest that under-16s be prohibited from using all social media. This would be backed up by a ban on smartphones in schools. It may sound draconian, and it may prove impractical, but fortunately for all concerned, the Australian government has volunteered to be a global guinea pig in a national experiment to test the theory that the intellectual development, attention spans, behaviour and mental health of the young would be enhanced by the absence of social media.
It is analogous to restrictions on selling them alcohol, tobacco and vapes, as well as illicit drugs, and even if the ban on social media might be harder to enforce, the Australian experience will prove instructive. Meanwhile, Mr Musk should be asked to follow his own rules and get child sexual abuse material off his platforms and out of Grok, at least so far as the UK is concerned. It seems reasonable that American companies in any sector, from Ford to McDonald’s to Goldman Sachs and indeed X, should abide by the laws of the countries in which they operate.