AI could be changing our brains in ways we don’t even realise
Students are increasingly relying on AI, and it is making its way into our most elite universities. But might we be undermining our own ability to think, asks Andrew Griffin


Everyone is cheating. Earlier this year, research showed that the vast majority of students were relying on AI tools such as ChatGPT for their work: 88 per cent of students polled had used it for assignments, up from 53 per cent last year. (The numbers are largely similar in the US.) Anecdotally, the sheer number of people already using AI in their work becomes its own kind of justification: if everyone else is cheating, why wouldn’t you?
Because it may be making you think less, and less well – or so the research suggests, though it is still limited. Earlier this year, researchers divided participants into three groups and asked them to write an essay. Some were given help from a large language model (LLM) such as ChatGPT; some were allowed access to Google; some didn’t have any help at all. The researchers then studied the three groups in a variety of ways.
As they wrote, their brains worked differently. The more help people got, the less active parts of their brains were. Those who had been given help by AI were also worse at quoting from their own essays afterwards. The researchers cautioned that the work is early and relatively limited – and explicitly warned against using it to suggest that people were being made more stupid – but it at the very least pointed to “the pressing matter of exploring a possible decrease in learning skills” from using large language models in education.
For thousands of years, thinkers have worried that technology could undermine memory and understanding. The first of those technologies was writing itself. In Plato’s Phaedrus, Socrates warns that the written word would weaken memory, and that text might only make people seem to have knowledge rather than actually possess it.
Computers have only made those concerns more pressing. In a paper published in 2011, researchers identified the “Google effect”, in which having information readily available at our fingertips seemed to make it less available inside our heads. Even 15 years ago, researchers found that people asked to recall facts were primed to think of computers, and that expecting the information to be readily available made them less likely to actually remember it. “The internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves,” they wrote.
One big fear about the impact of AI on education is that it doesn’t feel like AI is making us stupid: using it feels like learning. In an article published in the summer, information systems researcher Aaron French noted that talking to AI “can artificially inflate one’s perceived intelligence while actually reducing cognitive effort”. He pointed to the Dunning-Kruger effect – which suggests that a little knowledge is a dangerous thing, because you feel empowered with information but don’t yet have enough of it to know what you don’t know – and warned that misusing AI can leave people stranded at that dangerous spike of confidence, what researchers have called the “peak of Mount Stupid”.
Late last month, Anastasia Berg – who teaches philosophy at the University of California, Irvine – noted that many see a divide between “illicit uses of AI” such as having it write a whole essay, and “innocent auxiliary functions” such as helping with the outline of that essay. But, she noted, deciding what to write about is an indispensable skill. “No aspect of cognitive understanding is perfunctory,” she wrote.
Still, AI is arriving in universities, whether those running them like it or not. Earlier this year, Oxford became one of a number of universities to make an official deal with OpenAI, the creators of ChatGPT, after what it said was a “successful year-long pilot”. Students get access to a special version of ChatGPT that protects data and includes other safeguards; OpenAI gets to suggest that AI is becoming more central to learning.
Much of the discussion around Oxford's embrace of AI was explicitly in the context of its students having done so already: the choice isn't whether essays get written with ChatGPT, but whether the university officially recognises that they are. "We know that significant numbers of staff and students are already using generative AI tools," noted Anne Trefethen, the University of Oxford’s Pro-Vice-Chancellor for Digital, when the project was announced. The use of AI has taken on a force of its own, and many academics suggest that it is better to teach students to use it well than to teach them without it.
"University-wide access to ChatGPT Edu will support the development of rigorous academic skills and digital literacy, so that we prepare our graduates to thrive and lead by example in an AI-enabled world," said Freya Johnston, pro-vice-chancellor for education at Oxford University. “Generative AI is also helping us to explore new ways of engaging with students, alongside our renowned face-to-face teaching and tutorial model, which emphasises critical thinking and contextual analysis."
Oxford's own rules don't prohibit the use of generative AI in research, but they require that users "remain ultimately responsible for GenAI content used in research". Researchers are told to keep "an awareness of the tools’ limitations, such as hallucinations, or social biases that may be embedded in training data, which could perpetuate misrepresentation of social categories, protected groups, or historical inaccuracies", as well as to be alert to other dangers and to be transparent about their use of the tools.
Many universities have similar rules. Earlier this year, New York Magazine – in a piece headlined “everyone is cheating their way through college”, which claimed that the technology has “unravelled the entire academic project” – reported on a student who happily flouted Columbia University’s rules against using AI without permission. Columbia, too, has a tie-up with OpenAI, it noted.
In that world, students might have to learn differently – and that might include learning how to relate to artificial intelligence. Kaitlyn Regehr, an associate professor in digital humanities at University College London, has warned that the growth of artificial intelligence should bring with it a specific kind of education about “how much of our thinking, or more specifically the development of our thinking, is acceptable to outsource”. “What is the responsibility to shift and to supplement through our education system, throughout parenting, in order to support young people?” she asked at an event earlier this year.
That could mean a project similar to PE classes in schools. “With the advent of the car, and more sedentary vocations, a boom in research around physical health was born,” she said. “And because we were not moving, because the technology did that for us, we needed to start to artificially move.
“We saw gym culture emerge, and PE class. Because people weren't moving, because technology was moving for us. I think a really helpful analogy I hope for parents [...] is a gym for the AI age. A social, emotional gym. A social, emotional PE class. What do we now need to supplement, if AI is increasingly doing things for us, and children are not having to move their minds?”