
Analysis

People are making themselves into AI caricatures – here’s why you might want to think twice

The viral trend of using AI to make a cartoon of yourself is fun, but it is also a reminder of how we may be handing over data in ways we might not understand, writes Andrew Griffin

An AI-generated caricature of OpenAI CEO Sam Altman, created using the company’s ChatGPT tool (ChatGPT)

They’re everywhere, these past few days: caricatures of people, gently exaggerating their jobs and their interests, created with artificial intelligence. They’re fun – and they are also a potentially scary reminder of just how much AI systems are learning about us.

Actually making one of the images is simple. Open up ChatGPT or any of the other popular chatbots, give it a clear, close-up picture of yourself, and add a prompt along the lines of “create a caricature of me, using this image and everything else you know about me”. You can adjust the prompt, of course, adding specific details about your job and lifestyle to subtly change the image it generates. The chatbot will output a picture, which you can save and share as you like.
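For the more technically minded, much the same thing can be done programmatically. What follows is a minimal sketch using OpenAI’s official Python library; unlike the chat interface, the API has no memory of your past conversations, so any personal details have to be written into the prompt yourself. The file names, model choice and prompt are illustrative, and the sketch assumes the openai package is installed and an OPENAI_API_KEY is set in the environment.

    # Minimal sketch: generate a caricature from a photo via OpenAI's API.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A clear, close-up picture of yourself (file name is illustrative)
    with open("me.jpg", "rb") as photo:
        result = client.images.edit(
            model="gpt-image-1",  # OpenAI's image model; an assumption here
            image=photo,
            # The API has no memory, so personal details (hypothetical ones
            # below) must be supplied explicitly in the prompt
            prompt=(
                "Create a light-hearted caricature of the person in this "
                "photo, playing up their work as a journalist and their "
                "love of cycling."
            ),
        )

    # The image comes back as base64-encoded data; decode and save it
    with open("caricature.png", "wb") as out:
        out.write(base64.b64decode(result.data[0].b64_json))

Notably, doing it this way makes the trade explicit: a photograph of your face and a list of personal details go out over the wire in exchange for a cartoon.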

Judging by how viral the trend has become on major platforms, and the spikes visible in usage reports, the number of people who have generated at least one caricature is almost certainly in the millions. But if you haven’t “cartooned” yourself already, you might want to take a moment before you do.

One of the most common objections to the trend is that it puts unnecessary strain on the environment. And – as with all artificial intelligence, and indeed anything that relies on computers – it is true that using ChatGPT to create a cartoon consumes energy, and water for cooling. Generating images is particularly compute-intensive, so it uses considerably more energy and water than generating text.

But so does using AI at all. And while the question of how and whether it is possible to use AI ethically is an ongoing one that each person must answer for themselves, there is nothing especially damaging to the environment about taking part in the caricature trend.

But there are more specific concerns about these caricatures, and they centre on privacy. Making a cartoon of yourself means handing over data that is very useful to AI companies – a close-up image of your face and details about your life – and that raises concerns for a number of reasons.

The first is central to the way that AI works, since it runs on data. Large language models and similar systems are only as good as the data they are trained on – and so artificial intelligence companies are incentivised to collect as much of it as possible. In handing over those photos, you are providing new data that might be used in ways you can’t imagine.

If one of those images finds its way into AI training data, it could be difficult to extract – and you might never actually know it is there. But it will be, and, theoretically, your personal information could be used to create yet more images and text by someone you have never met and will never know.

The second reason the caricature trend is of concern is both much older and much newer: ads. Collecting data to target advertising has powered the internet for decades.

South Korea’s PSY in front of a caricature of himself in 2022 (AP)

But those ads are only now starting to come to chatbots – OpenAI has announced that ChatGPT will get them from this week – so advertising is perhaps a less obvious use case for the companies building such systems. Once ads are introduced, though, they become more effective and more valuable the better a system knows its users – and the more personal data these systems hold, the better they can do that.

You can see this in the pictures people share. In many cases, they will reflect aspects of their character – showing them doing their job, for instance, or enjoying their interests – because the chatbot has gathered that information from previous conversations the person has had with it.

This can be useful: it means that you don’t have to give the system context about yourself every time, for instance. And OpenAI has leaned into it, offering a 2025 wrap-up that let people see what kind of ChatGPT user they had been, and what information they had shared. OpenAI points to these kinds of helpful uses in its privacy policy, explaining that the data is collected for a host of reasons, including to “communicate with you, including to respond to your questions”, as well as to “prevent fraud, illegal activity, or misuses” of its systems.

But the same data can be useful for less helpful purposes, too. That might include advertising. It also includes, in OpenAI’s case, using it to “improve and develop” ChatGPT by feeding the data back into the system. And, as with so much about AI, there is worryingly no way to know exactly where that will go in the future – even OpenAI can’t be sure exactly how ChatGPT will use your data, since the system does so on its own.

As with so many online services, the best approach is probably one of trust and moderation: use the systems that you trust – but only trust them so far, and be careful about what information you give them.

Whether you are happy with your data being used to train the AI systems of the future – systems that might be put to both intimately personal and worryingly impersonal uses – is, in the end, a personal question.
