Inside the Chinese AI labs and how our chatbot secrets could be weaponised
Few had heard of DeepSeek before this week – then the Chinese AI startup wiped billions off the stock market and sent shockwaves through Silicon Valley. With an AI arms race well and truly underway, Io Dodds finds out what is really worrying the experts in San Francisco.
It’s not often that nearly $600bn (£483bn) gets wiped off the share value of a corporation in a single day. In fact, it has never happened before. Such was the dubious honour of Nvidia, an AI-focused chipmaker and the world’s most valuable company, when it became the biggest casualty of a $1 trillion stock market wipeout on Monday. The reason? A breakout chatbot app from a Chinese AI startup called DeepSeek, which claimed, with its new “R1” model, to have rivalled the performance of the most advanced American models at a tiny fraction of the cost.
What’s more, R1’s “weights” – the constellation of statistical connections that defines its understanding of the world – were published with an open-access licence, meaning anyone with the requisite hardware can run their own version.
Coming on the heels of several other recent advances by DeepSeek and other Chinese companies, including TikTok’s owner ByteDance, it set off a feverish reaction among some US tech leaders. “Deepseek R1 is AI’s Sputnik moment,” said venture capitalist Marc Andreessen. “[We’re] in panic mode,” said one anonymous AI worker at Meta, which owns Facebook and Instagram.
“[Chinese] companies have consistently been behind the US state of the art [in the AI field] by maybe six months to a year,” Noah Jacobson, a corporate AI researcher in San Francisco who previously worked for Amazon, explains. “I feel like DeepSeek is the first time that China has produced a model that appears to be on par.”
That is worrying news for US incumbents such as OpenAI, the maker of ChatGPT, which have bet heavily that the sheer expense of making and running AI models would keep their competitors at bay.
But for the rest of us, it raises privacy and cyber-espionage fears akin to, or perhaps worse than, those around TikTok. It has also accelerated the ongoing tech war between China and the US – a high-tech arms race that could heighten all the dangers that AI critics and doomsayers have been warning about.
“We really don’t know how to get these machines to reliably follow our instructions,” says Gary Marcus, a neuroscience professor and AI entrepreneur who has become a prominent sceptic of the industry’s more grandiose promises.
“And so that does pose a bunch of risks, especially as we hook these [systems] up to more and more things... OpenAI’s tools may [soon] power military decisions and that’s just asking for trouble.”
‘AI proliferation is now guaranteed’
While many people will only have heard of DeepSeek this week, industry insiders have been watching the company with interest for some time. In fact, many of the AI experts I talked to for this article expressed surprise at the scale of this week’s drama even as they acknowledged the company’s achievement.
Founded in 2023 by hedge fund manager Liang Wenfeng, DeepSeek hopes – like OpenAI and its rival Anthropic – to eventually build an AI capable of matching or surpassing humans at any task, known as artificial general intelligence (AGI).

DeepSeek’s big innovation has been figuring out clever engineering hacks to achieve high-end results on second-rate hardware. The most coveted AI chips are made by Nvidia, but US export controls imposed in 2022 – rules designed to fortify the country’s lead in AI – forbid the sale of its most advanced chips to China.
Yet these export controls seem only to have spurred DeepSeek into squeezing more juice from lesser chips, rather than actually impeding it (though some US tech bosses accuse it of secretly using smuggled top-line silicon). The real shock, according to AI-focused venture capitalist Nick Davidov, was when DeepSeek’s app overtook ChatGPT and shot to the top of the iPhone App Store chart. It showed that a little-known foreign company could not only innovate technologically but actually break through to American consumers.
Just like TikTok, however, DeepSeek is required to hand over users’ data to Chinese security services upon request. The app tracks your every keystroke and stores it on Chinese servers, and was recently found to have left users’ data exposed to the open internet.
That matters because the details we share with AI assistants can be incredibly personal. We use them as therapists, doctors, proofreaders, executive assistants, work buddies, friends, or even lovers.
“It’s a huge concern,” Davidov says. “TikTok gets stuff that is valuable, but it’s not nearly as valuable as what people write to their chatbots. That’s much worse. [Software] engineers will upload all their source code into these apps just to try to fish out a bug more effectively.” If that coder works for, say, a power plant, or another piece of critical infrastructure, that would give a hostile government insight into how to hack it in future.

It helps that R1 can easily be copied and hosted by companies in other countries that have no relationship with China. But the overall question of safety becomes even more pressing as DeepSeek’s advances prove that less sophisticated models can be tuned up into more advanced “reasoning” models with the right data. Because R1 is open-weight, its methods can be copied by anyone, unchecked and unregulated.
“DeepSeek means AI proliferation is guaranteed,” wrote Jack Clark, co-founder of the OpenAI rival Anthropic, on Monday. “AI capabilities worldwide just took a one-way ratchet forward.”
In the short term, Marcus argues that this proliferation will make it even harder to trust what you read or watch online. As the price of AI-generated deepfakes and social media bots crashes towards zero, so do the operating costs of cybercriminals, propagandists, and for-profit bulls*** merchants.
On a broader level, many experts are deeply sceptical that AI will attain superhuman intelligence and independence any time soon. But an arms race between two powerful nations with authoritarian leaders could certainly make those dangers more likely. The same is true of problems that are already here, such as biased, unaccountable algorithms or alleged mass plagiarism.
Indeed, many Silicon Valley AI bulls – including “accelerationists” who want to unleash AI to conquer the stars in humanity’s name – see DeepSeek’s success as a sign that the US must step back altogether from regulation and safety concerns in order to win the tech war.

“The US has unnecessarily shot itself in the foot with an excessive focus on AI safety,” Dmitry Shevelenko, chief business officer of Perplexity, says. “The thing that matters is having the best AI, and having the best people, and prioritising that above all else.”
Shevelenko also cites the constraints placed by many US chatbot makers on discussions about sensitive topics. “If you’re prioritising an AI that won’t offend anyone, you’re actually inherently limiting its capabilities,” he says. “This is a moment to focus on absolute performance.”
Winners and Losers
For now, the biggest losers are America’s AI incumbents: OpenAI, Anthropic, Microsoft, and, to a lesser extent, Google and Meta, which over the past several years have gravitated towards an orthodoxy that now looks shaky.
The idea, says Jacobson, was that the AI race would be won through sheer scale: by training increasingly powerful (and closed-weight) models on ever-increasing stockpiles of specialist hardware. In theory, the eye-watering cost of this expansion would form what Silicon Valley calls a “moat”: the thing that stops another company from simply copying your ideas.
“In the early days, people thought that there would be one dominant foundation model company with significant capabilities lead over its competition; many suspected this could be OpenAI,” says Jacobson. “For the big companies, a winner-takes-all scenario could be life-threatening, whereas investing tens of millions would be comparably insignificant.”
This year alone, Meta plans to spend at least $60bn (£48.3bn) on AI infrastructure, while Microsoft plans $80bn (£64bn). The White House just announced a $500bn (£402bn) joint venture between OpenAI, Oracle, and Japanese mega-investor SoftBank. In the US, the resulting demand for electricity is destabilising the power supply to ordinary homes, swelling tech giants’ emissions, and spurring the development of new nuclear power plants.

But some experts were already sceptical of this gold rush. Critics such as Marcus have long questioned whether OpenAI is really the best at advancing AI science, as opposed to merely the best at raising money.
“The dirty secret is that for a lot of us [longtime AI engineers and scientists], when we look at OpenAI, we feel that it’s really, really lazy engineering,” says Gur Kimchi, a veteran AI and robotics entrepreneur who once ran Amazon’s drone delivery programme.
DeepSeek has shown everyone how to do nearly as much with far less. “I think the world gave too much power on promises to companies like OpenAI and individuals like Sam Altman. They haven’t really delivered,” Marcus says, adding: “DeepSeek kind of ripped the bandage off, and made it clear that maybe the emperor doesn’t have all the clothes that people have been attributing to the emperor.”
That doesn’t make scale irrelevant: as Davidov points out, decreasing the price of a service can radically increase the demand for it by widening the pool of people able to access it (a dynamic known to economists as the Jevons paradox). And if you can run AI more efficiently on a second-rate chip, wouldn’t it be even better to do it on a first-rate chip?
What it does do, Davidov and Shevelenko both agree, is hasten the “commoditisation” of AI. A world in which synthetic workers are available to almost anyone for pennies would be radically different from our own. Some have argued that it would make much of the global workforce permanently obsolete, whereas Davidov believes that in the long term it will simply free people up to do new and different jobs. “Everyone is gonna have their personal coach, their personal psychotherapist, their personal family doctor,” he says.
That is as long as duelling Chinese and American AIs don’t accidentally take us back to the Stone Age.