Supreme Court chief justice warns of dangers of AI after blunder involving ex-Trump fixer

Mr Roberts suggested there may come a time when conducting legal research without the help of AI is ‘unthinkable’

Graig Graziosi
Monday 01 January 2024 16:56 GMT
Supreme Court Chief Justice John Roberts discussed AI and its possible impact on the judicial system in a year-end report published over the weekend.

Mr Roberts acknowledged that the emerging technology was likely to play an increased role in the work of attorneys and judges, but said he did not expect human judges to be fully replaced anytime soon.

“Legal determinations often involve gray areas that still require application of human judgment. I predict that human judges will be around for a while," Mr Roberts wrote, according to Reuters. "But with equal confidence I predict that judicial work – particularly at the trial level — will be significantly affected by AI."

He said there may come a time when legal research will be "unimaginable" without the assistance of AI, for example, and noted that the tech could provide access to legal resources to Americans who may otherwise have been unable to afford them.

“These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system,” he wrote.

The chief justice noted that the benefits of AI wouldn't come without some risks and drawbacks.

“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law,” he said.

In addition to those risks, popular LLM chatbots such as ChatGPT and Google's Bard can produce false information, errors often referred to as "hallucinations", which means users are rolling the dice any time they trust the bots without checking their work first.

Michael Cohen, Donald Trump's former lawyer and fixer, admitted that he had used an AI tool to look up court case records, which he then gave as a list of citations to his legal team. Unfortunately for Mr Cohen, the cases he cited did not exist; they were hallucinated by the AI.

Mr Roberts noted that citing non-existent court cases in legal filings is "always a bad idea".

Given those pitfalls, Mr Roberts urged legal professionals to exercise "caution and humility" when relying on chatbots in their work.

The 5th US Circuit Court of Appeals in New Orleans seems to agree with Mr Roberts' view that AI-assisted lawyering may produce some less-than-satisfactory work. The court has proposed a rule that would require lawyers either to certify that they did not rely on AI software to draft briefs, or to certify that a human fact-checked and edited any text generated by a chatbot.
