Published Date: 07/06/2025
The UK High Court has issued a stern warning to senior lawyers, urging urgent action to prevent the misuse of artificial intelligence (AI) after several cases were marred by fake case-law citations. These citations were either entirely fictitious or quoted made-up passages, raising serious concerns about the integrity of legal proceedings.
Lawyers are increasingly using AI systems to help build legal arguments. However, two recent cases this year were significantly affected by made-up case-law citations that were either known or suspected to have been generated by AI.
In an £89 million damages case against the Qatar National Bank, the claimants made 45 case-law citations, 18 of which turned out to be fictitious, and many of the rest contained bogus quotes. The claimant admitted to using publicly available AI tools, and his solicitor acknowledged citing the sham authorities.
In another case, Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide temporary accommodation. The law centre's lawyer cited phantom case law five times. Suspicion arose when the solicitor defending the council repeatedly had to ask why they could not find any trace of the supposed authorities.
This led to an action for wasted costs, and a court found the law centre and its lawyer, a pupil barrister, negligent. The barrister denied using AI in that case but admitted she might have done so inadvertently while using Google or Safari to prepare for a separate case in which she also cited phantom authorities. She suggested she might have relied on AI-generated summaries without realizing what they were.
In a regulatory ruling responding to these cases, Dame Victoria Sharp, the president of the King’s Bench Division, emphasized the serious implications for the administration of justice and public confidence in the justice system if AI is misused. She warned that lawyers misusing AI could face sanctions, ranging from public admonishment to contempt of court proceedings and even referral to the police.
Dame Victoria called on the Bar Council and the Law Society to consider steps to curb the problem as a matter of urgency, and urged heads of barristers' chambers and managing partners of solicitors' firms to ensure all lawyers understand their professional and ethical duties when using AI.
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” she wrote. “The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
Ian Jeffery, the chief executive of the Law Society of England and Wales, supported the ruling, stating that it “lays bare the dangers of using AI in legal work.”
“Artificial intelligence tools are increasingly used to support legal service delivery,” he added. “However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review, and ensure the accuracy of their work.”
These cases are not the first to be affected by AI-generated hallucinations. In a 2023 UK tax tribunal case, an appellant who claimed to have been helped by "a friend in a solicitor's office" provided nine bogus historical tribunal decisions as supposed precedents. She admitted it was "possible" she had used ChatGPT but argued that it made no difference, as there must be other cases that made her point.
In a €5.8 million (£4.9 million) Danish case this year, the appellants narrowly avoided contempt proceedings after relying on a fabricated ruling that the judge spotted. And in a 2023 case in the US district court for the southern district of New York, a lawyer, asked to produce the seven apparently fictitious cases they had cited, simply asked ChatGPT to summarize them; the result, according to the judge, was "gibberish." The judge fined the two lawyers and their firm $5,000.
The misuse of AI in legal work poses significant risks to the integrity of the legal system. Lawyers must remain vigilant and verify the accuracy of their work to maintain public trust and protect the administration of justice.
Q: What is the main concern raised by the UK High Court regarding AI in legal work?
A: The main concern is the misuse of AI, which has led to the creation of fictitious or inaccurate case-law citations, undermining the integrity of legal proceedings and public trust in the justice system.
Q: What actions did the High Court suggest to address the misuse of AI?
A: The High Court called on the Bar Council and the Law Society to take urgent steps to curb the problem and urged heads of barristers’ chambers and managing partners of solicitors to ensure all lawyers understand their professional and ethical duties when using AI.
Q: What are the potential sanctions for lawyers who misuse AI?
A: Lawyers who misuse AI could face sanctions ranging from public admonishment to contempt of court proceedings and even referral to the police.
Q: Can AI tools be used in legal work, and if so, how should they be used?
A: AI tools can be used to support legal work, but lawyers must check, review, and ensure the accuracy of their work to avoid the risk of incorrect outputs and fabricated citations.
Q: What are some examples of cases affected by AI-generated hallucinations?
A: Examples include an £89 million damages case against the Qatar National Bank, a case involving Haringey Law Centre, a UK tax tribunal case, a €5.8 million Danish case, and a 2023 case in the US district court for the southern district of New York.