Published Date: 09/06/2025
The High Court of England and Wales has issued a significant warning to lawyers regarding the misuse of artificial intelligence (AI) in their work. In a recent ruling that ties together two separate cases, Judge Victoria Sharp emphasized that generative AI tools like ChatGPT are not capable of conducting reliable legal research.
Such tools can produce apparently coherent and plausible responses to prompts, but those responses may turn out to be entirely incorrect. Judge Sharp noted, “The responses may make confident assertions that are simply untrue.” This does not mean that lawyers cannot use AI in their research, but they have a professional duty to verify the accuracy of such research by referencing authoritative sources before using it in their professional work.
The ruling comes at a time when there is a growing number of cases where lawyers, including those representing major AI platforms in the U.S., have cited what appear to be AI-generated falsehoods. Judge Sharp stated that more needs to be done to ensure that the guidance is followed and that lawyers comply with their duties to the court. She added that her ruling will be forwarded to professional bodies, such as the Bar Council and the Law Society, to reinforce these obligations.
In one of the cases in question, a lawyer representing a man seeking damages against two banks submitted a filing with 45 citations. Out of these, 18 cases did not exist, while many others did not contain the quotations attributed to them, did not support the propositions for which they were cited, and had no relevance to the subject matter of the application. This highlights the severity of the issue and the potential for significant legal repercussions.
In the other case, a lawyer representing a man who had been evicted from his London home wrote a court filing citing five cases that did not appear to exist. The lawyer denied using AI but suggested that the citations may have come from AI-generated summaries surfaced by search tools such as Google or Safari. Judge Sharp noted that while the court decided not to initiate contempt proceedings in this instance, that decision sets no precedent, and lawyers who fail to meet their professional obligations risk severe sanctions.
Both lawyers involved in these cases either were referred or referred themselves to professional regulators. Judge Sharp stressed that when lawyers fail in their duties to the court, the court's powers range from public admonition to the imposition of costs, contempt proceedings, or even referral to the police. This ruling underscores the importance of maintaining the integrity of legal research and the consequences of failing to do so.
As AI continues to evolve and integrate into various professional fields, including the legal profession, it is crucial for practitioners to be vigilant and adhere to professional standards. The High Court’s warning serves as a reminder that while AI can be a valuable tool, it must be used responsibly and with due diligence to avoid legal and ethical pitfalls.
Q: What is the main concern regarding AI in legal research according to the High Court of England and Wales?
A: The main concern is that AI tools like ChatGPT can produce plausible but incorrect responses, leading to the citation of non-existent or irrelevant legal cases, which can undermine the integrity of legal proceedings.
Q: What steps should lawyers take to ensure the accuracy of AI-generated research?
A: Lawyers should verify the accuracy of AI-generated research by referencing authoritative sources before using it in their professional work to ensure compliance with their professional obligations.
Q: What are the potential consequences for lawyers who misuse AI in their legal research?
A: Lawyers who misuse AI in their legal research risk severe sanctions, including public admonition, imposition of costs, contempt proceedings, or even referral to the police.
Q: How does the High Court plan to address the issue of AI misuse in legal research?
A: The High Court plans to address the issue by forwarding the ruling to professional bodies such as the Bar Council and the Law Society to reinforce the guidance and ensure compliance.
Q: What are some examples of AI-generated falsehoods in legal research mentioned in the ruling?
A: Examples include citations to non-existent cases or cases that do not contain the quotations attributed to them, do not support the propositions for which they were cited, and have no relevance to the subject matter of the application.