UK High Court issues warning as lawyers increasingly use fake AI case citations
- Marijan Hassan - Tech Journalist
- Jun 11
- 2 min read
The UK’s High Court has issued an urgent warning to the legal profession after a string of high-profile cases exposed the misuse of artificial intelligence in legal research, including the submission of completely fictitious case citations.

Senior judge Dame Victoria Sharp, president of the King’s Bench Division, said the misuse of AI tools to generate legal arguments posed “serious implications for the administration of justice and public confidence in the justice system.” In a regulatory ruling on Friday, she called on legal bodies to take immediate action to prevent further abuse, warning that lawyers who misuse AI could face severe sanctions, including contempt of court or even criminal charges.
The warning follows at least two recent cases where AI-generated legal citations were presented in court. In a major £89 million damages claim against Qatar National Bank, the claimant cited 45 legal precedents, 18 of which were entirely fictitious. Many of the remaining citations contained invented passages. The solicitor admitted to using publicly available AI tools.
In a separate case, Haringey Law Centre brought a housing challenge against the London Borough of Haringey and cited five phantom cases. The council’s solicitor flagged concerns when they were unable to locate any of the cited authorities.
A judge later found the law centre and its pupil barrister negligent, opening the door to a claim for wasted costs. The barrister denied intentionally using AI but admitted she may have inadvertently done so via search engines or AI summaries.
“This isn’t just sloppy research. It’s an existential risk to the credibility of legal proceedings,” said Ian Jeffery, chief executive of the Law Society of England and Wales. “AI can assist with efficiency, but the outputs must always be rigorously verified.”
Dame Victoria Sharp emphasized that generative AI tools can produce fluent but inaccurate responses that may cite non-existent rulings or misquote real ones. She called on the Bar Council and the Law Society to address the issue as a matter of urgency and urged chambers and firms to educate staff on the ethical and professional duties related to AI use.
These are not isolated incidents. In 2023, a UK tax tribunal heard from a litigant who cited nine non-existent rulings allegedly found by “a friend in a solicitor’s office.” She later admitted it was “possible” she used ChatGPT.
Meanwhile, a €5.8 million Danish court case narrowly avoided contempt proceedings after AI-generated fake rulings were exposed by the presiding judge. In a high-profile US case last year, two lawyers were fined $5,000 for relying on fabricated citations generated and “summarised” by ChatGPT.