This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It covers all documents in which the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. It does not cover mere allegations of hallucinations, only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated material; nor does it track the (necessarily wider) universe of all fake citations or uses of AI in court filings.
While seeking to be exhaustive (18 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details |
|---|---|---|---|---|---|---|---|---|
| Re Sriram (aka Roy) | High Court (UK) | 22 October 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Warning | — | |
| AK v Secretary of State for the Home Department | Upper Tribunal (UK) | 6 October 2025 | Lawyer | ChatGPT | Fabricated Case Law (2) | Show Cause Order | — | The grounds of appeal contained at least two non-existent authorities. The judge concluded the false citations likely arose from unchecked generative-AI drafting and directed the solicitor to show cause why the conduct should not be referred to the SRA. |
| ANPV & SAPV v Secretary of State for the Home Department | Upper Tribunal (UK) | 29 September 2025 | Lawyer | Microsoft Copilot | Fabricated Case Law (2); Misrepresented Case Law (4) | Show Cause Order | — | |
| Kuzniar v General Dental Council | Employment Tribunal (UK) | 15 August 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (1); False Quotes Case Law (1); Misrepresented Case Law (1) | Tribunal declined to award costs | — | |
| MS (Bangladesh) | Immigration and Asylum Chamber (UK) | 12 August 2025 | Lawyer | ChatGPT | Fabricated Case Law (1), Exhibits or Submissions (1) | Referred to the Bar Standards Board for investigation | — | Underlying judgment (in which the hallucination had been observed) is here. |
| The Father v The Mother | High Court (UK) | 30 July 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Costs awarded against the Father (partly for relying on faked cases) | £5,900 | The Father, acting pro se, relied in written submissions on numerous authorities (particularly about ASD) that HHJ Bailey identified as not genuine and apparently AI-generated; the court treated this as part of poor litigation conduct and awarded costs against him. |
| HMRC v. Gunnarson | Upper Tribunal (Tax and Chancery Chamber) (UK) | 23 July 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (3); Misrepresented Legal Norm (1) | Warning | — | |
| Pro Health Solutions Ltd v ProHealth Inc | Intellectual Property Office (UK) | 20 June 2025 | Pro Se Litigant, Lawyer | ChatGPT | False Quotes Case Law (3); Misrepresented Case Law (6), Doctrinal Work (2) | Warning; no costs awarded for the appeal since both sides seemingly erred | — | Claimant used ChatGPT to assist in drafting his grounds of appeal and skeleton argument. The documents included fabricated citations and misrepresented case summaries. Claimant admitted to using ChatGPT and apologized for the errors. Compounding matters, the court suspected that the respondent had also used AI, since the cases cited in Counsel's skeleton, though extant, did not support any of the propositions made, and Counsel was unable to explain how they got there. |
| Ayinde v. Haringey & Al-Haroun v. QNB | High Court (UK) | 6 June 2025 | Lawyer | Unidentified | Fabricated Case Law (8); False Quotes Case Law (1); Misrepresented Case Law (1), Legal Norm (1) | No contempt, but referral to professional bodies | — | This judgment, delivered on 6 June 2025 by the Divisional Court of the King's Bench Division, addresses two cases referred under the court's Hamid jurisdiction, which concerns the court's power to enforce duties lawyers owe to the court. Both cases involve lawyers submitting written arguments or evidence containing false information, specifically non-existent case citations, generated through the use of artificial intelligence without proper verification. The Court used this opportunity to issue broader guidance on the use of AI in legal practice, raising concerns about the competence, training, and supervision of lawyers. |
| Bandla v. Solicitors Regulation Authority | High Court (UK) | 13 May 2025 | Pro Se Litigant | Google Search (Allegedly) | Fabricated Case Law (2); Misrepresented Case Law (1), Legal Norm (2) | Application for extension of time refused; appeal struck out as abuse of process; indemnity costs of £24,727.20 ordered; permission to appeal denied | £24,727.20 | **AI Use:** Bandla denied using AI, claiming instead to have relied on Google searches to locate "supportive" case law. He admitted that he did not verify any of the citations and never checked them against official sources. The court found this unacceptable, particularly from someone formerly admitted as a solicitor. **Hallucination Details:** Bandla's submissions cited at least 27 cases which the Solicitors Regulation Authority (SRA) could not locate. Bandla maintained summaries and quotations from these cases in formal submissions. When pressed in court, he admitted having never read the judgments, let alone verified their existence. **Ruling/Sanction:** The High Court refused the application for an extension of time, finding Bandla's explanations inconsistent and unreliable. The court independently struck out the appeal on grounds of abuse of process due to the submission of fake authority, and imposed indemnity costs of £24,727.20. The judge emphasized that even after being alerted to the fictitious nature of the cases, Bandla neither withdrew nor corrected them. **Key Judicial Reasoning:** The court found Bandla's conduct deeply troubling, noting his previous experience as a solicitor and his professed commitment to legal standards. It held that the deliberate or grossly negligent inclusion of fake case law, especially in an attempt to challenge a disciplinary disbarment, was an abuse requiring a strong institutional response. |
| Crypto Open Patent Alliance v. Wright (2) | High Court (UK) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); Misrepresented Case Law (1), Exhibits or Submissions (4), Legal Norm (2), Other (2) | General Civil Restraint Order (GCRO) granted for 3 years; case referred to Attorney General; costs awarded to applicants | £100,000 | **AI Use:** Dr. Wright, after beginning to represent himself, repeatedly used AI engines (such as ChatGPT or similar) to generate legal documents. These documents were characterized by the court as "highly verbose and repetitious" and full of "legal nonsense". This use of AI contributed to filings containing numerous false references to authority and misrepresentations of existing law. **Hallucination Details:** While the core issue in Dr. Wright's litigation was his fundamental dishonesty (claiming to be Satoshi Nakamoto based on "lies and ... elaborately forged documents"), the use of AI introduced specific problems. His appeal documents, bearing signs of AI creation, contained "numerous false references to authority", and his later submissions also involved "citation of non-existent authorities". This AI-driven production of flawed legal arguments formed part of his broader pattern of disrespect for court rules and process. **Ruling/Sanction:** Mr Justice Mellor granted a General Civil Restraint Order (GCRO) against Dr. Wright for a three-year period, finding that an Extended CRO (ECRO) would be insufficient given the scope and persistence of Dr. Wright's abusive litigation. The court also referred Dr. Wright's conduct to the Attorney General for consideration of a civil proceedings order under s.42 of the Senior Courts Act 1981. Dr. Wright was ordered to pay the applicants' costs of the CRO application, summarily assessed at £100,000. **Key Judicial Reasoning:** The court found "overwhelming" evidence that Dr. Wright had persistently brought claims that were Totally Without Merit (TWM), numbering far more than the required threshold. This conduct involved extensive lies and forgeries across multiple jurisdictions and targeted individuals who often lacked the resources to defend themselves. The judge concluded there was a "very significant risk" that Dr. Wright would continue this abusive conduct unless restrained, and noted his consistent contempt for court rules and processes, including his perjury, forgery, breach of orders, and flawed submissions (including those using AI). A GCRO was deemed just and proportionate to protect both potential future litigants and the finite resources of the court system. |
| Ayinde v. Borough of Haringey | High Court (UK) | 3 April 2025 | Lawyer | Unidentified | Fabricated Case Law (5); Misrepresented Legal Norm (1) | Wasted costs order; partial disallowance of Claimant's costs; order to send transcript to Bar Standards Board and Solicitors Regulation Authority | £11,000 | **AI Use:** The judgment states that the only other explanation for the fabricated cases was the use of artificial intelligence. **Hallucination Details:** Five nonexistent cases were cited. **Ruling/Sanction:** The court imposed wasted costs orders against both barrister and solicitor, reduced the claimant's recoverable costs, and ordered the judgment to be provided to the BSB and SRA. |
| Zzaman v. HMRC | (UK) | 3 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); Misrepresented Case Law (7), Legal Norm (2) | Warning | — | Plaintiff had disclosed the use of AI in preparing his statement of case. The court noted: "29. However, our conclusion was that Mr Zzaman's statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman's statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties." |
| Olsen v Finansiel Stabilitet | High Court (UK) | 25 January 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Exhibits or Submissions (2), Legal Norm (2) | No contempt, but might bear on costs | — | |
| Crypto Open Patent Alliance v. Wright (1) | High Court (UK) | 6 December 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1), Exhibits or Submissions (1); False Quotes Case Law (1); Misrepresented Case Law (1), Exhibits or Submissions (1) | No formal sanction; fabricated citations disregarded | — | **AI Use:** Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language, or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT." The Court of Appeal later declined permission to appeal (finding that "Dr Wright's grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations") and ordered him to pay costs of £100,000. |
| Mr D Rollo v. Marstons Trading Ltd | Employment Tribunal (UK) | 1 August 2024 | Pro Se Litigant | ChatGPT | Misrepresented Legal Norm (1) | Claim dismissed; AI material excluded from evidence under prior judicial order; no sanction but explicit judicial criticism | — | **AI Use:** The claimant sought to rely on a conversation with ChatGPT to show that the respondent's claims about the difficulty of retrieving archived data were false. **Ruling/Sanction:** No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence. **Key Judicial Reasoning:** The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable". |
| Harber v. HMRC | (UK) | 4 December 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (9) | No sanction on litigant; warning implied for lawyers | — | **AI Use:** Catherine Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-Tier Tribunal decisions supporting her position regarding "reasonable excuse". She stated the cases were provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI; ChatGPT was mentioned as a likely source. **Hallucination Details:** The nine cited FTT decisions (names, dates, summaries provided) were found to be non-existent after checks by the Tribunal and HMRC. While plausible, the fake summaries contained anomalies such as American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant. **Ruling/Sanction:** The Tribunal factually determined the cited cases were AI-generated hallucinations. It accepted Mrs. Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue, and no sanctions were imposed on the litigant. **Key Judicial Reasoning:** The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision finding AI-generated fake cases cited by a litigant. |
| Unknown case | Manchester (UK) | 29 May 2023 | Pro Se Litigant | Implied | Fabricated citations, misrepresented precedents | — | — | It is unclear whether any formal decision was issued on the matter; the incident was reported in the Law Society Gazette in May 2023 (source). |