AI Hallucination Cases

This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.

More precisely, it covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. It does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated material. As an exception, the database also includes some judicial decisions where AI use was alleged but not confirmed; that inclusion is a judgment call on my part.

While seeking to be exhaustive (52 cases identified so far), the database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material. Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and catalogs 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!
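
To give a rough sense of what automated reference checking involves, here is a minimal sketch in Python. It is illustrative only and not PelAIkan's actual method; the regex pattern, function name, and sample citations are all assumptions made for the example.

```python
import re

# Illustrative sketch of automated citation checking -- not PelAIkan's
# actual method. It extracts UK neutral-citation-style strings and flags
# any that are absent from a trusted reference list.

# Matches neutral citations such as "[2023] UKFTT 1007 (TC)".
CITATION_RE = re.compile(r"\[\d{4}\]\s+[A-Z]+\s+\d+(?:\s+\([A-Za-z]+\))?")

def find_suspect_citations(text: str, known_citations: set[str]) -> list[str]:
    """Return citations found in `text` that are not in `known_citations`."""
    found = CITATION_RE.findall(text)
    return [c for c in found if c not in known_citations]

# In practice the reference list would be a full case-law index.
known = {"[2023] UKFTT 1007 (TC)"}
filing = "See [2023] UKFTT 1007 (TC) and the fabricated [2022] UKFTT 999 (TC)."
print(find_suspect_citations(filing, known))
# -> ['[2022] UKFTT 999 (TC)']
```

A real checker would of course verify more than the citation string itself (party names, dates, quoted passages), but the flag-what-you-cannot-verify structure is the same.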

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

The full database can be downloaded as a CSV file.
Last updated: 16 April 2026
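
If you work with the CSV export programmatically, a minimal sketch of loading and filtering it with pandas follows. The file name and the exact column labels are assumptions based on the table fields shown below; check them against the actual export.

```python
import pandas as pd

# Load the CSV export of the database. The file name is an assumption;
# use whatever name the download actually gives you.
df = pd.read_csv("ai_hallucination_cases.csv")

# Parse dates; unparseable values become NaT rather than raising.
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")

# Example: cases involving self-represented litigants, newest first.
# Column labels mirror the table fields on this page and may differ
# in the actual export.
pro_se = (
    df[df["Party Using AI"].str.contains("pro se", case=False, na=False)]
    .sort_values("Date", ascending=False)
)
print(pro_se[["Case", "Court / Jurisdiction", "Date"]].head())
```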

Case: Harber v. HMRC
Court / Jurisdiction: UK
Date: 4 December 2023
Party Using AI: Pro se litigant
AI Tool: Unidentified
Nature of Hallucination: 9 fake tribunal decisions
Outcome / Sanction: No sanction on litigant; warning implied for lawyers
Monetary Penalty: None

AI Use

Felicity Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-tier Tribunal decisions supporting her position on "reasonable excuse". She said the cases had been provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI; ChatGPT was mentioned as a likely source.

Hallucination Details

The nine cited FTT decisions (names, dates, and summaries were all provided) were found to be non-existent after checks by the Tribunal and HMRC. Though superficially plausible, the fabricated summaries contained anomalies such as American spellings and repeated phrasing. Some of the cited cases resembled real ones, but those real cases actually went against the appellant.

Ruling/Sanction

The Tribunal found as a fact that the cited cases were AI-generated hallucinations. It accepted that Mrs Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, for reasons unrelated to the AI issue, and no sanctions were imposed on her.

Key Judicial Reasoning

The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for both the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision to find AI-generated fake cases cited by a litigant.

Case: Unknown case
Court / Jurisdiction: Manchester (UK)
Date: 29 May 2023
Party Using AI: Pro se litigant
AI Tool: Implied
Nature of Hallucination: Fabricated citations, misrepresented precedents

It is unclear whether any formal decision was issued on the matter; the incident was reported in the Law Society Gazette in May 2023 (source).