AI Hallucination Cases

This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

More precisely, the database covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. It does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, it also includes some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While it seeks to be exhaustive (1,156 cases identified so far), the database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material. Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report icon in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.


Case Court / Jurisdiction Date ▼ Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Candice Dias v Angle Auto Finance Fair Work Commission (Australia) 20 January 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
United States v. Hayes E.D. Cal. (USA) 17 January 2025 Federal Defender Unidentified One fake case citation with fabricated quotation Formal Sanction Imposed + Written Reprimand

AI Use

Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.

Hallucination Details

Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.

Ruling/Sanction

The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted.

Key Judicial Reasoning

The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record.

Source: Volokh
Strong v. Rushmore Loan Management Services D. Nebraska (USA) 15 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions
Kohls v. Ellison Minnesota (USA) 10 January 2025 Expert GPT-4o Fake Academic Citations Expert Declaration Excluded

AI Use

Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.

Hallucination Details

The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.

Ruling/Sanction

Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.

Key Judicial Reasoning

The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted.

Source: Volokh
O’Brien v. Flick and Chamberlain S.D. Florida (USA) 10 January 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations

AI Use

Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”

Ruling/Sanction

The court dismissed the case with prejudice on dual grounds:

  • The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
  • O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation

While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both a substantive and a disciplinary remedy.

Key Judicial Reasoning

Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.

Mavundla v. MEC High Court (South Africa) 8 January 2025 Lawyer Implied
Fabricated Case Law (9)
Misrepresented Case Law (4), Legal Norm (2)
Leave to appeal dismissed with costs; referral to Legal Practice Council

AI Use

The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspected ChatGPT or a similar tool was the source. The judge even ran the relevant prompts through ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers.

Hallucination Details

Fabricated or misattributed cases included:

  • Pieterse v. The Public Protector (no such case exists at cited location)
  • Burgers v. The Executive Committee..., Dube v. Schleich, City of Cape Town v. Aon SA, Makro Properties v. Raal, Standard Bank v. Lethole — none found in SAFLII or major reporters
  • Citations were often invented or misattributed to irrelevant decisions (e.g., a Competition Tribunal merger approval cited as support for service rules)

The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points.

Ruling/Sanction

  • Application for leave to appeal dismissed in full
  • Legal representatives ordered to pay costs of the 22 and 25 September 2024 appearances de bonis propriis
  • Judgment referred to the Legal Practice Council
  • Judge emphasized that the conduct went beyond the leniency shown in Parker v. Forsyth, as it involved unverified submissions in a signed court filing and then doubling down during oral argument.

Key Judicial Reasoning

Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.