AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (608 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]

[2] Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you know of a case that should be included, feel free to contact me.[3] (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report icon in the database for examples, and reach out to me for a demo!
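For the curious, here is a rough sketch of the core idea behind automated citation checking. It is purely illustrative and says nothing about how PelAIkan actually works: the regex, the function names (extract_citations, flag_suspect_citations), and the toy verified index are all hypothetical, and a real checker would rely on a proper citation parser and an authoritative citator database rather than a naive pattern match.

```python
import re

# Hypothetical illustration only (not PelAIkan's actual pipeline): extract
# reporter-style citations such as "114 A.D.3d 947" from a filing and flag
# any that are absent from a trusted index of verified citations.

# Naive pattern: volume, reporter abbreviation (possibly multi-word, e.g.
# "F. Supp. 3d"), first page. Deliberately simple and over-inclusive.
CITATION_RE = re.compile(r"\b(\d{1,4}) ([A-Z][\w.]*(?: [\w.]+)*?) (\d{1,5})\b")

def extract_citations(text: str) -> list[str]:
    """Return normalized 'volume reporter page' strings found in the text."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def flag_suspect_citations(text: str, verified_index: set[str]) -> list[str]:
    """Flag extracted citations that are missing from the trusted index."""
    return [c for c in extract_citations(text) if c not in verified_index]

# Toy usage: the index contains one genuine citation; the fabricated one
# is flagged for human review.
verified = {"114 A.D.3d 947"}  # in practice, a full reporter/citator database
brief = ("See Matter of Bourguignon, 114 A.D.3d 947; "
         "accord Smith v. Jones, 999 F.4th 1234.")
print(flag_suspect_citations(brief, verified))  # -> ['999 F.4th 1234']
```

The structure is simply "extract, look up, flag what cannot be verified"; the hard part in practice is the quality of the extraction and of the underlying index.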

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty
Plumbers & Gasfitters Union v. Morris Plumbing | E.D. Wisconsin (USA) | 18 April 2024 | Lawyer | Implied | 1 fake citation | Warning | –
Grant v. City of Long Beach | 9th Cir. CA (USA) | 22 March 2024 | Lawyer | Unidentified | Fabricated Case Law (2); Misrepresented Case Law (13) | Striking of Brief + Dismissal of Appeal | –

AI Use

The appellants’ lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin of the brief, but noted that its systematic errors were consistent with known patterns of AI-generated hallucination.

Hallucination Details

Two cited cases were completely nonexistent. Additionally, thirteen cited decisions were badly misrepresented; e.g., Hydrick v. Hunter and Wall v. County of Orange were cited for parent–child removal claims when they had nothing to do with such issues.

Ruling/Sanction

The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28-1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Federal Rule of Appellate Procedure 28(a)(8)(A), which requires arguments supported by coherent citations to authority.

Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Pro Se Litigant | Google Bard | 3 fake cases | No Sanction on Cohen (lawyer expected to verify); underlying motion denied | –

AI Use

Michael Cohen, Donald Trump's former lawyer (by then disbarred), used Google Bard to find case law supporting his motion for early termination of supervised release. He stated that he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.

Hallucination Details

Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.

Ruling/Sanction

Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.

Key Judicial Reasoning

The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification.

Martin v. Taylor County | N.D. Texas (USA) | 6 March 2024 | Pro Se Litigant | Implied | False Quotes Case Law (1); Misrepresented Legal Norm (9) | Warning | –

In a footnote, the court observed: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time."

Kruse v. Karlen | Mo. CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements | Dismissal of Appeal + Damages Awarded for Frivolous Appeal | 10,000 USD

AI Use

The appellant admitted in his reply brief that he had hired an online consultant (purportedly an attorney) to prepare his appellate filings at low cost. The consultant used generative AI, which led to the inclusion of numerous fictitious citations. Karlen denied any intent to mislead but acknowledged ultimate responsibility for the submission.

Hallucination Details

Out of twenty-four total case citations in Karlen’s appellate brief:

  • Only two were genuine (and misused).
  • Twenty-two were completely fictitious.
  • Multiple Missouri statutes and procedural rules were cited incorrectly or completely misrepresented.

Ruling/Sanction

The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.

Key Judicial Reasoning

The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.

Smith v. Farwell | Massachusetts (USA) | 12 February 2024 | Lawyer | Unidentified | 3 fake cases | Monetary Fine (Supervising Lawyer) | 2,000 USD

AI Use

In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used.

Hallucination Details

Judge Brian A. Davis noticed that certain citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.

Ruling/Sanction

After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.

Key Judicial Reasoning

The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys.

Park v. Kim | 2nd Cir. CA (USA) | 30 January 2024 | Lawyer | ChatGPT | Fabricated Case Law (1) | Referral to Grievance Panel + Order to Disclose Misconduct to Client | –

AI Use

Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case (Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)) in the reply brief, never verifying its existence.

Hallucination Details

Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014). When asked to produce the case, Counsel admitted it did not exist, blaming reliance on ChatGPT.

Ruling/Sanction

The Court referred Counsel to the Second Circuit’s Grievance Panel for further investigation and possible discipline. Counsel was also ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.

Key Judicial Reasoning

The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system.

Matter of Samuel | NY County Court (USA) | 11 January 2024 | Lawyer | Unidentified | Fabricated Case Law (1); Misrepresented Case Law (1), Legal Norm (7) | Striking of Filing + Sanctions Hearing Scheduled | –

AI Use

Osborne’s attorney, under time pressure, submitted reply papers that relied heavily on a website or tool using generative AI. The submission included fabricated judicial authorities presented without independent verification. The lawyer made no recorded admission of AI use, but the court independently verified the errors.

Hallucination Details

Of the six cases cited in the October 11, 2023 reply, five were found to be either fictitious or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco.

Ruling/Sanction

The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York’s Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.

Key Judicial Reasoning

The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession’s reputation.