AI Hallucination Cases

This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. By "legal decisions" I mean all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. The database does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.

Notably, it does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, it also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (721 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material. Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report icon in the database for examples, and reach out to me for a demo!
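PelAIkan's implementation is not described here, but the core idea of automated reference checking can be sketched. The snippet below is purely illustrative: the `KNOWN_CASES` set and the regex are hypothetical stand-ins, not PelAIkan's actual method. It extracts citation-like strings from a brief and flags any that cannot be matched to a known case:

```python
import re

# Hypothetical stand-in for a real citation database (e.g. a reporter index);
# a production checker would query an authoritative legal research source.
KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483",
}

# Deliberately rough pattern for "Plaintiff v. Defendant, <volume> <reporter> <page>".
CITATION_RE = re.compile(
    r"[A-Z][\w.&'-]*(?: [A-Z][\w.&'-]*){0,3}"  # plaintiff: run of capitalized words
    r" v\. "
    r"[A-Z][\w.&'-]*(?: [\w.&'-]+){0,4}"       # defendant: word run up to the comma
    r", \d+ [A-Z][\w. ]*? \d+"                 # volume, reporter abbreviation, page
)

def find_suspect_citations(text: str) -> list[str]:
    """Return citation-like strings that cannot be matched to a known case."""
    return [c for c in CITATION_RE.findall(text) if c not in KNOWN_CASES]
```

A real checker would of course resolve citations against an authoritative database and verify the quoted language, not merely the citation's existence.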

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Case Court / Jurisdiction Date Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Baptiste v. Baez Cal. CA (USA) 18 July 2025 Pro Se Litigant Implied Fabricated citation Warning
Rescore Hollywood, LLC v. Samules Cal. (USA) 18 July 2025 Pro Se Litigant Implied Fabricated citations Arguments ignored
Hatfield v. Ornelas (USA) 16 July 2025 Pro Se Litigant Unidentified Fabricated citation(s) Order to Show Cause
Augustin v. Formula 3 Brooklyn Inc. Supreme Court, Kings County, NY (USA) 16 July 2025 Pro Se Litigant Unidentified
Fabricated Case Law (3)
Misrepresented Case Law (2)
Defendants ordered to comply with AI rules; potential financial sanctions pending hearing.
ByoPlanet International v. Johansson and Gilstrap S.D. Florida (USA) 15 July 2025 Lawyer ChatGPT
Fabricated Case Law (9)
False Quotes Case Law (5)
Misrepresented Case Law (1)
Cases dismissed without prejudice, attorney ordered to pay defendants' attorney fees, referred to Florida Bar. 85567 USD

In May, the court asked Counsel to show cause why they should not be sanctioned for filing briefs with hallucinations - especially since they continued filing hallucinated submissions after being warned about it.

In their Answer, Counsel revealed that "specific citations and quotes in question were inadvertently derived from internal draft text prepared using generative AI research tools designed to expedite legal research and brief drafting".

In the Order, the court noted that Counsel "was not candid to the Court when confronted about his use of AI, stating that some of these documents were “prepared under time constraints,” when he had nearly two more weeks before the deadline to submit his responses." The judge was also unimpressed by Counsel's attempt to shift the blame to a paralegal.

Finally, in the fee dispute order, the court cited this database to point out that its approach to award costs and fees was appropriate, especially given the egregiousness of the claimant's conduct in this case. The judge also took into account "the significant, if indirect, monetary losses that may arise from nonmonetary sanctions in other cases, such as loss of business and loss of reputation, or the monetary loss borne by a client when a motion or even an entire case is adversely decided due to counsel’s misuse of AI."

Woodrow Jackson v. Auto-Owners Insurance Company M.D. Georgia (USA) 14 July 2025 Lawyer Unidentified
Fabricated Case Law (9)
Monetary sanction; CLE requirement; Adverse Costs Order 1000 USD

Plaintiff's Counsel cited nine non-existent cases in a response to a motion to dismiss, which were generated using AI software. The court found this to be a violation of Rule 11, as the citations were not checked for accuracy. Counsel admitted the error, apologized, and explained the circumstances, including staff transitions and the use of AI. The court imposed a $1000 sanction, required Mr. Braddy to attend a CLE course on AI ethics, and ordered reimbursement of Defendant's attorney fees and costs.

Kessler v. City of Atwater E.D. California (USA) 11 July 2025 Lawyer Unidentified
Fabricated Case Law (6)
False Quotes Case Law (4)
Misrepresented Case Law (8)
Order to show cause issued for potential sanctions
In re Marriage of Haibt Colorado CA (USA) 10 July 2025 Pro Se Litigant Implied Fabricated citations Warning
Gurpreet Kaur v. Captain Joel Desso N.D. New York (USA) 9 July 2025 Lawyer Claude Sonnet 4
False Quotes Case Law (4)
Monetary and professional sanctions 1000 USD

Counsel admitted to having used Claude Sonnet 4 to draft a legal submission, which included fabricated quotations from legal authorities. Counsel said he was pressed for time.

After holding that there is "no reason to distinguish between the submission of fabricated cases and the submission of fabricated quotations from real cases. In both postures, the attorney seeks to persuade the Court using legal authority that does not exist", the court held that Mr. Desmarais had violated Rule 11 of the Federal Rules of Civil Procedure by failing to verify the accuracy of the AI-generated content. Counsel was found to have acted in subjective bad faith, as he was aware of the potential for AI to hallucinate legal citations and failed to take corrective action even after the government pointed out the errors.

The court imposed a $1,000 monetary sanction and required Counsel to complete a CLE course on the ethical use of AI in legal practice and notify his client of the issue.

Foster Chambers v. Village of Oak Park 7th Circuit CA (USA) 9 July 2025 Pro Se Litigant Implied Fabricated citations, false quotes, misrepresented precedents Order to show cause
In re: Turner Iowa Attorney Disciplinary Board (USA) 9 July 2025 Lawyer Implied
Fabricated Case Law (1)
Motion stricken
Coomer v. Lindell/MyPillow, Inc. (1) D. Colorado (USA) 7 July 2025 Lawyer Co-Pilot, Westlaw’s AI, Gemini, Grok, Claude, ChatGPT, Perplexity
Fabricated Case Law (4)
False Quotes Case Law (2)
Misrepresented Case Law (5)
Monetary Sanctions 6000 USD

Prior Order to Show Cause available here.

After reviewing - and dismissing - the factual allegations made by Counsel, and noting that they had submitted errata in parallel cases (dealing with other fabricated citations), the court swiftly concluded that they "have violated Rule 11 because they were not reasonable in certifying that the claims, defenses, and other legal contentions contained in Defendants’ Opposition to Motion in Limine [Doc. 283] were warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law."

Both counsel were sanctioned with a 3,000 USD fine, payable to the court.

Smith v. Gamble Ohio CA (USA) 7 July 2025 Pro Se Litigant Implied Fabricated citation(s) Sanctions granted against Father (TBD) 1 USD
Source: Robert Freund
Wright Brothers Aero, Inc. B-423326.2 GAO (USA) 7 July 2025 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Warning
In re Eugene Ezra Perkins D. Oregon (Bankruptcy) (USA) 7 July 2025 Pro Se Litigant Unidentified
Fabricated Case Law (2)
Misrepresented Case Law (2)
Case dismissed; motion to alter or amend denied
Sharita Hill v. State of Oklahoma W.D. Oklahoma (USA) 3 July 2025 Lawyer Implied
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Case Law (1)
Warning

"Further, these inaccuracies signal that Plaintiff's counsel may have used AI to assist in the drafting of Plaintiff's Response (or otherwise counsel produced exceptionally sloppy work). In this regard, this Court's Chambers Rules include “Disclosure and Certification Requirements” for use of “Generative Artificial Intelligence” and expressly provide that an attorney or party must disclose in any document to be filed with the Court “that AI was used and the specific AI tool that was used” and to “certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.” See id. No such disclosure and certification has been made in this case. The Court's Rules further provide that an attorney will be responsible for the contents of any documents prepared with generative AI, in accordance with Rule 11 of the Federal Rules of Civil Procedure, and that the failure to make the disclosure and certification “may result in the imposition of sanctions.”"

Matter of Sewell Properties Trust Colorado Court of Appeals (USA) 3 July 2025 Pro Se Litigant Implied
Misrepresented Case Law (2), Legal Norm (2)
Warning

The court noted that:

"both Lehr-Guthrie's and McDonald's briefs are replete with errors in their citations to case authority, such as repeated citation errors, references to nonexistent quotes, and incorrect statements about the cases (for instance, as noted above, the two cases McDonald cited for a proposition relating to the duty of impartiality don't even reference that duty). This suggests to us that the briefs may have been drafted with the use of generative artificial intelligence (GAI). “[U]sing a GAI tool to draft a legal document can pose serious risks if the user does not thoroughly review the tool's output.” Al-Hamim v. Star Hearthstone, LLC, 2024 COA 128, ¶ 32. Self-represented litigants must be particularly careful, as they “may not understand that a GAI tool may confidently respond to a query regarding a legal topic ‘even if the answer contains errors, hallucinations, falsehoods, or biases.’ ” Id. (citation omitted). We advise the parties that errors caused by GAI in future filings may result in sanctions. See id. at ¶ 41."

The court warned that future errors caused by AI could result in sanctions.

Tyrone Walker v. Juliane Pierre Massachusetts CA (USA) 3 July 2025 Lawyer Implied Fabricated citations Struck from the record

"In a prior order, we struck portions of Walker's brief that included citations to nonexistent cases. We note that the arguments raised would not have changed the outcome of the appeal in any event. "

Muhammad v. Gap Inc. S.D. Ohio (USA) 3 July 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (3)
False Quotes Case Law (4)
Order to Show Cause

"Compounding the problem, what generative AI lacks in precision, it more than makes up for in speed. Litigants who simply file the material that AI tools generate, without carefully reviewing it first for accuracy, have the potential to swamp courts with what appear at first glance to be legal arguments built on law and precedent, but which are in fact nothing of the sort. And not only are these problems in their own right, but they also heighten the two concerns the Court highlighted above—that defendants will be forced to spend more time and incur more costs parsing through copious baseless filings to defend an action, and that Courts will waste precious time doing the same in ruling on motions and moving matters along."

(Plaintiff acknowledged use of ChatGPT in a subsequent filing)

Plaintiff was eventually designated as vexatious litigant and his case dismissed with prejudice.

Source: Jesse Schaefer
Angela and Theodore Chagnon v. Holly Nelson Chancery Court of Wyoming (USA) 2 July 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Legal Norm (1)
Order to show cause issued; potential striking of motion

Defendant Holly Nelson, appearing pro se, filed a motion to dismiss that included a fabricated case citation, Finch v. Smith, which does not exist. The court inferred that Nelson used AI to draft the motion without verifying the accuracy of the citations. The court issued an order to show cause, requiring Nelson to justify why her filing does not violate Rule 11, or alternatively, to withdraw her motion. If she fails to do so, the court intends to strike her motion entirely.

Source: Robert Freund
Doe v. Noem D.C. DC (USA) 1 July 2025 Lawyer ChatGPT One fabricated authority Order to Show Cause

The fake citation in this brief was to Moms Against Poverty v. Dep’t of State, 2022 WL 17951329, at *3. The case docket can be found here.

Counsel later confirmed having used ChatGPT and apologized.