AI Hallucination Cases

This database tracks legal decisions¹ in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

¹ I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (1,455 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.² Examples of media coverage include:
- M. Hiltzik, AI 'hallucinations' are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me.³ (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports icon in the database for examples, and reach out to me for a demo.

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Last updated: 17 May 2026

Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s)
Case: Mata v. Avianca, Inc.
Court / Jurisdiction: S.D.N.Y. (USA)
Date: 22 June 2023
Party Using AI: Lawyer
AI Tool: ChatGPT
Nature of Hallucination: Fabricated Case Law (10); False Quotes Case Law (1); Misrepresented Case Law (7)
Outcome / Sanction: Monetary Fine (Lawyers & Firm); Letters to Client/Judges
Monetary Penalty: 5,000 USD

AI Use

Counsel from Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against Avianca airlines, citing difficulty accessing relevant federal precedent through their limited research subscription.

Hallucination Details

The attorneys' submission included at least six entirely non-existent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were also generated by ChatGPT and contained further bogus citations.

Ruling/Sanction

Judge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name was falsely used on the fabricated opinions about the situation.

Key Judicial Reasoning

Judge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correcting the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance.

Case: Scott v. Federal National Mortgage Association
Court / Jurisdiction: Maine County (USA)
Date: 14 June 2023
Party Using AI: Pro Se Litigant
AI Tool: Unidentified
Nature of Hallucination: Fabricated Case Law (2); Misrepresented Exhibits or Submissions (1)
Outcome / Sanction: Dismissal of Complaint + Sanctions (Attorney's Fees and Costs)
Case: Unknown case
Court / Jurisdiction: Manchester (UK)
Date: 29 May 2023
Party Using AI: Pro Se Litigant
AI Tool: Implied
Nature of Hallucination: Fabricated citations, misrepresented precedents

It is unclear whether any formal decision was issued on the matter; the incident was reported in the Law Society Gazette in May 2023 (source).

Case: Nash v. Director of Public Prosecutions
Court / Jurisdiction: Supreme Court of Western Australia, Court of Appeal (Australia)
Date: 8 May 2023
Party Using AI: Pro Se Litigant
AI Tool: Implied
Nature of Hallucination: Fictitious authorities
Outcome / Sanction: Appeal dismissed

"Mr Nash is unrepresented. He prepared the appellant's case himself, although it appears that he may have had some assistance with later submissions (including, perhaps, from an artificial intelligence program such as Chat GPT). Neither form of submission made coherent submissions as to why the trial judge's decision was affected by material error or otherwise gave rise to a miscarriage of justice. Nor did the material sought to be adduced by Mr Nash as additional evidence on the appeal disclose any miscarriage of justice.

[...]. There is otherwise no jurisdictional basis to transfer criminal proceedings under State law in this Court to the court of another State. The authorities cited by Mr Nash in support of such jurisdiction do not exist; they are fictitious."

The court dismissed the appeal, finding no merit in the grounds presented, and refused to admit additional evidence. No professional sanctions or monetary penalties were imposed as Nash was a pro se litigant.

Source: Jay Iyer
Case: Nº 0600814-85.2022.6.00.0000
Court / Jurisdiction: Tribunal Superior Eleitoral (Brazil)
Date: 14 April 2023
Party Using AI: Lawyer
AI Tool: ChatGPT
Nature of Hallucination: Fabricated Exhibits or Submissions (1)
Outcome / Sanction: Monetary Sanction
Monetary Penalty: 2,604 BRL

The petitioner (an attorney) filed an amicus curiae request accompanied by a "fábula" (fable) co-written with ChatGPT. The TSE held that amicus filings are inapplicable in electoral proceedings (Res.-TSE nº 23.478/2016), found the submission to evidence bad-faith litigation and a manifestly unfounded intervention, ordered the petition struck from the record (desentranhamento), and imposed a monetary fine for litigância de má-fé (bad-faith litigation).