This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or all uses of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While seeking to be exhaustive (914 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates AI pleadings, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.[3] (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Zachariah Crabill Disciplinary Case | Colorado SC (USA) | 21 November 2023 | Lawyer | ChatGPT | Fake/Incorrect Cases; Lied to Court | 90-day Actual Suspension (+ stayed term, probation) | — | — | |
**AI Use:** Attorney Zachariah C. Crabill, relatively new to civil practice, used ChatGPT to research case law for a motion to set aside judgment, a task he was unfamiliar with and felt pressured to complete quickly. **Hallucination Details:** Crabill included incorrect or fictitious case citations provided by ChatGPT in the motion without reading or verifying them. He realized the errors ("garbage" cases, per his texts) before the hearing but did not alert the court or withdraw the motion. **Ruling/Sanction:** When questioned by the judge about inaccuracies at the hearing, Crabill falsely blamed a legal intern. He later filed an affidavit admitting his use of ChatGPT and his dishonesty, stating he "panicked" and sought to avoid embarrassment. He stipulated to violating professional duties of competence, diligence, and candor/truthfulness to the court. He received a 366-day suspension, with all but 90 days stayed upon successful completion of a two-year probationary period. This was noted as the first Colorado disciplinary action involving AI misuse. **Key Judicial Reasoning:** The disciplinary ruling focused on the combination of negligence (failure to verify, violating competence and diligence) and intentional misconduct (lying to the court, violating candor). While mitigating factors (personal challenges, lack of prior discipline) were noted in the stipulated agreement, the dishonesty significantly aggravated the offense.
| Whaley v. Experian Information | S.D. Ohio (USA) | 16 November 2023 | Pro Se Litigant | Unidentified | Pleadings full of irrelevant info | Warning | — | — | |
| Mescall v. Renaissance at Antiquity | W.D.N.C. (USA) | 13 November 2023 | Pro Se Litigant | Unidentified | Unspecified concerns about AI-generated inaccuracies | No sanction; Warning and Leave to Amend Granted | — | — | |
**AI Use:** Defendants alleged that portions of Plaintiff's response to a motion to dismiss were AI-generated. **Hallucination Details:** No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags. **Ruling/Sanction:** Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal. **Key Judicial Reasoning:** The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings.
| In re Celsius Network LLC | S.D.N.Y. (Bankruptcy) (USA) | 9 November 2023 | Expert | Unidentified | Misrepresented Exhibits or Submissions (1); Other (1) | Expert report excluded as unreliable | — | — | |
The court found that the 172-page valuation report had been generated by AI at the expert's instruction, contained almost no supporting citations, included factual inaccuracies (e.g., asserting governance rights CEL lacked), duplicated text and incorrect trade dates, and relied on a non-peer-reviewed 'fair value' method. The report was excluded under Rule 702.
| Morgan v. Community Against Violence | D.N.M. (USA) | 23 October 2023 | Pro Se Litigant | Unidentified | Fake Case Citations | Partial Dismissal + Judicial Warning | — | — | |
**AI Use:** Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns. **Hallucination Details:** Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume/reporting details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review. **Ruling/Sanction:** The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and held this instance to be only the second federal case addressing AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice. **Key Judicial Reasoning:** The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance.
Source: Volokh
| Thomas v. Pangburn | S.D. Ga. (USA) | 6 October 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (1); False Quotes – Case Law (1); Misrepresented Legal Norm (1) | Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke | — | — | |
**AI Use:** Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability" without clarifying his sources, suggesting reliance on AI-generated content. **Hallucination Details:** Ten fake case citations were systematically inserted across filings. The fabricated authorities mimicked proper citation format but were unverifiable in any recognized database. The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy. **Ruling/Sanction:** The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications. **Key Judicial Reasoning:** Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances.
| Ruggierlo et al. v. Lancaster | E.D. Mich. (USA) | 11 September 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (3) | No sanction; Formal Judicial Warning | — | — | |
**AI Use:** Lancaster, filing objections to a magistrate judge's Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct. **Hallucination Details:** Fabricated or "mutant" citations; the Court highlighted that the majority of the cases cited in Lancaster's objections were fake. **Ruling/Sanction:** No immediate sanction was imposed, given Lancaster's pro se status and the lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or in any other court to which the case might be remanded. **Key Judicial Reasoning:** The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants.
| Ex Parte Lee | Texas CA (USA) | 19 July 2023 | Lawyer | Unidentified | Fabricated Case Law (3) | No sanction; Judicial Warning; Affirmance of Trial Court Decision | — | — | |
**AI Use:** The Court noted that the appellant's argument section appeared to have been drafted by AI, based on telltale errors (nonexistent cases, jump-cites into wrong jurisdictions, illogical structure). The Court cited a recent Texas CLE presentation on AI usage to explain the pattern. **Hallucination Details:** Three fake cases were cited. The brief also contained no citations to the record and was devoid of clear argumentation on the presented issues. **Ruling/Sanction:** The Court declined to issue a show cause order or to refer counsel to the State Bar of Texas, despite noting similarities to Mata v. Avianca. However, it affirmed the trial court's denial of habeas relief due to inadequate briefing, and explicitly warned about the dangers of using AI-generated content in legal submissions without human verification. **Key Judicial Reasoning:** The Court held that even if AI contributed to the preparation of filings, attorneys must ensure accuracy, logical structure, and compliance with citation rules. Failure to meet these standards precludes appellate review under Tex. R. App. P. 38.1(i). Courts are not obligated to "make an appellant's arguments for him," especially where briefing defects are this gross.
| Parker v. Forsyth NNO and Others | Magistrates' Court (South Africa) | 29 June 2023 | Lawyer | ChatGPT | Fabricated Case Law (8); Misrepresented Legal Norm (1) | Plaintiff's claim dismissed; punitive costs awarded on an attorney-and-client scale for a specific period due to AI-generated hallucinated case law | — | — | |
**AI Use:** The plaintiff's attorneys used ChatGPT to generate case law supporting the proposition that a body corporate can be sued for defamation. They forwarded eight cases, none of which exist, to opposing counsel during a post-hearing exchange and were unable to produce them later. Counsel admitted in open court that ChatGPT had been the source. **Hallucination Details:** The court verified that the citations, parties, and contents of the cited cases were entirely fictitious. **Ruling/Sanction:** The plaintiff's entire claim was dismissed on legal grounds unrelated to the hallucinations (a body corporate cannot be sued for defamation under South African law). Punitive costs were imposed on the attorney-and-client scale for the period between March 28 and May 22, 2023, during which the plaintiff's legal team insisted such authorities existed. The court awarded 60% of standard costs to the defendants for the rest of the proceedings. No personal sanction or bar referral was issued, given counsel's candor and the court's confidence that the error stemmed from "overzealous and careless" use of ChatGPT, not an intent to mislead. **Key Judicial Reasoning:** The court stressed that AI tools like ChatGPT cannot be trusted for legal citation without human verification. Submitting hallucinated cases, even indirectly, misleads opposing counsel, wastes court time, and undermines trust in the legal process. The incident was used to underscore that "good old-fashioned independent reading" remains essential in legal practice.
| Mata v. Avianca, Inc. | S.D.N.Y. (USA) | 22 June 2023 | Lawyer | ChatGPT | Fabricated Case Law (10); False Quotes – Case Law (1); Misrepresented Case Law (7) | Monetary Fine (Lawyers & Firm); Letters to Client/Judges | 5,000 USD | — | |
**AI Use:** Counsel from Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against Avianca airlines, citing difficulty accessing relevant federal precedent through their limited research subscription. **Hallucination Details:** The attorneys' submission included at least six completely non-existent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were also generated by ChatGPT and contained further bogus citations. **Ruling/Sanction:** Judge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name was falsely used on the fabricated opinions about the situation. **Key Judicial Reasoning:** Judge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correct the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance.
| Scott v. Federal National Mortgage Association | Maine County (USA) | 14 June 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (2); Misrepresented Exhibits or Submissions (1) | Dismissal of Complaint + Sanctions (Attorney's Fees and Costs) | — | — | |
| Unknown case | Manchester (UK) | 29 May 2023 | Pro Se Litigant | Implied | Fabricated citations; misrepresented precedents | — | — | — | |
It is unclear whether any formal decision addressed the matter; the incident was reported in the Law Society Gazette in May 2023 (source).
| Nash v. Director of Public Prosecutions | Supreme Court of Western Australia - Court of Appeal (Australia) | 8 May 2023 | Pro Se Litigant | Implied | Fictitious authorities | Appeal dismissed | — | — | |
"Mr Nash is unrepresented. He prepared the appellant's case himself, although it appears that he may have had some assistance with later submissions (including, perhaps, from an artificial intelligence program such as Chat GPT). Neither form of submission made coherent submissions as to why the trial judge's decision was affected by material error or otherwise gave rise to a miscarriage of justice. Nor did the material sought to be adduced by Mr Nash as additional evidence on the appeal disclose any miscarriage of justice. [...]. There is otherwise no jurisdictional basis to transfer criminal proceedings under State law in this Court to the court of another State. The authorities cited by Mr Nash in support of such jurisdiction do not exist; they are fictitious." The court dismissed the appeal, finding no merit in the grounds presented, and refused to admit additional evidence. No professional sanctions or monetary penalties were imposed as Nash was a pro se litigant.
Source: Jay Iyer
| Nº 0600814-85.2022.6.00.0000 | Tribunal Superior Eleitoral (Brazil) | 14 April 2023 | Lawyer | ChatGPT | Fabricated Exhibits or Submissions (1) | Monetary Sanction | 2,604 BRL | — | |
Petitioner (an attorney) filed an amicus curiae request accompanied by a "fable" (fábula) co-written with ChatGPT. The TSE held that amicus filings are inapplicable in electoral proceedings (Res.-TSE nº 23.478/2016), found the submission to evidence bad-faith litigation and a manifestly unfounded intervention, ordered the petition struck from the record, and imposed a monetary fine for bad-faith litigation (litigância de má-fé).