This database tracks legal decisions[^1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[^1]: I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While seeking to be exhaustive (666 cases identified so far), the database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Pro Se Litigant | Google Bard | 3 fake cases | No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied | — | — | |
**AI Use:** Michael Cohen, former lawyer to Donald Trump but since disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.

**Hallucination Details:** Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.

**Ruling/Sanction:** Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.

**Key Judicial Reasoning:** The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research when used without verification.
| Martin v. Taylor County | N.D. Texas (USA) | 6 March 2024 | Pro Se Litigant | Implied | False Quotes: Case Law (1); Misrepresented: Legal Norm (9) | Warning | — | — | |
In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time."
| Kruse v. Karlen | Mo. CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements | Dismissal of Appeal + Damages Awarded for Frivolous Appeal | 10,000 USD | — | |
**AI Use:** Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission.

**Hallucination Details:** Out of twenty-four total case citations in Karlen's appellate brief, at least twenty-two were fabricated; the brief also contained multiple statutory misstatements.

**Ruling/Sanction:** The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.

**Key Judicial Reasoning:** The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.
| Smith v. Farwell | Massachusetts (USA) | 12 February 2024 | Lawyer | Unidentified | 3 fake cases | Monetary Fine (Supervising Lawyer) | 2,000 USD | — | |
**AI Use:** In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used.

**Hallucination Details:** Judge Brian A. Davis noticed citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.

**Ruling/Sanction:** After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.

**Key Judicial Reasoning:** The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys.
| Park v. Kim | 2nd Cir. CA (USA) | 30 January 2024 | Lawyer | ChatGPT | Fabricated: Case Law (1) | Referral to Grievance Panel + Order to Disclose Misconduct to Client | — | — | |
**AI Use:** Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case (Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)) in the reply brief, never verifying its existence.

**Hallucination Details:** Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014). When asked to produce the case, Counsel admitted it did not exist, blaming reliance on ChatGPT.

**Ruling/Sanction:** The Court referred Counsel to the Second Circuit’s Grievance Panel for further investigation and possible discipline. Lee was ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.

**Key Judicial Reasoning:** The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system.
| Matter of Samuel | NY County Court (USA) | 11 January 2024 | Lawyer | Unidentified | Fabricated: Case Law (1); Misrepresented: Case Law (1), Legal Norm (7) | Striking of Filing + Sanctions Hearing Scheduled | — | — | |
**AI Use:** Osborne’s attorney, under time pressure, submitted reply papers heavily relying on a website or tool that used generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error.

**Hallucination Details:** Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco.

**Ruling/Sanction:** The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York’s Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.

**Key Judicial Reasoning:** The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession’s reputation.
| Zachariah Crabill Disciplinary Case | Colorado SC (USA) | 21 November 2023 | Lawyer | ChatGPT | Fake/Incorrect Cases; Lied to Court | 90-day Actual Suspension (+ stayed term, probation) | — | — | |
**AI Use:** Attorney Zachariah C. Crabill, relatively new to civil practice, used ChatGPT to research case law for a motion to set aside judgment, a task he was unfamiliar with and felt pressured to complete quickly.

**Hallucination Details:** Crabill included incorrect or fictitious case citations provided by ChatGPT in the motion without reading or verifying them. He realized the errors ("garbage" cases, per his texts) before the hearing but did not alert the court or withdraw the motion.

**Ruling/Sanction:** When questioned by the judge about inaccuracies at the hearing, Crabill falsely blamed a legal intern. He later filed an affidavit admitting his use of ChatGPT and his dishonesty, stating he "panicked" and sought to avoid embarrassment. He stipulated to violating professional duties of competence, diligence, and candor/truthfulness to the court. He received a 366-day suspension, with all but 90 days stayed upon successful completion of a two-year probationary period. This was noted as the first Colorado disciplinary action involving AI misuse.

**Key Judicial Reasoning:** The disciplinary ruling focused on the combination of negligence (failure to verify, violating competence and diligence) and intentional misconduct (lying to the court, violating candor). While mitigating factors (personal challenges, lack of prior discipline) were noted in the stipulated agreement, the dishonesty significantly aggravated the offense.
| Whaley v. Experian Information Solutions | S.D. Ohio (USA) | 16 November 2023 | Pro Se Litigant | Unidentified | Pleadings full of irrelevant info | Warning | — | — | |
| Mescall v. Renaissance at Antiquity | Western N.C. (USA) | 13 November 2023 | Pro Se Litigant | Unidentified | Unspecified concerns about AI-generated inaccuracies | No sanction; Warning and Leave to Amend Granted | — | — | |
**AI Use:** Defendants alleged that portions of Plaintiff’s response to a motion to dismiss were AI-generated.

**Hallucination Details:** No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags.

**Ruling/Sanction:** Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal.

**Key Judicial Reasoning:** The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings.
| In re Celsius Network LLC | S.D. New York (Bankruptcy) (USA) | 9 November 2023 | Expert | Unidentified | Misrepresented: Exhibits or Submissions (1); Other (1) | Expert report excluded as unreliable | — | — | |
The court found that the 172-page valuation report was generated by AI at the expert's instruction; contained almost no supporting citations; included factual inaccuracies (e.g., asserting governance rights CEL lacked), duplicated text, and incorrect trade dates; and relied on a non-peer-reviewed 'fair value' method. The report was excluded under Rule 702. Counsel at fault was later sanctioned both in Texas and in New York through a reciprocal sanction (see here).
| Morgan v. Community Against Violence | New Mexico (USA) | 23 October 2023 | Pro Se Litigant | Unidentified | Fake Case Citations | Partial Dismissal + Judicial Warning | — | — | |
**AI Use:** Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns.

**Hallucination Details:** Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume/reporting details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review.

**Ruling/Sanction:** The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and held this instance to be only the second federal case addressing AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice.

**Key Judicial Reasoning:** The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance.
Source: Volokh.
| Thomas v. Pangburn | S.D. Ga. (USA) | 6 October 2023 | Pro Se Litigant | Unidentified | Fabricated: Case Law (1); False Quotes: Case Law (1); Misrepresented: Legal Norm (1) | Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke | — | — | |
**AI Use:** Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability," without clarifying the sources – suggesting reliance on AI-generated content.

**Hallucination Details:**

- Ten fake case citations systematically inserted across filings
- Fabricated authorities mimicked proper citation format but were unverifiable in any recognized database
- The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy

**Ruling/Sanction:** The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications.

**Key Judicial Reasoning:** Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances.
| Ruggierlo et al. v. Lancaster | E.D. Mich. (USA) | 11 September 2023 | Pro Se Litigant | Unidentified | Fabricated: Case Law (3) | No sanction; Formal Judicial Warning | — | — | |
**AI Use:** Lancaster, filing objections to a magistrate judge’s Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct.

**Hallucination Details:** Fabricated or mutant citations; the Court highlighted that the majority of the cited cases in Lancaster’s objections were fake.

**Ruling/Sanction:** No immediate sanction was imposed due to pro se status and lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or any other court to which the case might be remanded.

**Key Judicial Reasoning:** The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants.
| Ex Parte Lee | Texas CA (USA) | 19 July 2023 | Lawyer | Unidentified | Fabricated: Case Law (3) | No sanction; Judicial Warning; Affirmance of Trial Court Decision | — | — | |
**AI Use:** The Court noted that the appellant's argument section appeared to have been drafted by AI based on telltale errors (nonexistent cases, jump-cites into wrong jurisdictions, illogical structure). A recent Texas CLE on AI usage was cited by the Court to explain the pattern.

**Hallucination Details:** Three fake cases were cited. The brief also contained no citations to the record and was devoid of clear argumentation on the presented issues.

**Ruling/Sanction:** The Court declined to issue a show cause order or to refer counsel to the State Bar of Texas, despite noting similarities to Mata v. Avianca. However, it affirmed the trial court’s denial of habeas relief due to inadequate briefing, and explicitly warned about the dangers of using AI-generated content in legal submissions without human verification.

**Key Judicial Reasoning:** The Court held that even if AI contributed to the preparation of filings, attorneys must ensure accuracy, logical structure, and compliance with citation rules. Failure to meet these standards precludes appellate review under Tex. R. App. P. 38.1(i). Courts are not obligated to "make an appellant’s arguments for him," especially where briefing defects are gross.
| Mata v. Avianca, Inc. | S.D.N.Y. (USA) | 22 June 2023 | Lawyer | ChatGPT | Fabricated: Case Law (10); False Quotes: Case Law (1); Misrepresented: Case Law (7) | Monetary Fine (Lawyers & Firm); Letters to Client/Judges | 5,000 USD | — | |
**AI Use:** Counsel from Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against Avianca airlines, citing difficulty accessing relevant federal precedent through their limited research subscription.

**Hallucination Details:** The attorneys' submission included at least six completely non-existent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were also generated by ChatGPT and contained further bogus citations.

**Ruling/Sanction:** Judge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name was falsely used on the fabricated opinions about the situation.

**Key Judicial Reasoning:** Judge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correcting the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance.
| Scott v. Federal National Mortgage Association | Maine County (USA) | 14 June 2023 | Pro Se Litigant | Unidentified | Fabricated: Case Law (2); Misrepresented: Exhibits or Submissions (1) | Dismissal of Complaint + Sanctions (Attorney's Fees and Costs) | — | — | |