This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

More precisely, the database covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (831 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Kaur v RMIT | SC Victoria (CA) (Australia) | 11 November 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1) | — | — | | |
| Vargas v. Salazar | S.D. Texas (USA) | 1 November 2024 | Pro Se Litigant | Implied | Fake citations | Plaintiff ordered to refile submissions without fake citations | — | — | |
| Rajabi c. Lassalle | TAL Montréal (Canada) | 1 November 2024 | Pro Se Litigant | Implied | Fabricated Case Law (2) | — | — | | |
| Source: Courtready |
|||||||||
| Jones v. Simploy | Missouri CA (USA) | 24 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
| The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court. We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54. We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity." |
|||||||||
| Martin v. Hawaii | D. Hawaii (USA) | 20 September 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (2), False Quotes Case Law (2), Misrepresented Legal Norm (2) | Warning, and Order to file further submissions with Declaration | — | — | |
| Transamerica Life v. Williams | D. Arizona (USA) | 6 September 2024 | Pro Se Litigant | Implied | Fabricated Case Law (4), Misrepresented Legal Norm (1) | Warning | — | — | |
| Rule v. Braiman | N.D. New York (USA) | 4 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
| N.E.W. Credit Union v. Mehlhorn | Wisconsin C.A. (USA) | 13 August 2024 | Pro Se Litigant | Implied | At least four fictitious cases | Warning | — | — | |
| The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided. We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing" |
|||||||||
| Mr D Rollo v. Marstons Trading Ltd | Employment Tribunal (UK) | 1 August 2024 | Pro Se Litigant | ChatGPT | Misrepresented Legal Norm (1) | Claim dismissed; AI material excluded from evidence under prior judicial order; no sanction but explicit judicial criticism | — | — | |
| **AI Use:** The claimant sought to rely on a conversation with ChatGPT to show that the respondent’s claims about the difficulty of retrieving archived data were false. **Ruling/Sanction:** No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence. **Key Judicial Reasoning:** The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable". |
|||||||||
| Dukuray v. Experian Information Solutions | S.D.N.Y. (USA) | 26 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (3), Fabricated Legal Norm (2) | No sanction; Formal Warning Issued | — | — | |
| **AI Use:** Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation. **Hallucination Details:** Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues. **Ruling/Sanction:** The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction was imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks. **Key Judicial Reasoning:** Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties’ and courts' resources. Plaintiff was formally warned, not excused. |
|||||||||
| Joe W. Byrd v. Woodland Springs HA | Texas CA (USA) | 25 July 2024 | Pro Se Litigant | Unidentified | Several garbled or misattributed case citations and vague legal references | No formal sanction | — | — | |
| **AI Use:** The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: “We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.” **Ruling/Sanction:** The court affirmed the trial court’s judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies. |
|||||||||
| Munchang Choi v. Lloyd’s Register Canada Limited | IRB (Canada) | 23 July 2024 | Pro Se Litigant | Implied | Fabricated Case Law (4) | Admonishment | — | — | |
| Anonymous v. NYC Department of Education | S.D.N.Y. (USA) | 18 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | No sanction; Formal Warning Issued | — | — | |
| **AI Use:** The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI. **Hallucination Details:** Several fake citations were identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious. **Ruling/Sanction:** No sanctions were imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency. **Key Judicial Reasoning:** The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. It cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct. |
|||||||||
| Lakaev v McConkey | Supreme Court of Tasmania (Australia) | 12 July 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1), Misrepresented Case Law (1) | Appeal dismissed for want of prosecution | — | — | |
| The appellant's submissions included a misleading reference to a High Court case, De L v Director-General, NSW Department of Community Services, misrepresenting its relevance to false testimony, which was not the case's subject matter, and a fabricated reference to Hewitt v Omari [2015] NSWCA 175, which does not exist. The appeal was dismissed, considering the lack of progress and potential prejudice to the respondent. |
|||||||||
| Zeng v. Chell | S.D. New York (USA) | 9 July 2024 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Francesco Santora v. Copyright Claims Board | Copyright Claims Board (USA) | 12 June 2024 | Pro Se Litigant | Implied | Fabricated Legal Norm (1), Misrepresented Case Law (1) | Barred from pursuing claims before the Board | — | — | |
| Dowlah v. Professional Staff Congress | NY SC (USA) | 30 May 2024 | Pro Se Litigant | Unidentified | Several non-existent cases | Caution to plaintiff | — | — | |
| Robert Lafayette v. Blueprint Basketball et al | Vermont SC (USA) | 26 April 2024 | Pro Se Litigant | Implied | Fabricated Case Law (2) | Order to Show Cause | — | — | |
| Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Pro Se Litigant | Google Bard | 3 fake cases | No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied | — | — | |
| **AI Use:** Michael Cohen, former lawyer to Donald Trump but by then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases. **Hallucination Details:** Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility. **Ruling/Sanction:** Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits. **Key Judicial Reasoning:** The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification. |
|||||||||
| Martin v. Taylor County | N.D. Texas (USA) | 6 March 2024 | Pro Se Litigant | Implied | False Quotes Case Law (1), Misrepresented Legal Norm (9) | Warning | — | — | |
| In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time." |
|||||||||
| Finch v The Heat Group | Family Court (Australia) | 27 February 2024 | Pro Se Litigant | Implied | Fabricated Case Law (2), Misrepresented Case Law (1) | — | — | | |
| The applicant (unrepresented) provided a list of 24 authorities claimed to show instances where MinterEllison had been restrained. The court's associate and the judge found the list contained fabricated or misdescribed citations; the judge characterised the provision of those authorities as an egregious instance of misleading the court but did not impose professional sanctions. The restraint application was dismissed on the merits. |
|||||||||
| Kruse v. Karlen | Missouri CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements. | Dismissal of Appeal + Damages Awarded for Frivolous Appeal. | 10,000 USD | — | |
| **AI Use:** Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission. **Hallucination Details:** Out of twenty-four total case citations in Karlen’s appellate brief, at least twenty-two were fabricated. **Ruling/Sanction:** The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status. **Key Judicial Reasoning:** The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers. |
|||||||||
| Harber v. HMRC | FTT (UK) | 4 December 2023 | Pro Se Litigant | Unidentified | 9 Fake Tribunal Decisions | No Sanction on Litigant; Warning implied for lawyers. | — | — | |
| **AI Use:** Catherine Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-Tier Tribunal decisions supporting her position regarding "reasonable excuse". She stated the cases were provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI. ChatGPT was mentioned as a likely source. **Hallucination Details:** The nine cited FTT decisions (names, dates, summaries provided) were found to be non-existent after checks by the Tribunal and HMRC. While plausible, the fake summaries contained anomalies like American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant. **Ruling/Sanction:** The Tribunal factually determined the cited cases were AI-generated hallucinations. It accepted Mrs. Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue. No sanctions were imposed on the litigant. **Key Judicial Reasoning:** The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision finding AI-generated fake cases cited by a litigant. |
|||||||||
| Whaley v. Experian Information Solutions | S.D. Ohio (USA) | 16 November 2023 | Pro Se Litigant | Unidentified | Pleadings full of irrelevant info | Warning | — | — | |
| Mescall v. Renaissance at Antiquity | W.D.N.C. (USA) | 13 November 2023 | Pro Se Litigant | Unidentified | Unspecified concerns about AI-generated inaccuracies | No sanction; Warning and Leave to Amend Granted | — | — | |
| **AI Use:** Defendants alleged that portions of Plaintiff’s response to a motion to dismiss were AI-generated. **Hallucination Details:** No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags. **Ruling/Sanction:** Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal. **Key Judicial Reasoning:** The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings. |
|||||||||
| Morgan v. Community Against Violence | D.N.M. (USA) | 23 October 2023 | Pro Se Litigant | Unidentified | Fake Case Citations | Partial Dismissal + Judicial Warning | — | — | |
| **AI Use:** Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns. **Hallucination Details:** Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume/reporting details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review. **Ruling/Sanction:** The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and held this instance to be only the second federal case addressing AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice. **Key Judicial Reasoning:** The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance. |
|||||||||
| Source: Volokh |
|||||||||
| Thomas v. Pangburn | S.D. Ga. (USA) | 6 October 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (1), False Quotes Case Law (1), Misrepresented Legal Norm (1) | Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke | — | — | |
| **AI Use:** Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability," without clarifying the sources, suggesting reliance on AI-generated content. **Hallucination Details:** Ten fake case citations were systematically inserted across filings; the fabricated authorities mimicked proper citation format but were unverifiable in any recognized database; the pattern mirrored known AI hallucination behaviors, with fabricated authorities presented with apparent legitimacy. **Ruling/Sanction:** The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications. **Key Judicial Reasoning:** Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances. |
|||||||||
| Ruggierlo et al. v. Lancaster | E.D. Mich. (USA) | 11 September 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (3) | No sanction; Formal Judicial Warning | — | — | |
| **AI Use:** Lancaster, filing objections to a magistrate judge’s Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct. **Hallucination Details:** Fabricated or mutant citations were identified; the Court highlighted that the majority of the cited cases in Lancaster’s objections were fake. **Ruling/Sanction:** No immediate sanction was imposed due to pro se status and lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or any other court to which the case might be remanded. **Key Judicial Reasoning:** The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants. |
|||||||||
| Scott v. Federal National Mortgage Association | Maine County (USA) | 14 June 2023 | Pro Se Litigant | Unidentified | Fabricated Case Law (2), Misrepresented Exhibits or Submissions (1) | Dismissal of Complaint + Sanctions (Attorney's Fees and Costs) | — | — | |
| Unknown case | Manchester (UK) | 29 May 2023 | Pro Se Litigant | Implied | Fabricated citations, misrepresented precedents | — | — | ||
| Unclear if any formal decision was issued on the matter; the incident was reported in the Law Society Gazette in May 2023 (source). |
|||||||||
| Nash v. Director of Public Prosecutions | Supreme Court of Western Australia - Court of Appeal (Australia) | 8 May 2023 | Pro Se Litigant | Implied | Fictitious authorities | Appeal dismissed | — | — | |
| "Mr Nash is unrepresented. He prepared the appellant's case himself, although it appears that he may have had some assistance with later submissions (including, perhaps, from an artificial intelligence program such as Chat GPT). Neither form of submission made coherent submissions as to why the trial judge's decision was affected by material error or otherwise gave rise to a miscarriage of justice. Nor did the material sought to be adduced by Mr Nash as additional evidence on the appeal disclose any miscarriage of justice. [...]. There is otherwise no jurisdictional basis to transfer criminal proceedings under State law in this Court to the court of another State. The authorities cited by Mr Nash in support of such jurisdiction do not exist; they are fictitious." The court dismissed the appeal, finding no merit in the grounds presented, and refused to admit additional evidence. No professional sanctions or monetary penalties were imposed as Nash was a pro se litigant. |
|||||||||
| Source: Jay Iyer |
|||||||||