This database tracks legal decisions1 in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

1 I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While seeking to be exhaustive (673 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.2
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "He generates AI pleadings, and has catalogued 160 that have 'hallucinated' since 2023" ["Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.3 (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date ▼ | Party Using AI | AI Tool ⓘ | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Matter of Weber | NY County Court (USA) | 10 October 2024 | Expert | MS Copilot | Unverifiable AI Calculation Process | AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established. | — | — | |
| **AI Use:** In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report. **Hallucination Details:** The issue wasn't fabricated citations, but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed using AI tools was generally accepted in his field but offered no proof. **Ruling/Sanction:** The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (the standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible. **Key Judicial Reasoning:** The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable. |
|||||||||
| Iovino v. Michael Stapleton Associates, Ltd. | W.D. Virginia (USA) | 10 October 2024 | Lawyer | Claude, Westlaw, LexisNexis | Fabricated Case Law (2); False Quotes Case Law (2); Misrepresented Case Law (1) | No sanction, but hearing transcript sent to bar authorities | — | — | |
|
Show cause order is here. Counsel responded to Show Cause order in this document. Show cause hearing transcript is here. |
|||||||||
| Jones v. Simploy | Missouri CA (USA) | 24 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
|
The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court. We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54. We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity." |
|||||||||
| Martin v. Hawaii | D. Hawaii (USA) | 20 September 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (2); False Quotes Case Law (2); Misrepresented Legal Norm (2) | Warning, and Order to file further submissions with Declaration | — | — | |
| Transamerica Life v. Williams | D. Arizona (USA) | 6 September 2024 | Pro Se Litigant | Implied | Fabricated Case Law (4); Misrepresented Legal Norm (1) | Warning | — | — | |
| Rule v. Braiman | N.D. New York (USA) | 4 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
| USA v. Michel | D.D.C. (USA) | 30 August 2024 | Lawyer | EyeLevel | False Quotes Exhibits or Submissions (1) | Misattribution was irrelevant | — | — | |
|
As acknowledged by Counsel, he also used AI to generate parts of his pleadings. |
|||||||||
| Rasmussen v. Rasmussen | California (USA) | 23 August 2024 | Lawyer | Implied | Fabricated Case Law (4); Misrepresented Case Law (4) | Lawyer ordered to show cause why she should not be referred to the bar | — | — | |
|
While the Court initially organised show cause proceedings leading to potential sanctions, the case was eventually settled. Nevertheless, the Court stated that it "intends to report Ms. Rasmussen’s use of mis-cited and nonexistent cases in the demurrer to the State Bar", unless she objected to "this tentative ruling". |
|||||||||
| N.E.W. Credit Union v. Mehlhorn | Wisconsin C.A. (USA) | 13 August 2024 | Pro Se Litigant | Implied | At least four fictitious cases | Warning | — | — | |
|
The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided. We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing" |
|||||||||
| Dukuray v. Experian Information Solutions | S.D.N.Y. (USA) | 26 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (3), Legal Norm (2) | No sanction; Formal Warning Issued | — | — | |
| **AI Use:** Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation. **Hallucination Details:** Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues. **Ruling/Sanction:** The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction was imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks. **Key Judicial Reasoning:** Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties' and courts' resources. Plaintiff was formally warned, not excused. |
|||||||||
| Joe W. Byrd v. Woodland Springs HA | Texas CA (USA) | 25 July 2024 | Pro Se Litigant | Unidentified | Several garbled or misattributed case citations and vague legal references | No formal sanction | — | — | |
| **AI Use:** The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: "We cannot tell from Byrd's brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations." **Ruling/Sanction:** The court affirmed the trial court's judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies. |
|||||||||
| Anonymous v. NYC Department of Education | S.D.N.Y. (USA) | 18 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | No sanction; Formal Warning Issued | — | — | |
| **AI Use:** The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI. **Hallucination Details:** Several fake citations were identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious. **Ruling/Sanction:** No sanctions were imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency. **Key Judicial Reasoning:** The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. It cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct. |
|||||||||
| Zeng v. Chell | S.D. New York (USA) | 9 July 2024 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Dowlah v. Professional Staff Congress | NY SC (USA) | 30 May 2024 | Pro Se Litigant | Unidentified | Several non-existent cases | Caution to plaintiff | — | — | |
| Robert Lafayette v. Blueprint Basketball et al. | Vermont SC (USA) | 26 April 2024 | Pro Se Litigant | Implied | Fabricated Case Law (2) | Order to Show Cause | — | — | |
| Plumbers & Gasfitters Union v. Morris Plumbing | E.D. Wisconsin (USA) | 18 April 2024 | Lawyer | Implied | 1 fake citation | Warning | — | — | |
| Grant v. City of Long Beach | 9th Cir. CA (USA) | 22 March 2024 | Lawyer | Unidentified | Fabricated Case Law (2); Misrepresented Case Law (13) | Striking of Brief + Dismissal of Appeal | — | — | |
| **AI Use:** The appellants’ lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin but cited the systematic errors as consistent with known AI-generated hallucination patterns. **Hallucination Details:** Two cited cases were completely nonexistent. Additionally, a dozen cited decisions were badly misrepresented, e.g., Hydrick v. Hunter and Wall v. County of Orange were cited for parent–child removal claims when they had nothing to do with such issues. **Ruling/Sanction:** The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28–1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Rule 28(a)(8)(A) requirements for arguments with coherent citation support. |
|||||||||
| Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Pro Se Litigant | Google Bard | 3 fake cases | No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied | — | — | |
| **AI Use:** Michael Cohen, former lawyer to Donald Trump but by then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases. **Hallucination Details:** Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility. **Ruling/Sanction:** Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits. **Key Judicial Reasoning:** The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification. |
|||||||||
| Martin v. Taylor County | N.D. Texas (USA) | 6 March 2024 | Pro Se Litigant | Implied | False Quotes Case Law (1); Misrepresented Legal Norm (9) | Warning | — | — | |
|
In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time." |
|||||||||
| Kruse v. Karlen | Missouri CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements | Dismissal of Appeal + Damages Awarded for Frivolous Appeal | 10000 USD | — | |
| **AI Use:** Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission. **Hallucination Details:** Out of twenty-four total case citations in Karlen's appellate brief, at least twenty-two were fabricated. **Ruling/Sanction:** The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status. **Key Judicial Reasoning:** The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers. |
|||||||||
| Smith v. Farwell | Massachusetts (USA) | 12 February 2024 | Lawyer | Unidentified | 3 fake cases | Monetary Fine (Supervising Lawyer) | 2000 USD | — | |
| **AI Use:** In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used. **Hallucination Details:** Judge Brian A. Davis noticed citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations. **Ruling/Sanction:** After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court. **Key Judicial Reasoning:** The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys. |
|||||||||
| Park v. Kim | 2nd Cir. CA (USA) | 30 January 2024 | Lawyer | ChatGPT | Fabricated Case Law (1) | Referral to Grievance Panel + Order to Disclose Misconduct to Client | — | — | |
| **AI Use:** Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case in the reply brief, never verifying its existence. **Hallucination Details:** Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014). When asked to produce the case, Counsel admitted it did not exist, blaming reliance on ChatGPT. **Ruling/Sanction:** The Court referred Counsel to the Second Circuit’s Grievance Panel for further investigation and possible discipline. Lee was ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance. **Key Judicial Reasoning:** The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system. |
|||||||||
| Matter of Samuel | NY County Court (USA) | 11 January 2024 | Lawyer | Unidentified | Fabricated Case Law (1); Misrepresented Case Law (1), Legal Norm (7) | Striking of Filing + Sanctions Hearing Scheduled | — | — | |
| **AI Use:** Osborne’s attorney, under time pressure, submitted reply papers heavily relying on a website or tool that used generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error. **Hallucination Details:** Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco. **Ruling/Sanction:** The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York’s Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures. **Key Judicial Reasoning:** The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession’s reputation. |
|||||||||