This database tracks legal decisions1 in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.

1. I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While it seeks to be exhaustive (5 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.2
Examples of media coverage include:
- M. Hiltzik, AI 'hallucinations' are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates AI pleadings, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.3 (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date ▼ | Party Using AI | AI Tool ⓘ | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| ECLI:NL:RBNNE:2025:4814 | Rechtbank Noord-Nederland (Netherlands) | 19 November 2025 | Lawyer | ChatGPT | Fabricated Case Law (1), Other (1); Misrepresented Case Law (1) | — | — | — | |
| ECLI:NL:RBGEL:2025:9423 | Gelderland (Netherlands) | 6 November 2025 | Lawyer | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | Court found several cited rulings non-existent or irrelevant, rejected reliance on that case law, and dismissed the appeal. | — | — | |
| The Boys of Rockanje v. Dwaard | Rotterdam D. (Netherlands) | 27 August 2025 | Lawyer | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | — | — | "When asked, Dwaard (his lawyer) stated that all this was caused by a problem converting a Word file to PDF. This statement raises questions. The court leaves these questions, and the question of whether this violated Article 21 of the Dutch Code of Civil Procedure, unanswered, because, on balance, Dwaard did not benefit from the incorrect representation of the facts in the statement of defense." (Google Translate) | |
| X BV in Z v. Tax Inspector | The Hague CA (Netherlands) | 26 June 2024 | Lawyer | ChatGPT | Misrepresented Case Law (1); Exhibits or Submissions (1) | Arguments rejected; no formal sanction but severe judicial criticism. | — | **AI Use:** The appellant relied on ChatGPT to generate a list of ten "economically comparable" vehicles for purposes of arguing a lower trade-in value to reduce bpm (car registration tax). The Court noted this explicitly and criticized the mechanical reliance on AI outputs without human verification or contextual adjustment. **Hallucination Details:** ChatGPT produced a list of luxury and exotic cars supposedly comparable to a Ferrari 812 Superfast. The Court found that mere AI-generated association of vehicles based on "economic context and competition position" is insufficient under EU law principles requiring real-world comparability from the perspective of an average consumer. **Ruling/Sanction:** The Court rejected the appellant's valuation arguments wholesale. It stressed that serious, human-verified reference vehicle comparisons were mandatory and that ChatGPT lists could not establish the legally required comparability standard under Dutch and EU law (Art. 110 TFEU). No monetary sanction was imposed, but the appellant's entire case collapsed on evidentiary grounds. **Key Judicial Reasoning:** The Court reasoned that a list generated by an AI program like ChatGPT, without rigorous control or verification, is inadmissible for evidentiary purposes. AI outputs lack the nuanced judgment necessary to assess "similar vehicles" under Art. 110 TFEU and Dutch bpm tax rules. It underscored that the test is based on the perceptions of a human average consumer, not algorithmic proximity. | |
| X BV in Z v. Tax Inspector | The Hague CA (Netherlands) | 5 March 2024 | Lawyer | ChatGPT | Use of ChatGPT outputs as evidence without clarity about prompts or verification; no fake cases cited, but reliance on unverifiable AI outputs for valuation arguments | Arguments discounted; no formal sanction but strong judicial criticism. | — | **AI Use:** The appellant's authorized representative submitted arguments based on ChatGPT outputs attempting to challenge the tax valuation of real property. The representative failed to specify what exact queries were made to ChatGPT, rendering the outputs unverifiable and untrustworthy. **Hallucination Details:** No explicit fabricated case law was cited. Instead, the appellant relied on generalized, unverifiable statements produced by ChatGPT to contest the capitalization factor and COVID-19 valuation discounts applied by the tax authorities. **Ruling/Sanction:** The Court refused to attribute any evidentiary value to the ChatGPT-based arguments. It found that without disclosure of the input prompts and verification of AI outputs, the content was legally inadmissible as probative material. However, no sanctions were imposed, likely due to the novelty of the misuse and the lack of bad faith. **Key Judicial Reasoning:** The Court emphasized that judicial proceedings demand verifiable, fact-based arguments. AI outputs that lack transparency (particularly about the underlying prompt and methodology) cannot serve as a substitute for evidence. The judgment explicitly notes that reliance on ChatGPT statements without verifiability "does not affect" the Court's reasoning or the tax authority's burden of proof. | |