This database tracks legal decisions¹ in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

¹ I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While it seeks to be exhaustive (594 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.²
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.³ (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Bauche v. Commissioner of Internal Revenue | US Tax Court (USA) | 20 May 2025 | Pro Se Litigant | Implied | Nonexistent cases | Warning | — | "While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future." | |
| Gjovik v. Apple Inc. | N.D. California (USA) | 19 May 2025 | Pro Se Litigant | Unidentified | Fabricated citation(s) | No sanctions imposed, but warning issued | — | Source: Jesse Schaefer | |
| Ehrlich v. Israel National Academy of Sciences et al. | Israel (Israel) | 18 May 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | Request dismissed on the merits; monetary sanction | 500 ILS | Applicant sought an administrative court order to force the Israel National Academy of Sciences to let him speak at a conference on "Artificial Intelligence and Research: Uses, Prospects, Dangers". This was dismissed, with the court adding: "I will add this: As mentioned above, the subject of the conference where the applicant wishes to speak concerns, among other things, the dangers of artificial intelligence. Indeed, one of these dangers materialized in the applicant's request: He, who is not represented, stated clearly and fairly that he used artificial intelligence for his request. An examination of the request shows that it consequently suffered from 'AI hallucinations' – it mentioned many 'judgments' that never came into existence (Regarding this problem, see: HCJ 38379-12-24 Anonymous v. The Sharia Court of Appeals Jerusalem, paragraphs 13-12 (23.2.2025) (hereinafter: the Anonymous matter); HCJ 23602-01-25 The Association for the Advancement of Dog Rights v. The Minister of Agriculture, paragraphs 12-11 (28.2.2025) (hereinafter: the Association matter); and regarding the mentioned problem and the possibility of participating in the conference, see: Babylonian Talmud, Gittin 43a). Just recently, this Court warned, in no uncertain terms, that alongside the blessings of artificial intelligence, one must take excellent care against its pitfalls; 'Eat its inside, throw away its peel' (Anonymous matter, paragraph 26; Association matter, paragraph 19). The applicant did state, clearly, that he used artificial intelligence, and in light of this, he further requested that if a 'technical' error occurred under his hand – it should be seen as a good-faith mistake, not to be held against him. I cannot accept such a request. It does not cure the problems of hallucinating artificial intelligence. Those addressing this Court, whether represented or unrepresented alike, bear the burden of examining whether the precedents they refer to – which are not a 'technical' matter, but rather the beating heart of the pleadings – indeed exist, and substantiate their claims. For this reason too – the request must be dismissed" (Translation by Gemini 2.5). | |
| Beenshoof v. Chin | W.D. Washington (USA) | 15 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | No sanction imposed; court reminded Plaintiff of Rule 11 obligations | — | **AI Use:** The plaintiff, proceeding pro se, cited "Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)" in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here. | |
| Bandla v. Solicitors Regulation Authority | UK (UK) | 13 May 2025 | Pro Se Litigant | Google Search (Allegedly) | Fabricated Case Law (2); Misrepresented Case Law (1), Legal Norm (2) | Application for extension of time refused; appeal struck out as abuse of process; indemnity costs of £24,727.20 ordered; permission to appeal denied | 24,727.20 GBP | **AI Use:** Bandla denied using AI, claiming instead to have relied on Google searches to locate "supportive" case law. He admitted that he did not verify any of the citations and never checked them against official sources. The court found this unacceptable, particularly from someone formerly admitted as a solicitor. **Hallucination Details:** Bandla's submissions cited at least 27 cases which the Solicitors Regulation Authority (SRA) could not locate. Bandla maintained summaries and quotations from these cases in formal submissions. When pressed in court, he admitted having never read the judgments, let alone verified their existence. **Ruling/Sanction:** The High Court refused the application for an extension of time, finding Bandla's explanations inconsistent and unreliable. The court independently struck out the appeal on grounds of abuse of process due to the submission of fake authority. It imposed indemnity costs of £24,727.20. The judge emphasized that even after being alerted to the fictitious nature of the cases, Bandla neither withdrew nor corrected them. **Key Judicial Reasoning:** The court found Bandla's conduct deeply troubling, noting his previous experience as a solicitor and his professed commitment to legal standards. It held that the deliberate or grossly negligent inclusion of fake case law, especially in an attempt to challenge a disciplinary disbarment, was an abuse requiring a strong institutional response. | |
| Department of Justice v Wise | Queensland Civil and Administrative Tribunal (Australia) | 13 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (4); Misrepresented Case Law (1), Exhibits or Submissions (3), Legal Norm (2) | Warning | — | The second respondent, Carly Dakota Wise, a self-represented litigant, filed an application for the recusal of a tribunal member, alleging bias and procedural unfairness. The application was based on several grounds, including fabricated legal citations, despite Ms. Wise having been warned in interlocutory proceedings to check the authorities she relied on. The tribunal cited the local guidelines "The Use of Generative Artificial Intelligence (AI): Guidelines for Responsible Use by Non-Lawyers" to stress that self-represented litigants need to check the accuracy of their pleadings. | |
| Newbern v. Desoto County School District et al. | N.D. Mississippi (USA) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Exhibits or Submissions (1), Other (1) | Case dismissed, in part as a sanction for fabrication of legal authorities | — | **AI Use:** The court found that several of the cases cited by the plaintiff in her briefing opposing Officer Hill's qualified immunity defense did not exist. Although Newbern suggested the citations may have been innocent mistakes, she did not challenge the finding of fabrication. No AI tool was admitted or named, but the structure and specificity of the invented cases strongly suggest generative AI use. **Hallucination Details:** The fabricated authorities were not background references, but "key authorities" cited to establish that Hill's alleged conduct violated clearly established law. The court observed that the fake cases initially appeared to be unusually on-point compared to the rest of plaintiff's citations, which raised suspicion. Upon scrutiny, it confirmed they did not exist. **Ruling/Sanction:** The court dismissed the federal claims against Officer Hill as a partial sanction for plaintiff's fabrication of legal authority and failure to meet the burden under qualified immunity. However, it declined to dismiss the entire case, citing the interest of the minor child involved and the relevance of potential state law claims. It permitted discovery to proceed on those claims to determine whether Officer Hill acted with malice or engaged in other conduct falling outside the scope of Mississippi Tort Claims Act immunity. **Key Judicial Reasoning:** The court found that plaintiff's citation of fictitious cases undermined her effort to meet the demanding "clearly established" standard. It rejected her claim that the fabrication was an innocent mistake and viewed it in light of her broader litigation conduct, which included excessive filings and disregard for procedural limits. | |
| Crypto Open Patent Alliance v. Wright (2) | UK (UK) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); Misrepresented Case Law (1), Exhibits or Submissions (4), Legal Norm (2), Other (2) | General Civil Restraint Order (GCRO) granted for 3 years; case referred to Attorney General; costs awarded to applicants | 100,000 GBP | **AI Use:** Dr. Wright, after beginning to represent himself, repeatedly used AI engines (such as ChatGPT or similar) to generate legal documents. These documents were characterized by the court as "highly verbose and repetitious" and full of "legal nonsense". This use of AI contributed to filings containing numerous false references to authority and misrepresentations of existing law. **Hallucination Details:** While the core issue in Dr. Wright's litigation was his fundamental dishonesty (claiming to be Satoshi Nakamoto based on "lies and ... elaborately forged documents"), the use of AI introduced specific problems. His appeal documents, bearing signs of AI creation, contained "numerous false references to authority". His later submissions also involved "citation of non-existent authorities". This AI-driven production of flawed legal arguments formed part of his broader pattern of disrespect for court rules and process. **Ruling/Sanction:** Mr Justice Mellor granted a General Civil Restraint Order (GCRO) against Dr. Wright for a three-year period. He found that an Extended CRO (ECRO) would be insufficient given the scope and persistence of Dr. Wright's abusive litigation. The court also referred Dr. Wright's conduct to the Attorney General for consideration of a civil proceedings order under s.42 of the Senior Courts Act 1981. Dr. Wright was ordered to pay the applicants' costs for the CRO application, summarily assessed at £100,000. **Key Judicial Reasoning:** The court found "overwhelming" evidence that Dr. Wright had persistently brought claims that were Totally Without Merit (TWM), numbering far more than the required threshold. This conduct involved extensive lies and forgeries across multiple jurisdictions and targeted individuals who often lacked the resources to defend themselves. The judge concluded there was a "very significant risk" that Dr. Wright would continue this abusive conduct unless restrained. The court noted his consistent contempt for court rules and processes, including his perjury, forgery, breach of orders, and flawed submissions (including those using AI). A GCRO was deemed just and proportionate to protect both potential future litigants and the finite resources of the court system. | |
| Matter of Raven Investigations & Security Consulting, B-423447 | GAO (USA) | 7 May 2025 | Pro Se Litigant | Unidentified | Multiple fabricated citations to prior GAO decisions | Warning | — | **AI Use:** GAO requested clarification after identifying case citation irregularities. The protester confirmed that their representative was not a licensed attorney and had relied on a combination of public tools, AI-based platforms, and secondary summaries, which produced fabricated or misattributed citations. **Hallucination Details:** The fabrications mirrored patterns typical of AI hallucinations. **Ruling/Sanction:** Although the protest was dismissed as academic, GAO addressed the citation misconduct. It did not impose sanctions in this case but warned that future submission of non-existent authority could lead to formal disciplinary action, including dismissal, cost orders, and bar referrals (in the case of attorneys). | |
| Rotonde v. Stewart Title Insurance Co | NY SC (USA) | 6 May 2025 | Pro Se Litigant | Implied | Several non-existent legal citations | Motion to dismiss granted in full; no sanction imposed, but court formally warned plaintiff | — | **AI Use:** The court observed that "some of the cases that plaintiff cites… do not exist," and noted it had "tried, in vain," to find them. While the plaintiff admitted no explicit AI use, the pattern and specificity of the fabricated citations are characteristic of LLM-generated hallucinations. **Ruling/Sanction:** The court dismissed all five causes of action (negligence, tortious interference, aiding and abetting fraud, declaratory judgment, and breach of the implied covenant of good faith and fair dealing) as either untimely or duplicative/deficient on the merits. It declined to impose sanctions but explicitly invoked Dowlah v. Professional Staff Congress, 227 AD3d 609 (1st Dept. 2024), and Will of Samuel, 82 Misc 3d 616 (Sur. Ct. 2024), to warn plaintiff that any future citation of fictitious cases would result in sanctions. **Key Judicial Reasoning:** Justice Jamieson noted that while the court is "sensitive to plaintiff's pro se status," that does not excuse disregard of procedural rules or the submission of fictitious citations. The court emphasized that its prior decision in related litigation in 2022 undermined plaintiff's tolling claims, and that Executive Order extensions during the COVID-19 pandemic did not rescue otherwise-expired claims. The hallucinated citations failed to salvage plaintiff's fraud and tolling theories, and their use was treated as an aggravating, though not yet sanctionable, factor. | |
| X v. Board of Trustees of Governors State University | N.D. Illinois (USA) | 6 May 2025 | Pro Se Litigant | Implied | One fabricated citation | Warning | — | "For that principal [sic] [X] cites a case, Gunn v. McKinney, 259 F.3d 824, 829 (7th Cir. 2001), which neither defense counsel nor the Court has been able to locate. The Court reminds [X] that Federal Rule of Civil Procedure 11 applies to pro se litigants, and sanctions may result from such conduct, especially if the citation to Gunn was not merely a typographical or citation error but instead referred to a non-existent case. By presenting a pleading, written motion, or other paper to the Court, an unrepresented party acknowledges they will be held responsible for its contents. See Fed. R. Civ. P. 11(b)." | |
| Harris v. Take-Two Interactive Software | D. Colorado (USA) | 6 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); False Quotes Case Law (1) | Warning | — | The court held that "[t]he use of fictitious quotes or cases in filings may subject a party, including a pro se party, to sanctions pursuant to Federal Rule of Civil Procedure 11 as 'pro se litigants are subject to Rule 11 just as attorneys are.'" | |
| Wilt v. Department of the Navy | E.D. Texas (USA) | 2 May 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (2) | Warning | — | Source: Jesse Schaefer | |
| Lozano González v. Roberge | Housing Administrative Tribunal (Canada) | 1 May 2025 | Pro Se Litigant | ChatGPT | False Quotes Legal Norm (1) | — | — | The landlord sought to repossess a rental property, claiming the lease renewal was suspended based on a misinterpretation of articles of Quebec's Civil Code. He used ChatGPT to translate these articles, which resulted in a completely different meaning. The Tribunal found the repossession request invalid as it was based on a date prior to the lease's end. The Tribunal rejected the claim of abuse, accepting the landlord's sincere belief in his misinterpretation, influenced by AI translation, and noted his language barrier and residence in Mexico. The Tribunal advised the landlord to seek reliable legal advice in the future. | |
| Gustafson v. Amazon.com | D. Arizona (USA) | 30 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Exhibits or Submissions (1) | Warning | — | — | |
| Moales v. Land Rover Cherry Hill | D. Connecticut (USA) | 30 April 2025 | Pro Se Litigant | Unidentified | Misrepresented Case Law (1), Legal Norm (4) | Plaintiff warned to ensure accuracy of future submissions | — | **AI Use:** The court stated that "Moales may have used artificial intelligence in drafting his submissions," citing widespread concerns over AI hallucination. It noted that several citations in his complaint and show-cause response were plainly incorrect or irrelevant. While Moales did not admit AI use, the court cited Strong v. Rushmore Loan Mgmt. Servs., 2025 WL 100904 (D. Neb.) and Mata v. Avianca to contextualize its concern. **Hallucination Details:** The filings cited Ernst & Ernst v. Hochfelder, 425 U.S. 185 (1976), and S.E.C. v. W.J. Howey Co., 328 U.S. 293 (1946) as supporting the existence of a federal common law fiduciary duty, an inaccurate legal proposition. The court characterized such misuses as "the norm rather than the exception" in Moales's submissions. It stopped short of identifying all misused authorities but made clear that the inaccuracies were pervasive. **Ruling/Sanction:** The complaint was dismissed for lack of subject matter jurisdiction under Rule 12(h)(3). Moales was permitted to file an amended complaint by May 28, 2025, but was warned that future filings must be factually and legally accurate. The court declined to reach the venue issue or impose immediate sanctions but warned Moales that misrepresentation of law may violate Rule 11. **Key Judicial Reasoning:** The court found no basis for federal question jurisdiction and rejected Moales's reliance on the Declaratory Judgment Act, constructive trust theories, and a nonexistent "federal common law of securities." It also held that Moales failed to plausibly allege the amount in controversy necessary for diversity jurisdiction. | |
| Willis v. U.S. Bank National Association as Trustee, Igloo Series Trust | N.D. Texas, Dallas Division (USA) | 28 April 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | Warning | — | Source: Jesse Schaefer | |
| Simpson v. Hung Long Enterprises Inc. | B.C. Civil Resolution Tribunal (Canada) | 25 April 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (4); Misrepresented Legal Norm (1) | Other side compensated for time spent through costs order | 500 CAD | "Ms. Simpson referred to a non-existent CRT case to support a patently incorrect legal position. She also referred to three Supreme Court of Canada cases that do not exist. Her submissions go on to explain in detail what legal principles those non-existent cases stand for. Despite these deficiencies, the submissions are written in a convincingly legal tone. Simply put, they read like a lawyer wrote them even though the underlying legal analysis is often wrong. These are all common features of submissions generated by artificial intelligence." [...] "25. I agree with Hung Long that there are two extraordinary circumstances here that justify compensation for its time. The first is Ms. Simpson's use of artificial intelligence. It takes little time to have a large language model create lengthy submissions with many case citations. It takes considerably more effort for the other party to wade through those submissions to determine which cases are real, and for those that are, whether they actually say what Ms. Simpson purported they did. Hung Long's owner clearly struggled to understand Ms. Simpson's submissions, and his legal research to try to understand them was an utter waste of his time. I reiterate my point above that Ms. Simpson's submissions cited a non-existent case in support of a legal position that is the precise opposite of the existing law. This underscores the impact on Hung Long. How can a self-represented party respond to a seemingly convincing legal argument that is based on a case it is impossible to find? 26. I am mindful that Ms. Simpson is not a lawyer and that legal research is challenging. That said, she is responsible for the information she provides the CRT. I find it manifestly unfair that the burden of Ms. Simpson's use of artificial intelligence should fall to Hung Long's owner, who tried his best to understand submissions that were not capable of being understood. While I accept that Ms. Simpson did not knowingly provide fake cases or misleading submissions, she was reckless about their accuracy." | |
| Nichols v. Walmart | S.D. Georgia (USA) | 23 April 2025 | Pro Se Litigant | Implied | Multiple fictitious legal citations | Case dismissed for lack of subject matter jurisdiction and as a Rule 11 sanction for bad-faith submission of fabricated legal authorities | — | **AI Use:** Plaintiff submitted a motion to disqualify opposing counsel that cited multiple non-existent cases. She offered no clarification about how the citations were obtained or whether she had attempted to verify them. The court noted this failure and declined to excuse the misconduct, though it stopped short of attributing it directly to AI tools. **Hallucination Details:** The court reviewed Plaintiff's motion and found that some of the cited cases did not exist. Despite being ordered to show cause, Plaintiff responded only with general statements about her good faith and complaints about perceived procedural unfairness, without addressing the origin or verification of the fake cases. **Ruling/Sanction:** The court dismissed the case for lack of subject matter jurisdiction and independently dismissed it as a sanction for bad-faith litigation under Rule 11. It found that Plaintiff's conduct, submitting fictitious legal authorities and refusing to take responsibility for them, warranted dismissal, even if monetary sanctions were not appropriate. The court cited Mata v. Avianca, Morgan v. Community Against Violence, and O'Brien v. Flick as relevant precedents affirming the sanctionability of hallucinated case law. **Key Judicial Reasoning:** Judge Hall held that Plaintiff's conduct went beyond excusable error. Her submission of fabricated cases, refusal to explain their origin, and attempts to shift blame to perceived procedural grievances demonstrated bad faith. The court concluded that dismissal, though duplicative of the jurisdictional ground, was warranted as a standalone sanction to deter future abuse by similarly situated litigants. | |
| Brown v. Patel et al. | S.D. Texas (USA) | 22 April 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1); Misrepresented Case Law (2) | Warning | — | Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences. | |
| Rowe v National Australia Bank Ltd | South Australia (Australia) | 17 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | — | — | — | |
| Ferris v. Amazon.com Services | N.D. Mississippi (USA) | 16 April 2025 | Pro Se Litigant | ChatGPT | 7 fictitious cases | Plaintiff ordered to pay Defendant's reasonable costs related to addressing the fabricated citations | — | **AI Use:** Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications. **Hallucination Details:** The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff's intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon's motion to dismiss. **Ruling/Sanction:** The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order. **Key Judicial Reasoning:** Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka ("The lie made into the rule of the world"), the court lamented the rise of "a post-truth world" and framed Ferris as an "avatar" of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11. | |
| Sims v. Souily-Lefave | D. Nevada (USA) | 15 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Warning | — | — | |
| Graciela Dela Torre v. Davies Life & Health, Inc., et al. | N.D. Illinois (USA) | 11 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); Misrepresented Case Law (5), Legal Norm (2) | — | — | — | |
| Bischoff v. South Carolina Department of Education | Admin. Law Court, S.C. (USA) | 10 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | The court held that "[i]t is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10 (Supp. 2024)." | |
| Daniel Jaiyong An v. Archblock, Inc. | Delaware Chancery (USA) | 3 April 2025 | Pro Se Litigant | Implied | False Quotes Case Law (2); Misrepresented Case Law (2) | Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions | — | **AI Use:** The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT's known risk of "hallucinations". Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification. **Hallucination Details:** The court verified via Westlaw that some quoted phrases returned only the petitioner's motion as a result. **Ruling/Sanction:** Motion to compel denied with prejudice. No immediate monetary sanction was imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions, including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI. **Key Judicial Reasoning:** The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity. | |
| Zzaman v. HMRC | UK (UK) | 3 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); Misrepresented Case Law (7), Legal Norm (2) | Warning | — | The appellant had disclosed the use of AI in preparing his statement of case. The court noted: "29. However, our conclusion was that Mr Zzaman's statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman's statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully 'understand' what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties." | |
| Bangholme Investments Pty Ltd v Greater Dandenong CC | Victorian CAT (Australia) | 3 April 2025 | Pro Se Litigant | Unidentified | Misrepresented Legal Norm (1) | — | — | Alan Hood relied on an AI search that inferred the Council was required to notify objectors. The Tribunal found that inference "plainly incorrect", noting Hood had received the requisite notice and should have read the documents; the Tribunal nevertheless exercised its discretion to join him. | |
| Boggess v. Chamness | E.D. Texas (USA) | 1 April 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(1)
|
Argument ignored | — | — | |
|
Source: Jesse Schaefer
|
|||||||||
| Sanders v. United States | Fed. Claims Court (USA) | 31 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(4)
Misrepresented
Case Law
(1),
Legal Norm
(1)
|
Warning | — | — | |
**AI Use:** The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred.
**Hallucination Details:** Plaintiff cited:
**Ruling/Sanction:** The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions.
**Key Judicial Reasoning:** Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law. |
|||||||||
| McKeown v. Paycom Payroll LLC | W.D. Oklahoma (USA) | 31 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(2)
|
Submission stricken out, and warning | — | — | |
**AI Use:** Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent.
**Hallucination Details:** Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction.
**Ruling/Sanction:** The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal. |
|||||||||
| SQBox Solutions Ltd. v. Oak | BC Civil Resolution Tribunal (Canada) | 31 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(1)
False Quotes
Legal Norm
(2)
Misrepresented
Case Law
(4)
|
Litigant lost on merits | — | — | |
|
"By relying on inaccurate and false AI submissions, Mr. Oak hurts his own case. I understand that Mr. Oak himself might not be aware that the submissions are misleading, but they are his submissions and he is responsible for them." |
|||||||||
|
Source: Steve Finlay
|
|||||||||
| AQ v. BT | CRT (Canada) | 28 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(2),
Legal Norm
(1)
Misrepresented
Case Law
(1),
Legal Norm
(1)
|
Arguments ignored | — | — | |
| LYJ v. Occupational Therapy Board of Australia | Queensland (Australia) | 26 March 2025 | Pro Se Litigant | ChatGPT |
Fabricated
Case Law
(1)
|
No sanction; Fabrication noted; Warning issued regarding AI use | — | — | |
**AI Use:** The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material.
**Ruling/Sanction:** The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process.
**Key Judicial Reasoning:** Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources." |
|||||||||
| Kruglyak v. Home Depot U.S.A., Inc. | W.D. Virginia (USA) | 25 March 2025 | Pro Se Litigant | ChatGPT |
Fabricated
Case Law
(1)
Misrepresented
Case Law
(1)
|
No monetary sanctions; Warning | — | — | |
**AI Use:** Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources.
**Hallucination Details:** The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones.
**Ruling/Sanction:** Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur.
**Key Judicial Reasoning:** The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action. |
|||||||||
| Buckner v. Hilton Global | W.D. Kentucky (USA) | 21 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(1)
Misrepresented
Case Law
(1),
Exhibits or Submissions
(1)
|
Warning | — | — | |
|
In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal." |
|||||||||
| Williams v. Capital One Bank | D. DC (USA) | 18 March 2025 | Pro Se Litigant | CoCounsel |
Fabricated
Case Law
(1)
Misrepresented
Case Law
(1)
|
Case dismissed with prejudice for failure to state a claim. No monetary sanction imposed, but the court issued a formal warning | — | — | |
**AI Use:** While not formally admitted, Plaintiff’s opposition brief referred to “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission.
**Hallucination Details:** At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output, but the burden rested with the party filing them.
**Ruling/Sanction:** The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had a full opportunity to amend. Though it did not impose monetary sanctions, it issued a strong warning and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations. |
|||||||||
| Stevens v. BJC Health System | Missouri CA (USA) | 18 March 2025 | Pro Se Litigant | Implied | 6 fabricated citations | Warning | — | — | |
| Alkuda v. McDonald Hopkins Co., L.P.A. | N.D. Ohio (USA) | 18 March 2025 | Pro Se Litigant | Implied | Fake Citations | Warning | — | — | |
| LMN v. STC (No. 2) | (New Zealand) | 17 March 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(1)
|
Warning | — | — | |
| Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club | (Ireland) | 13 March 2025 | Pro Se Litigant | Unidentified |
Fabricated
Exhibits or Submissions
(1),
other
(1)
Misrepresented
Legal Norm
(4),
other
(1)
|
Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI | — | — | |
**AI Use:** Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.
**Hallucination Details:** Inappropriate invocation of "subornation to perjury," a term foreign to Irish law; constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context; assertions framed in hyperbolic, sensationalist terms without factual or legal basis; and a general incoherence of pleadings, consistent with AI-generated pseudo-legal text.
**Ruling/Sanction:** The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process. |
|||||||||
| Mark Lillard v. Offit Kurman, P.A. | SC Delaware (USA) | 12 March 2025 | Pro Se Litigant | Unidentified |
False Quotes
Case Law
(2)
Misrepresented
Case Law
(2)
|
AI-use certification required for future filings | — | — | |
| Arnaoudoff v. Tivity Health Incorporated | D. Arizona (USA) | 11 March 2025 | Pro Se Litigant | ChatGPT |
Fabricated
Case Law
(3)
Misrepresented
Case Law
(1)
|
Court ignored fake citations and granted motion to correct the record | — | — | |
| Sheets v. Presseller | M.D. Florida (USA) | 11 March 2025 | Pro Se Litigant | Implied | Allegations by the other party that brief was AI-generated | Warning | — | — | |
| 210S LLC v. Di Wu | Hawaii (USA) | 11 March 2025 | Pro Se Litigant | Implied | Fictitious citation and misrepresentation | Warning | — | — | |
| Yu Hon Tong Thomas v Centaline Property Agency | High Court (Hong Kong) | 26 February 2025 | Pro Se Litigant | Unidentified |
Fabricated
Exhibits or Submissions
(1)
Misrepresented
Case Law
(1)
|
— | — | ||
| Merz v. Kalama | W.D. Washington (USA) | 25 February 2025 | Pro Se Litigant | Unidentified |
Misrepresented
Legal Norm
(2)
|
— | — | ||
| Saxena v. Martinez-Hernandez et al. | D. Nev. (USA) | 18 February 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(2)
False Quotes
Case Law
(1)
|
Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor | — | — | |
**AI Use:** The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them.
**Hallucination Details:** The court found no plausible explanation for these citations other than AI generation or outright fabrication.
**Ruling/Sanction:** The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.
**Key Judicial Reasoning:** Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief". |
|||||||||
| Geismayr v. The Owners, Strata Plan KAS 1970 | Civil Resolution Tribunal (Canada) | 14 February 2025 | Pro Se Litigant | Copilot |
Fabricated
Case Law
(9)
Misrepresented
Case Law
(1)
|
Citations ignored | — | — | |
| Goodchild v State of Queensland | Queensland IRC (Australia) | 13 February 2025 | Pro Se Litigant | "Internet searches" |
Fabricated
Case Law
(5)
|
Relevant submissions ignored | — | — | |
|
"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents." |
|||||||||