This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (831 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Page v. Long | Melbourne County Court (Australia) | 27 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (2) | Litigant lost on merits | — | — | |
| "Generative AI can be beguiling, particularly when the task of representing yourself seems overwhelming. However, a litigant runs the risk that their case will be damaged, rather than helped, if they choose to use AI without taking the time to understand what it produces, and to confirm that it is both legally and factually accurate." |
|||||||||
| Parra v. United States | Court of Federal Claims (USA) | 27 June 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (2) | Warning | — | — | |
| Plaintiff Ravel Ferrera Parra, proceeding pro se, filed a lawsuit against the United States alleging financial harm due to misconduct by various judicial and governmental entities. The court dismissed the case for lack of jurisdiction, as the claims were not within the court's purview. The court noted that Plaintiff's filings appeared to be assisted by AI, as evidenced by the rapid filing of responses, tell-tale language ("Would you like additional affidavits, supporting exhibits, or further refinements before submission?"), and the inclusion of fabricated case citations. "While Plaintiff’s use of AI, by itself, does not violate this Court’s Rules, Plaintiff’s citation to fake cases does." The court further pointed out that: "“It is no secret that generative AI programs are known to ‘hallucinate’ nonexistent cases.” Sanders, 176 Fed. Cl. at 169 (citation omitted). That appears to have happened here. When searching the Federal Claims Reporter for “Tucker v. United States, 71 Fed. Cl. 326 (2006),” Plaintiff’s citation brings the Court to the third page of Grapevine Imports, Ltd. v. United States, 71 Fed. Cl. 324, 326 (2006), a real tax case from this Court. Similarly, the AI used by Plaintiff in Sanders v. United States, 176 Fed. Cl. 163, 169 (2025) also made up a citation to a case called Tucker v. United States. Perhaps both AI programs hallucinated this case name based on the Tucker Act, this Court’s jurisdictional statute. Regardless, here, as in Sanders, the citation to a case called Tucker v. United States does not exist." The court warned Plaintiff about the risks of using AI-generated content without verification but did not impose sanctions. |
|||||||||
| Sister City Logistics, Inc. v. John Fitzgerald | United States District Court for the Southern District of Georgia (USA) | 27 June 2025 | Pro Se Litigant | | Fabricated Case Law (1); False Quotes Case Law (1) | Warning | — | — | |
| The court observed that the original motion to remand filed pro se by the plaintiff contained non-existent case law and falsified quotations. Although the court did not impose sanctions in this instance, it warned that future use of fake legal authority would result in a show cause order, including against the Counsel who later joined the case. |
|||||||||
| Assessment and Training Solutions Consulting B-423398 | GAO (USA) | 27 June 2025 | Pro Se Litigant | Implied | Fabricated citations, misrepresented precedents | Warning | — | — | |
| Enviro Plus Duct Cleaning v. Department of Public Works | Canadian ITT (Canada) | 26 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Legal Norm (1) | — | — | | |
| Source: Courtready |
|||||||||
| Schoene v. Oregon Department of Human Services | United States District Court for the District of Oregon (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (5) | Warning | — | — | |
| "Before addressing the merits of Schoene’s motion, the Court notes that Schoene cited several cases in her reply brief to support her motion to amend, including Butler v. Oregon, 218 Or. App. 114 (2008), Curry v. Actavis, Inc., 2017 LEXIS 139126 (D. Or. Aug. 30, 2017), Estate of Riddell v. City of Portland, 194 Or. App. 227 (2004), Hampton v. City of Oregon City, 251 Or. App. 206 (2012), and State v. Burris, 107 Or. App. 542 (1991). These cases, however, do not exist. Schoene’s false citations appear to be hallmarks of an artificial intelligence (“AI”) tool, such as ChatGPT. It is now well known that AI tools “hallucinate” fake cases. See Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. Ct. App. 2024) (noting, in February 2024, that the issue of fictitious cases being submitted to courts had gained “national attention”). In addition, the Court notes that a basic internet search seeking guidance on whether it is advisable to use AI tools to conduct legal research or draft legal briefs will explain that any legal authorities or legal analysis generated by AI needs to be verified. The Court cautions Schoene that she must verify the accuracy of any future citations she may include in briefing before this Court and other courts." |
|||||||||
| Romero v. Goldman Sachs Bank USA | S.D.N.Y. (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); False Quotes Case Law (1); Misrepresented Case Law (1) | Warning | — | — | |
| Malone & Anor v Laois County Council & Ors | High Court (Ireland) | 23 June 2025 | Pro Se Litigant | Implied | Fabricated Legal Norm (1); False Quotes Case Law (1); Misrepresented Exhibits or Submissions (1), Legal Norm (1) | Warning | — | — | |
| Referring to Ayinde, the judge held that "The principle is essentially the same - though I hasten to say that I would not push the analogy too far as to a factual comparison of the present case with that case and the error in the present case is not of the order of the misconduct in that case. However, appreciable judicial time was wasted on the issue - not least trying to find the source of the quotation. And it does illustrate: [...] 43. All that said, in a substantive sense, the issue is not vital to this case. The underlying proposition for which Mr Malone contends - that domestic courts must implement EU law - is uncontroversial. Not least for that reason, and in light also of the manner in which Mr Malone generally presented his case at the hearing, I am inclined to accept that there was no attempt or intention to mislead and accept also that Mr Malone has apologized for the error. It does not affect the outcome of the present motions." |
|||||||||
| Bottrill v Graham & Anor (No 2) | District Court of New South Wales (Australia) | 20 June 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | The second defendant's Notice of Motion for summary dismissal of the plaintiff’s claim was dismissed, with costs reserved to the trial judge. | — | — | |
| "When the parties came before the court on 22 May 2025, there had been little time for the plaintiff, the first defendant and the court to examine the second defendant’s written submissions served late on the night before. It was nevertheless immediately apparent that the second defendant sought to rely upon authority and court rules which were not merely misstated but, in some circumstances, imaginary. I am satisfied that all of the judgments and rules referred to in the submissions of 21 May 2025 were misstated, non-existent, or both, and that Gen AI had been used to prepare these submissions. An example was the citation of a decision of the Supreme Court of New South Wales described as “Wu v Wilks” (I will not provide the citation given in full, as there is a risk of it being picked up as genuine by other Gen AI: Luck v Secretary, Services Australia [2025] FCAFC 26 at [14]). There is no decision with this name, either in the Supreme Court of New South Wales or in any other jurisdictions. The caselaw citation given for “Wu v Wilks” belonged to a judgment on wholly unrelated material and the principles of law for which it was cited. All of the citations suffered similar problems. I drew these issues to the attention of the second defendant and enquired whether she had used Gen AI in the preparation of her submissions and, if so, whether she was aware of the Practice Note. She acknowledged that she had done so but said this was because she had very little time to provide submissions in reply and was deeply distressed by these proceedings." |
|||||||||
| Pro Health Solutions Ltd v ProHealth Inc | Intellectual Property Office (UK) | 20 June 2025 | Pro Se Litigant, Lawyer | ChatGPT | False Quotes Case Law (3); Misrepresented Case Law (6), Doctrinal Work (2) | Warning; no costs awarded for the appeal since both sides seemingly erred | — | — | |
| Claimant used ChatGPT to assist in drafting his grounds of appeal and skeleton argument. The documents included fabricated citations and misrepresented case summaries. Claimant admitted to using ChatGPT and apologized for the errors. Compounding matters, the court suspected that the respondent had also used AI, since the cases cited in Counsel's skeleton, though extant, did not support any of the propositions made - and Counsel was unable to explain how they got there. |
|||||||||
| Reilly v. Conn. Interlocal Risk Mgmt. Agency | D. Connecticut (USA) | 20 June 2025 | Pro Se Litigant | Implied | False Quotes Case Law (1); Misrepresented Case Law (1) | Warning | — | — | |
| "Artificial intelligence may ultimately prove a helpful tool to assist pro se litigants in bringing meritorious cases to the courts. In that way, artificial intelligence has the potential to contribute to the cause of justice. However, accessing any beneficial use of artificial intelligence requires carefully understanding its limitations. For example, if merely asked to write an opposition to an opposing party’s motion or brief, or to respond to a court order, an artificial intelligence program is likely to generate such a response, regardless of whether the response actually has an arguable basis in the law. Where the court or opposing party was correct on the law, the program will very likely generate a response or brief that includes a false statement of the law. And because artificial intelligence synthesizes many sources with varying degrees of trustworthiness, reliance on artificial intelligence without independent verification renders litigants unable to represent to the Court that the information in their filings is truthful." |
|||||||||
| J.R.V. v. N.L.V. | SC British Columbia (Canada) | 19 June 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | Costs to the claimant in the amount of $200 | 200 CAD | — | |
| In the case of J.R.V. v. N.L.V., the respondent, appearing in person, used a generative AI tool to prepare parts of her written argument. This resulted in the inclusion of citations to non-existent cases, known as 'hallucinations.' The claimant sought costs due to the need to research and respond to these false citations. The court acknowledged the issue but noted that the respondent was not represented by counsel and was unaware of the AI's capability to generate false citations. Moreover, the claimant was wrong as to the alleged non-existence of some citations. The court ordered the respondent to pay $200 in costs to the claimant. |
|||||||||
| In re Marriage of Isom and Kareem | Illinois CA (USA) | 16 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | The appeal was denied, and the trial court's decision was affirmed. | — | — | |
| Attorney General v. $32,000 in Canadian Currency | Ontario SCJ (Canada) | 16 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2) | Warning | — | — | |
| "[49] Mr. Ohenhen submitted a statement of legal argument to the court in support of his arguments. In those documents, he referred to at least two non-existent or fake precedent court cases, one ostensibly from the Court of Appeal for Ontario and another ostensibly from the British Columbia Court of Appeal. In reviewing his materials after argument, I tried to access these cases and was unable to find them. I asked the parties to provide them to me. [50] Mr. Ohenhen responded with a “clarification”, providing different citations to different cases. I asked for an explanation as to where the original citations came from, and specifically, whether they were generated by artificial intelligence. I have received no response to that query. [51] While Mr. Ohenhen is not a lawyer with articulated professional responsibilities to the court, every person who submits authorities to the court has an obligation to ensure that those authorities exist. Simple CanLII searches would have revealed to Mr. Ohenhen that these were fictitious citations. Putting fictitious citations before the court misleads the court. It is unacceptable. Whether the cases are put forward by a lawyer or self-represented party, the adverse effect on the administration of justice is the same. [52] Mr. Ohenhen’s failure to provide a direct and forthright answer to the court’s questions is equally concerning. [53] Court processes are not voluntary suggestions, to be complied with if convenient or helpful to one’s case. The proper administration of justice requires parties to respect the rules and proceed in a forthright manner. That has not happened here. [54] I have not attached any consequences to this conduct in this case. However, should such conduct be repeated in any court proceedings, Mr. Ohenhen should expect consequences. Other self-represented litigants should be aware that serious consequences from such conduct may well flow." |
|||||||||
| Reed v. Community Health Care | W.D. Washington (USA) | 10 June 2025 | Pro Se Litigant | Implied | Fabricated citations, false quotes | Warning | — | — | |
| "Plaintiffs identify fictitious quotes and citations in their briefing to support their arguments. For example, Plaintiffs purport to quote language from S.H. Hold v. United States, 853 F.3d 1056, 1064 (9th Cir. 2017) and Green v. United States, 630 F.3d 1245, 1249 (9th Cir. 2011). (Reed 2, Dkt. No. 23 at 6.) However, the quoted language is nowhere found in those cases. Plaintiffs also cite to a case identified as “Horne v. Potter, 557 F.3d 953, 957 (9th Cir. 2009).” (Id. at 7.) That citation, however, is for a case titled Bell v. The Hershey Company. Plaintiffs appear to acknowledge they offered fictitious citations. (See Reed 2, Dkt. No. 27.) Plaintiffs are cautioned that providing fictitious cases and quotes will lead to sanctions." |
|||||||||
| Zahariev v. Zaharieva | Supreme Court of British Columbia (Canada) | 9 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (3); Misrepresented Case Law (1) | — | — | | |
| Noura Ahmed v Troy Powell, Peggy Pulliam, Jese Stovka | Ontario LRB (Canada) | 9 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | — | — | | |
| Source: Courtready |
|||||||||
| Goins v. Father Flanagan's Boys Home | D. Nebraska (USA) | 5 June 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresented authorities | Warning | — | — | |
| "This Court's local rules permit the use of generative artificial intelligence programs, but all parties, including pro se parties, must certify “that to the extent such a program was used, a human signatory of the document verified the accuracy of all generated text, including all citations and legal authority,” NECivR 7.1(d)(4)(B). The plaintiff's brief contains no such certification, nor does the plaintiff deny using artificial intelligence. See filing 27 at 9." |
|||||||||
| Lakhanpal v. Avis Budget Group Inc. | HRT Ontario (Canada) | 4 June 2025 | Pro Se Litigant | Implied | Fabricated Case Law (3) | — | — | | |
| Powhatan County School Board v. Skinger et al | E.D. Virginia (USA) | 2 June 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (37); Misrepresented Case Law (6) | Relevant motions stricken | — | — | |
| "The pervasive misrepresentations of the law in Lucas' filings cannot be tolerated. It serves to make a mockery of the judicial process. It causes an enormous waste of judicial resources to try to find cited cases that do not exist and to determine whether a cited authority is relevant or binding, only to determine that most are neither. In like fashion, Lucas' adversaries also must run to ground the nonexistent cases or address patently irrelevant ones. The adversaries must thus incur needless legal fees and expenses caused by Lucas' pervasive citations to nonexistent or irrelevant cases. [...] However, as previously noted Lucas appears to be judgment proof so monetary sanctions likely will not deter her from the abusive practices reflected in her filings and in her previously announced, consistently followed, abuse of the litigation proceedings created by the Individuals with Disabilities Education Act, 20 U.S.C. § 1400, et seq. (“IDEA”). So, the Court must find some other way to protect the interests of justice and to deter Lucas from the abuses which have come to mark her approach to participation as a defendant in the judicial process. In this case, the most appropriate remedy is to strike Lucas' filings where they are burdensome by virtue of volume and exceed permitted page limits, where they are not cogent or understandable (when given the generous latitude afforded pro se litigants), and where they misrepresent the law by citing nonexistent or utterly irrelevant cases." In a subsequent Opinion, the court declined to reconsider or review its findings, pointing out that: "To begin, it is unclear what Lucas means by "contested" citations. The citations that the Court found to not exist are not "contested." They simply do not exist. There is no contesting that fact because the Court checked each citation that was referenced in its MEMORANDUM OPINION, exactly as Lucas cited them (and through other research means), and could not find any citation that matched what Lucas cited. That research demonstrates that the Court's findings are, in fact, supported rather than "[u]nsupported." Id. Then, in no way did the Court "wrongly assume[]" that these citations to nonexistent legal authority were "'fabricated' due to the use of generative AI." Id. The Court meticulously checked every citation that it held did not exist in those decisions. Those decisions were not based on "assumptions" but, instead, on the fact that either (1) no case existed under the reporter citation, case name, or quotation that Lucas used, or (2) a case with the reporter citation did exist but was to an entirely different case than the one cited by Lucas and had no relevancy to the issues of this case. ECF No. 170, at 520. And, there was no incorrect assumption that those nonexistent legal authorities were generated, hallucinated, or fabricated by AI because Lucas admitted, on the record, to using AI when writing her filings with the Court. The fact that her citations to nonexistent legal authority are so pervasive, in volume and in location throughout her filings, can lead to only one plausible conclusion: that an AI program hallucinated them in an effort to meet whatever Lucas' desired outcome was based on the prompt that she put into the AI program. As the Court described in its MEMORANDUM OPINION, this is becoming an alarmingly prevalent occurrence common to AI programs. Id. at 23-26. It is exceedingly clear that it occurred here. [...] The MOTION also complains that the Court did not give Lucas an "opportunity to verify or correct citations." Id. Wholly apart from the fact that it is the litigant's (pro se or represented) burden to verify citations, there is no reason to have accorded Lucas the opportunity to verify because the problem was extensive and pervasive across at least six filings. Moreover, the Court actually did what should have been done before the MOTION was filed by determining that those citations do not exist. No further verification is necessary. And, after a diligent search, if the Court could not find the legal authorities that Lucas purported to rely upon and present as real and binding, it is a folly to believe that Lucas' efforts at "correction" would have returned anything different. Further, she could have taken the opportunity in this MOTION to go through—citation by citation—and "verify" or "correct" them to demonstrate to the Court that its findings were, in fact, incorrect, rather than just baldly and without evidence claiming them to be so. She did not do that." |
|||||||||
| Ivins v KMA Consulting Engineers & Ors | Queensland IRC (Australia) | 2 June 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (2) | Relevant submissions ignored | — | — | |
| The Complainant, who was self-represented, used artificial intelligence to assist in preparing her submissions, which included fictitious case citations. The Commission noted the potential seriousness of relying on fabricated citations but did not impose any sanctions on the Complainant, as she was self-represented. The judge held: "In relation to the issue of the Complainant's reference to what appears to be fictitious case authorities, this is potentially a serious matter because it can be viewed as an attempt to mislead the Commission. If an admitted legal practitioner were to do this, there would be grounds to refer the practitioner to the Legal Services Commission for an allegation of misconduct. [...] Given that the Complainant is self-represented, I intend to take the same approach that I adopted in Goodchild and simply afford that part of the Complainant's submissions that deals with the two authorities no weight in determining the two applications." |
|||||||||
| Ploni v. Wasserman et al. | Small Claims Court (Israel) | 1 June 2025 | Pro Se Litigant | ChatGPT; Google Search | Two Fabricated Citations | Monetary Fine | 250 ILS | — | |
| "Directing the Court to nonexistent authorities wastes the Court’s time, a resource meant to serve the public, not be monopolized by a single litigant making baseless arguments." |
|||||||||
| Andersen v. Olympus as Daybreak | D. Utah (USA) | 30 May 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresentation of past cases | Warning | — | — | |
| In an earlier decision, the court had already warned the plaintiff against "any further legal misrepresentations in future communications". |
|||||||||
| GNX v. Children's Guardian | NSW (Australia) | 27 May 2025 | Pro Se Litigant | ChatGPT | One misrepresented precedent | Warning for continuation of proceedings | — | — | |
| After the Applicant confessed to having relied on ChatGPT for the written phase of the proceedings, the hearing was adjourned and the Court "cautioned the Applicant on relying on ChatGPT for legal advice and suggested that the Applicant may wish to seek legal advice from a lawyer about his application." The court ultimately found that one of the authorities illustrated "the risk in relying on ChatGPT and Generative AI for legal advice. The Applicant’s description of the decision in this case in his submissions, for which he used ChatGPT to prepare, is clearly wrong." |
|||||||||
| Brick v. Gallatin County | D. Montana (USA) | 27 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Shakori v Tern, 2025 ONLTB 31233 | LTB (Ontario) (Canada) | 26 May 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (1) | — | — | | |
| Source: Courtready |
|||||||||
| Vechtel et al. v. Gershoni | Israel (Israel) | 25 May 2025 | Pro Se Litigant | Implied | False Quotes Case Law (1); Misrepresented Case Law (1) | Motion denied | — | — | |
| The judge pointed out that the plaintiffs' written request included what appeared to be a direct quote from one of her own previous judgments. However, upon examination, she found that not only did this quote not exist in the cited judgment, but the judgment itself did not even address the legal question at stake. |
|||||||||
| Luther v. Oklahoma DHS | W.D. Oklahoma (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| "The Court has serious reason to believe that Plaintiff used artificial intelligence tools to assist in drafting her objection. While the use of such tools is not prohibited, artificial intelligence often cites to legal authorities, like Cabrera, that do not exist. Continuing to cite to non-existent cases will result in sanctions up to and including dismissal." |
|||||||||
| Rotonde v. Stewart Title Insurance Company | New York (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Nikolic & Anor v Nationwide News Pty Ltd & Anor | SC Victoria (Australia) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (3) | — | — | | |
| Zherka v. Davey et al. | Massachusetts (USA) | 22 May 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Motion struck, with leave to refile | — | — | |
| After being ordered to show cause, plaintiff admitted having used AI for several filings. |
|||||||||
| Evans et al v. Robertson et al (1) | E.D. Michigan (USA) | 21 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | Warning | — | — | |
| Bauche v. Commissioner of Internal Revenue | US Tax Court (USA) | 20 May 2025 | Pro Se Litigant | Implied | Nonexistent cases | Warning | — | — | |
| "While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future." |
|||||||||
| Gjovik v. Apple Inc. | N.D. California (USA) | 19 May 2025 | Pro Se Litigant | Unidentified | Fabricated citation(s) | No sanctions imposed, but warning issued | — | — | |
| Source: Jesse Schaefer |
|||||||||
| Ehrlich v. Israel National Academy of Sciences et al. | Israel (Israel) | 18 May 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | Request dismissed on the merits, monetary sanction | 500 ILS | — | |
| Applicant sought an administrative court order to force the Israel National Academy of Sciences to let him speak at a conference on "Artificial Intelligence and Research: Uses, Prospects, Dangers". This was dismissed, with the court adding: "I will add this: As mentioned above, the subject of the conference where the applicant wishes to speak concerns, among other things, the dangers of artificial intelligence. Indeed, one of these dangers materialized in the applicant's request: He, who is not represented, stated clearly and fairly that he used artificial intelligence for his request. An examination of the request shows that it consequently suffered from 'AI hallucinations' – it mentioned many "judgments" that never came into existence (Regarding this problem, see: HCJ 38379-12-24 Anonymous v. The Sharia Court of Appeals Jerusalem, paragraphs 13-12 (23.2.2025) (hereinafter: the Anonymous matter); HCJ 23602-01-25 The Association for the Advancement of Dog Rights v. The Minister of Agriculture, paragraphs 12-11 (28.2.2025) (hereinafter: the Association matter); and regarding the mentioned problem and the possibility of participating in the conference, see: Babylonian Talmud, Gittin 43a). Just recently, this Court warned, in no uncertain terms, that alongside the blessings of artificial intelligence, one must take excellent care against its pitfalls; 'Eat its inside, throw away its peel' (Anonymous matter, paragraph 26; Association matter, paragraph 19). The applicant did state, clearly, that he used artificial intelligence, and in light of this, he further requested that if a 'technical' error occurred under his hand – it should be seen as a good-faith mistake, not to be held against him. I cannot accept such a request. It does not cure the problems of hallucinating artificial intelligence. Those addressing this Court, whether represented or unrepresented alike, bear the burden of examining whether the precedents they refer to - which are not a 'technical' matter, but rather the beating heart of the pleadings - indeed exist, and substantiate their claims. For this reason too - the request must be dismissed" (Translation by Gemini 2.5). |
|||||||||
| Beenshoof v. Chin | W.D. Washington (USA) | 15 May 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | No sanction imposed; court reminded Plaintiff of Rule 11 obligations | — | — | |
| AI Use: The plaintiff, proceeding pro se, cited “Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)” in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here. |
|||||||||
| Bandla v. Solicitors Regulation Authority | UK (UK) | 13 May 2025 | Pro Se Litigant | Google Search (Allegedly) | Fabricated Case Law (2); Misrepresented Case Law (1), Legal Norm (2) | Application for extension of time refused; appeal struck out as abuse of process; indemnity costs of £24,727.20 ordered; permission to appeal denied | 24727 GBP | — | |
| AI Use: Bandla denied using AI, claiming instead to have relied on Google searches to locate “supportive” case law. He admitted that he did not verify any of the citations and never checked them against official sources. The court found this unacceptable, particularly from someone formerly admitted as a solicitor. Hallucination Details: Bandla’s submissions cited at least 27 cases which the Solicitors Regulation Authority (SRA) could not locate. Bandla maintained summaries and quotations from these cases in formal submissions. When pressed in court, he admitted having never read the judgments, let alone verified their existence. Ruling/Sanction: The High Court refused the application for an extension of time, finding Bandla’s explanations inconsistent and unreliable. The court independently struck out the appeal on grounds of abuse of process due to the submission of fake authority. It imposed indemnity costs of £24,727.20. The judge emphasized that even after being alerted to the fictitious nature of the cases, Bandla neither withdrew nor corrected them. Key Judicial Reasoning: The court found Bandla’s conduct deeply troubling, noting his previous experience as a solicitor and his professed commitment to legal standards. It held that the deliberate or grossly negligent inclusion of fake case law—especially in an attempt to challenge a disciplinary disbarment—was an abuse requiring a strong institutional response. |
|||||||||
| Department of Justice v Wise | Queensland Civil and Administrative Tribunal (Australia) | 13 May 2025 | Pro Se Litigant | Implied | Fabricated: Case Law (4); Misrepresented: Case Law (1), Exhibits or Submissions (3), Legal Norm (2) | Warning | — | — | |
|
The second respondent, Carly Dakota Wise, a self-represented litigant, filed an application for the recusal of a tribunal member, alleging bias and procedural unfairness. The application rested on several grounds, including fabricated legal citations, despite Ms. Wise having been warned in interlocutory proceedings to check the authorities she relied on. The tribunal cited the local Guidelines for the Use of Generative Artificial Intelligence (AI): Guidelines for Responsible Use by Non-Lawyers to stress that self-represented litigants must check the accuracy of their pleadings. |
|||||||||
| Newbern v. Desoto County School District et al. | N.D. Mississippi (USA) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated: Case Law (1); Misrepresented: Exhibits or Submissions (1), Other (1) | Case dismissed, in part as a sanction for fabrication of legal authorities | — | — | |
AI Use: The court found that several of the cases cited by the plaintiff in her briefing opposing Officer Hill’s qualified immunity defense did not exist. Although Newbern suggested the citations may have been innocent mistakes, she did not challenge the finding of fabrication. No AI tool was admitted or named, but the structure and specificity of the invented cases strongly suggest generative AI use. Hallucination Details: The fabricated authorities were not background references, but “key authorities” cited to establish that Hill’s alleged conduct violated clearly established law. The court observed that the fake cases initially appeared to be unusually on-point compared to the rest of plaintiff’s citations, which raised suspicion. Upon scrutiny, it confirmed they did not exist. Ruling/Sanction: The court dismissed the federal claims against Officer Hill as a partial sanction for plaintiff’s fabrication of legal authority and failure to meet the burden under qualified immunity. However, it declined to dismiss the entire case, citing the interest of the minor child involved and the relevance of potential state law claims. It permitted discovery to proceed on those claims to determine whether Officer Hill acted with malice or engaged in other conduct falling outside the scope of Mississippi Tort Claims Act immunity. Key Judicial Reasoning: The court found that plaintiff’s citation of fictitious cases undermined her effort to meet the demanding “clearly established” standard. It rejected her claim that the fabrication was an innocent mistake and viewed it in light of her broader litigation conduct, which included excessive filings and disregard for procedural limits. |
|||||||||
| Crypto Open Patent Alliance v. Wright (2) | UK (UK) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated: Case Law (2); Misrepresented: Case Law (1), Exhibits or Submissions (4), Legal Norm (2), Other (2) | General Civil Restraint Order (GCRO) granted for 3 years; case referred to Attorney General; costs awarded to applicants | 100000 GBP | — | |
AI Use: Dr. Wright, after beginning to represent himself, repeatedly used AI engines (such as ChatGPT or similar) to generate legal documents. These documents were characterized by the court as "highly verbose and repetitious" and full of "legal nonsense". This use of AI contributed to filings containing numerous false references to authority and misrepresentations of existing law. Hallucination Details: While the core issue in Dr. Wright's litigation was his fundamental dishonesty (claiming to be Satoshi Nakamoto based on "lies and ... elaborately forged documents"), the use of AI introduced specific problems. His appeal documents, bearing signs of AI creation, contained "numerous false references to authority". His later submissions also involved "citation of non-existent authorities". This AI-driven production of flawed legal arguments formed part of his broader pattern of disrespect for court rules and process. Ruling/Sanction: Mr Justice Mellor granted a General Civil Restraint Order (GCRO) against Dr. Wright for a three-year period. He found that an Extended CRO (ECRO) would be insufficient given the scope and persistence of Dr. Wright's abusive litigation. The court also referred Dr. Wright's conduct to the Attorney General for consideration of a civil proceedings order under s.42 of the Senior Courts Act 1981. Dr. Wright was ordered to pay the applicants' costs for the CRO application, summarily assessed at £100,000. Key Judicial Reasoning: The court found "overwhelming" evidence that Dr. Wright had persistently brought claims that were Totally Without Merit (TWM), numbering far more than the required threshold. This conduct involved extensive lies and forgeries across multiple jurisdictions and targeted individuals who often lacked the resources to defend themselves. The judge concluded there was a "very significant risk" that Dr. Wright would continue this abusive conduct unless restrained. The court noted his consistent contempt for court rules and processes, including his perjury, forgery, breach of orders, and flawed submissions (including those using AI). A GCRO was deemed just and proportionate to protect both potential future litigants and the finite resources of the court system. |
|||||||||
| Matter of Raven Investigations & Security Consulting B-423447 | GAO (USA) | 7 May 2025 | Pro Se Litigant | Unidentified | Multiple fabricated citations to prior GAO decisions | Warning | — | — | |
AI Use: GAO requested clarification after identifying case citation irregularities. The protester confirmed that their representative was not a licensed attorney and had relied on a combination of public tools, AI-based platforms, and secondary summaries, which produced fabricated or misattributed citations. Hallucination Details: The fabricated citations mirrored patterns typical of AI hallucinations. Ruling/Sanction: Although the protest was dismissed as academic, GAO addressed the citation misconduct. It did not impose sanctions in this case but warned that future submission of non-existent authority could lead to formal disciplinary action, including dismissal, cost orders, and bar referrals (in the case of attorneys). |
|||||||||
| Rotonde v. Stewart Title Insurance Co | NY SC (USA) | 6 May 2025 | Pro Se Litigant | Implied | Several non-existent legal citations | Motion to dismiss granted in full; no sanction imposed, but court formally warned plaintiff | — | — | |
AI Use: The court observed that “some of the cases that plaintiff cites… do not exist,” and noted it had “tried, in vain,” to find them. While no explicit AI use is admitted by the plaintiff, the pattern and specificity of the fabricated citations are characteristic of LLM-generated hallucinations. Ruling/Sanction: The court dismissed all five causes of action (negligence, tortious interference, aiding and abetting fraud, declaratory judgment, and breach of the implied covenant of good faith and fair dealing) as either untimely or duplicative/deficient on the merits. It declined to impose sanctions but explicitly invoked Dowlah v. Professional Staff Congress, 227 AD3d 609 (1st Dept. 2024), and Will of Samuel, 82 Misc 3d 616 (Sur. Ct. 2024), to warn plaintiff that any future citation of fictitious cases would result in sanctions. Key Judicial Reasoning: Justice Jamieson noted that while the court is “sensitive to plaintiff's pro se status,” that does not excuse disregard of procedural rules or the submission of fictitious citations. The court emphasized that its prior decision in related litigation in 2022 undermined plaintiff’s tolling claims, and that Executive Order extensions during the COVID-19 pandemic did not rescue otherwise-expired claims. The hallucinated citations failed to salvage plaintiff’s fraud and tolling theories, and their use was treated as an aggravating (though not yet sanctionable) factor. |
|||||||||
| X v. Board of Trustees of Governors State University | N.D. Illinois (USA) | 6 May 2025 | Pro Se Litigant | Implied | One fabricated citation | Warning | — | — | |
|
"For that principal [sic] [X] cites a case, Gunn v. McKinney, 259 F.3d 824, 829 (7th Cir. 2001), which neither defense counsel nor the Court has been able to locate. The Court reminds [X] that Federal Rule of Civil Procedure 11 applies to pro se litigants, and sanctions may result from such conduct, especially if the citation to Gunn was not merely a typographical or citation error but instead referred to a non-existent case. By presenting a pleading, written motion, or other paper to the Court, an unrepresented party acknowledges they will be held responsible for its contents. See Fed. R. Civ. P. 11(b)." |
|||||||||
| Harris v. Take-Two Interactive Software | D. Colorado (USA) | 6 May 2025 | Pro Se Litigant | Implied | Fabricated: Case Law (1); False Quotes: Case Law (1) | Warning | — | — | |
|
The court held that: "The use of fictitious quotes or cases in filings may subject a party, including a pro se party, to sanctions pursuant to Federal Rule of Civil Procedure 11 as “pro se litigants are subject to Rule 11 just as attorneys are.”" |
|||||||||
| Wilt v. Department of the Navy | E.D. Texas (USA) | 2 May 2025 | Pro Se Litigant | Unidentified | Fabricated: Case Law (2) | Warning | — | — | |
|
Source: Jesse Schaefer
|
|||||||||
| Lozano González v. Roberge | Housing Administrative Tribunal (Canada) | 1 May 2025 | Pro Se Litigant | ChatGPT | False Quotes: Legal Norm (1) | — | — | | |
|
The landlord sought to repossess a rental property, claiming the lease renewal was suspended based on a misinterpretation of articles of the Civil Code of Québec. He used ChatGPT to translate these articles, which produced a completely different meaning. The Tribunal found the repossession request invalid, as it was based on a date prior to the lease's end. The Tribunal rejected the claim of abuse, accepting the landlord's sincere belief in his misinterpretation, influenced by the AI translation, and noting his language barrier and residence in Mexico. The Tribunal advised the landlord to seek reliable legal advice in the future. |
|||||||||
| Gustafson v. Amazon.com | D. Arizona (USA) | 30 April 2025 | Pro Se Litigant | Implied | Fabricated: Case Law (1); Misrepresented: Exhibits or Submissions (1) | Warning | — | — | |
|||||||||
| Moales v. Land Rover Cherry Hill | D. Connecticut (USA) | 30 April 2025 | Pro Se Litigant | Unidentified | Misrepresented: Case Law (1), Legal Norm (4) | Plaintiff warned to ensure accuracy of future submissions | — | — | |
AI Use: The court stated that “Moales may have used artificial intelligence in drafting his submissions,” citing widespread concerns over AI hallucination. It noted that several citations in his complaint and show-cause response were plainly incorrect or irrelevant. While Moales did not admit AI use, the court cited Strong v. Rushmore Loan Mgmt. Servs., 2025 WL 100904 (D. Neb.) and Mata v. Avianca to contextualize its concern. Hallucination Details: The submissions cited Ernst & Ernst v. Hochfelder, 425 U.S. 185 (1976), and S.E.C. v. W.J. Howey Co., 328 U.S. 293 (1946) as supporting the existence of a federal common law fiduciary duty, an inaccurate legal proposition. The court characterized such misuses as “the norm rather than the exception” in Moales’s submissions. It stopped short of identifying all misused authorities but made clear that the inaccuracies were pervasive. Ruling/Sanction: The complaint was dismissed for lack of subject matter jurisdiction under Rule 12(h)(3). Moales was permitted to file an amended complaint by May 28, 2025, but was warned that future filings must be factually and legally accurate. The court declined to reach the venue issue or impose immediate sanctions but warned Moales that misrepresentation of law may violate Rule 11. Key Judicial Reasoning: The court found no basis for federal question jurisdiction and rejected Moales’s reliance on the Declaratory Judgment Act, constructive trust theories, and a nonexistent “federal common law of securities.” It also held that Moales failed to plausibly allege the amount in controversy necessary for diversity jurisdiction. |
|||||||||
| Willis v. U.S. Bank National Association as Trustee, Igloo Series Trust | N.D. Texas, Dallas Division (USA) | 28 April 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | Warning | — | — | |
|
Source: Jesse Schaefer
|
|||||||||
| Simpson v. Hung Long Enterprises Inc. | B.C. Civil Resolution Tribunal (Canada) | 25 April 2025 | Pro Se Litigant | Unidentified | Fabricated: Case Law (4); Misrepresented: Legal Norm (1) | Other side compensated for time spent through costs order (500 CAD) | — | — | |
|
"Ms. Simpson referred to a non-existent CRT case to support a patently incorrect legal position. She also referred to three Supreme Court of Canada cases that do not exist. Her submissions go on to explain in detail what legal principles those non-existent cases stand for. Despite these deficiencies, the submissions are written in a convincingly legal tone. Simply put, they read like a lawyer wrote them even though the underlying legal analysis is often wrong. These are all common features of submissions generated by artificial intelligence." [...] "25. I agree with Hung Long that there are two extraordinary circumstances here that justify compensation for its time. The first is Ms. Simpson’s use of artificial intelligence. It takes little time to have a large language model create lengthy submissions with many case citations. It takes considerably more effort for the other party to wade through those submissions to determine which cases are real, and for those that are, whether they actually say what Ms. Simpson purported they did. Hung Long’s owner clearly struggled to understand Ms. Simpson’s submissions, and his legal research to try to understand them was an utter waste of his time. I reiterate my point above that Ms. Simpson’s submissions cited a non-existent case in support of a legal position that is the precise opposite of the existing law. This underscores the impact on Hung Long. How can a self-represented party respond to a seemingly convincing legal argument that is based on a case it is impossible to find? 26. I am mindful that Ms. Simpson is not a lawyer and that legal research is challenging. That said, she is responsible for the information she provides the CRT. I find it manifestly unfair that the burden of Ms. Simpson’s use of artificial intelligence should fall to Hung Long’s owner, who tried his best to understand submissions that were not capable of being understood. While I accept that Ms. 
Simpson did not knowingly provide fake cases or misleading submissions, she was reckless about their accuracy." |
|||||||||