This database tracks legal decisions1 in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

1 I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While seeking to be exhaustive (914 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.2
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates AI-drafted pleadings, and has catalogued 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.3 (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
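For readers curious what automated citation checking involves in principle, here is a minimal, hypothetical sketch of the general idea (not a description of PelAIkan's actual implementation, which is not documented here): extract citation-like strings from a brief with a regular expression, try to verify each one against a trusted source, and flag whatever cannot be confirmed for human review. The regex, the `lookup_citation` placeholder, and the sample text are illustrative assumptions only.

```python
import re

# Rough pattern for a few US reporter citation formats, e.g. "485 U.S. 80" or "666 F.2d 119".
# Illustrative only: real checkers need much more robust citation parsing (and many more reporters).
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|F\. ?Supp\.(?: 2d| 3d)?|S\. ?Ct\.)\s+\d{1,4}\b")


def lookup_citation(citation: str) -> bool:
    """Placeholder verification step (hypothetical).

    A real implementation would query a legal database or citation-lookup API
    and also compare the returned case name against the name asserted in the brief.
    """
    known_real_citations = {"485 U.S. 80"}  # stand-in for a real data source
    return citation in known_real_citations


def flag_suspect_citations(brief_text: str) -> list[str]:
    """Return citations that could not be verified and therefore need human review."""
    return sorted(c for c in set(CITATION_RE.findall(brief_text)) if not lookup_citation(c))


if __name__ == "__main__":
    sample = (
        "Compare Peralta v. Heights Med. Ctr., 485 U.S. 80 (1988), "
        "with Green v. John H. Streater, Jr., 666 F.2d 119 (3d Cir. 1981)."
    )
    print(flag_suspect_citations(sample))  # -> ['666 F.2d 119']
```

In practice the verification step is the hard part: as many entries below show, a citation can resolve to a real reporter page yet belong to an entirely different case, so a checker also needs to compare case names and quoted passages, not just confirm that the reference exists.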
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Twist It Up, Inc. v. Annie International, Inc. | United States District Court, Central District of California (USA) | 27 June 2025 | Lawyer | Implied | Fabricated: case law (2); Misrepresented: case law (1) | Monetary Sanction | 500 USD | — | |

Penalty decided in an order from the next day.

| Auto Test Ltd. v. Ministry of Transport | Tel Aviv-Yafo District Court (Israel) | 25 June 2025 | Lawyer | Implied | Fabricated: case law (1); Misrepresented: case law (1) | Motion for Costs denied | — | — | |

"Regarding the petitioner, this is a case of improper conduct, to say the least, on the part of its counsel (who apologized for it), who made use of artificial intelligence in the petition and in the supplementary argument, in which many non-existent and/or erroneous judgments were inserted and embedded. In accordance with the Supreme Court's ruling, there would have been grounds, as a result, for dismissing the petition outright, but I did not do so due to the conduct of the state, as detailed above, and due to the importance of publishing the tender. However, in this case, there is no place to award costs in favor of the petitioner, due to this improper conduct (as, beyond that, the petition also requested many remedies, some of which are not within the jurisdiction of this court)." (Translation by Gemini 2.5.) |
| Schoene v. Oregon Department of Human Services | United States District Court for the District of Oregon (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated: case law (5) | Warning | — | — | |

"Before addressing the merits of Schoene’s motion, the Court notes that Schoene cited several cases in her reply brief to support her motion to amend, including Butler v. Oregon, 218 Or. App. 114 (2008), Curry v. Actavis, Inc., 2017 LEXIS 139126 (D. Or. Aug. 30, 2017), Estate of Riddell v. City of Portland, 194 Or. App. 227 (2004), Hampton v. City of Oregon City, 251 Or. App. 206 (2012), and State v. Burris, 107 Or. App. 542 (1991). These cases, however, do not exist. Schoene’s false citations appear to be hallmarks of an artificial intelligence (“AI”) tool, such as ChatGPT. It is now well known that AI tools “hallucinate” fake cases. See Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. Ct. App. 2024) (noting, in February 2024, that the issue of fictitious cases being submitted to courts had gained “national attention”).6 In addition, the Court notes that a basic internet search seeking guidance on whether it is advisable to use AI tools to conduct legal research or draft legal briefs will explain that any legal authorities or legal analysis generated by AI needs to be verified. The Court cautions Schoene that she must verify the accuracy of any future citations she may include in briefing before this Court and other courts" |
| Dastou v. Holmes | Massachusetts (USA) | 25 June 2025 | Lawyer | ChatGPT | Fabricated citations and false quotes | CLE Course obligation; endorsement of decision not to bill client | — | — | |
| Romero v. Goldman Sachs Bank USA | S.D.N.Y. (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated: case law (1); False quotes: case law (1); Misrepresented: case law (1) | Warning | — | — | |
| Hussein v. Canada | Ottawa (Canada) | 24 June 2025 | Lawyer | Visto.Ai | Fabricated: case law (1) | Monetary Sanction | 100 CAD | — | |

In the original order, the court held: "[38] Applicants’ counsel provided further correspondence advising, for the first time, of his reliance on Visto.ai described as a professional legal research platform designed specifically for Canadian immigration and refugee law practitioners. He also indicated that he did not independently verify the citations as they were understood to reflect well established and widely accepted principles of law. In other words, the undeclared and unverified artificial intelligence had no impact, and the substantive legal argument was unaffected and supported by other cases. [39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law. [40] In fact, the two case hallucinations were not the full extent of the failure of the artificial intelligence product used. It also hallucinated the proper test for the admission on judicial review of evidence not before the decision-maker and cited, as authority, a case which had no bearing on the issue at all. To be clear, this was not a situation of a stray case with a variation of the established test but, rather, an approach similar to the test for new evidence on appeal. As noted above, the case relied upon in support of the wrong test (Cepeda-Gutierrez) has nothing to do with the issue. I note in passing that the case comprises 29 paragraphs and would take only a few minutes to review. [41] In addition, counsel’s reliance on artificial intelligence was not revealed until after the issuance of four Directions. I find that this amounts to an attempt to mislead the Court and to conceal the reliance by describing the hallucinated authorities as “mis-cited” Had the initial request for a Book of Authorities resulted in the explanation in the last letter, I may have been more sympathetic. As matters stand, I am concerned that counsel does not recognize the seriousness of the issue." In the final order, the court added: "While the use of generative AI is not the responsibility of the responding party, it was not appropriate for the Respondent to not make any response to the Court’s four directions and Order. Indeed, assuming that the Respondent noticed the hallucinated cases on receipt of the written argument, it should have brought this to the attention of the Court. [...] Given that Applicant’s counsel was not remunerated for his services in the file, which included the motion on which the offending factum was filed and a motion for a stay of removal and, in addition, that I am also of the view that the Respondent’s lack of action exacerbated matters and it should not benefit as a result, I am ordering a modest amount of $100 to be payable by Applicant’s counsel personally." |
| Mintvest Capital, LTD v. NYDIG Trust Company, et al. | D.C. Puerto Rico (USA) | 23 June 2025 | Lawyer | Claude | Fabricated: case law (3); False quotes: case law (5); Misrepresented: case law (3) | Order to pay opposing counsel's fees | 1 USD | — | |

Plaintiff's counsel in the case of Mintvest Capital, LTD v. NYDIG Trust Company, et al., was found to have included numerous non-existent cases, false quotations, and misrepresented precedents in their filings. The errors were attributed to the use of the AI tool 'Claude' without proper verification. The court recommended sanctions under Rule 11, requiring the attorney to pay the defendants' attorney fees related to the faulty submissions. The court emphasized the need for attorneys to ensure the accuracy of citations, especially when using AI tools, to maintain professional standards. |
| Malone & Anor v Laois County Council & Ors | High Court (Ireland) | 23 June 2025 | Pro Se Litigant | Implied | Fabricated: legal norm (1); False quotes: case law (1); Misrepresented: exhibits or submissions (1), legal norm (1) | Warning | — | — | |

Referring to Ayinde, the judge held that "The principle is essentially the same - though I hasten to say that I would not push the analogy too far as to a factual comparison of the present case with that case and the error in the present case is not of the order of the misconduct in that case. However, appreciable judicial time was wasted on the issue - not least trying to find the source of the quotation. And it does illustrate:
43. All that said, in a substantive sense, the issue is not vital to this case. The underlying proposition for which Mr Malone contends - that domestic courts must implement EU law - is uncontroversial. Not least for that reason, and in light also of the manner in which Mr Malone generally presented his case at the hearing, I am inclined to accept that there was no attempt or intention to mislead and accept also that Mr Malone has apologized for the error. It does not affect the outcome of the present motions." |
| Iskenderian v. Southeastern | Hawai'i (USA) | 23 June 2025 | Lawyer | — | Fabricated citations | N/A | — | — | |

After the other side pointed out that all the authorities cited were fictitious, Counsel admitted it in a brief. The court seemingly did not react.

| Bottrill v Graham & Anor (No 2) | District Court of New South Wales (Australia) | 20 June 2025 | Pro Se Litigant | Unidentified | Fabricated: case law (1) | The second defendant's Notice of Motion for summary dismissal of the plaintiff’s claim was dismissed, with costs reserved to the trial judge. | — | — | |

"When the parties came before the court on 22 May 2025, there had been little time for the plaintiff, the first defendant and the court to examine the second defendant’s written submissions served late on the night before. It was nevertheless immediately apparent that the second defendant sought to rely upon authority and court rules which were not merely misstated but, in some circumstances, imaginary. I am satisfied that all of the judgments and rules referred to in the submissions of 21 May 2025 were misstated, non-existent, or both, and that Gen AI had been used to prepare these submissions. An example was the citation of a decision of the Supreme Court of New South Wales described as “Wu v Wilks” (I will not provide the citation given in full, as there is a risk of it being picked up as genuine by other Gen AI: Luck v Secretary, Services Australia [2025] FCAFC 26 at [14]). There is no decision with this name, either in the Supreme Court of New South Wales or in any other jurisdictions. The caselaw citation given for “Wu v Wilks” belonged to a judgment on wholly unrelated material and the principles of law for which it was cited. All of the citations suffered similar problems. I drew these issues to the attention of the second defendant and enquired whether she had used Gen AI in the preparation of her submissions and, if so, whether she was aware of the Practice Note. She acknowledged that she had done so but said this was because she had very little time to provide submissions in reply and was deeply distressed by these proceedings" |
| Pro Health Solutions Ltd v ProHealth Inc | Intellectual Property Office (UK) | 20 June 2025 | Pro Se Litigant, Lawyer | ChatGPT | False quotes: case law (3); Misrepresented: case law (6), doctrinal work (2) | Warning; No costs awarded for the appeal since both sides seemingly erred | — | — | |

Claimant used Chat GPT to assist in drafting his grounds of appeal and skeleton argument. The documents included fabricated citations and misrepresented case summaries. Claimant admitted to using Chat GPT and apologized for the errors. Compounding matters, the court suspected that the respondent had also used AI, since the cases cited in the Counsel's skeleton, though extant, did not support any of the propositions made - and Counsel was unable to explain how they got there. |
| Reilly v. Conn. Interlocal Risk Mgmt. Agency | D. Connecticut (USA) | 20 June 2025 | Pro Se Litigant | Implied | False quotes: case law (1); Misrepresented: case law (1) | Warning | — | — | |

"Artificial intelligence may ultimately prove a helpful tool to assist pro se litigants in bringing meritorious cases to the courts. In that way, artificial intelligence has the potential to contribute to the cause of justice. However, accessing any beneficial use of artificial intelligence requires carefully understanding its limitations. For example, if merely asked to write an opposition to an opposing party’s motion or brief, or to respond to a court order, an artificial intelligence program is likely to generate such a response, regardless of whether the response actually has an arguable basis in the law. Where the court or opposing party was correct on the law, the program will very likely generate a response or brief that includes a false statement of the law. And because artificial intelligence synthesizes many sources with varying degrees of trustworthiness, reliance on artificial intelligence without independent verification renders litigants unable to represent to the Court that the information in their filings is truthful." |
| J.R.V. v. N.L.V. | SC British Columbia (Canada) | 19 June 2025 | Pro Se Litigant | Unidentified | Fabricated: case law (1) | Costs to the claimant in the amount of $200. | 200 CAD | — | |

In the case of J.R.V. v. N.L.V., the respondent, appearing in person, used a generative AI tool to prepare parts of her written argument. This resulted in the inclusion of citations to non-existent cases, known as 'hallucinations.' The claimant sought costs due to the need to research and respond to these false citations. The court acknowledged the issue but noted that the respondent was not represented by counsel and was unaware of the AI's capability to generate false citations. Moreover, the claimant was wrong as to the alleged non-existence of some citations. The court ordered the respondent to pay $200 in costs to the claimant. |
| In re Marriage of Isom and Kareem | Illinois CA (USA) | 16 June 2025 | Pro Se Litigant | Implied | Fabricated: case law (1); Misrepresented: case law (1) | The appeal was denied, and the trial court's decision was affirmed. | — | — | |
| Attorney General v. $32,000 in Canadian Currency | Ontario SCJ (Canada) | 16 June 2025 | Pro Se Litigant | Implied | Fabricated: case law (2) | Warning | — | — | |

"[49] Mr. Ohenhen submitted a statement of legal argument to the court in support of his arguments. In those documents, he referred to at least two non-existent or fake precedent court cases, one ostensibly from the Court of Appeal for Ontario and another ostensibly from the British Columbia Court of Appeal. In reviewing his materials after argument, I tried to access these cases and was unable to find them. I asked the parties to provide them to me. [50] Mr. Ohenhen responded with a “clarification”, providing different citations to different cases. I asked for an explanation as to where the original citations came from, and specifically, whether they were generated by artificial intelligence. I have received no response to that query. [51] While Mr. Ohenhen is not a lawyer with articulated professional responsibilities to the court, every person who submits authorities to the court has an obligation to ensure that those authorities exist. Simple CanLII searches would have revealed to Mr. Ohenhen that these were fictitious citations. Putting fictitious citations before the court misleads the court. It is unacceptable. Whether the cases are put forward by a lawyer or self-represented party, the adverse effect on the administration of justice is the same. [52] Mr. Ohenhen’s failure to provide a direct and forthright answer to the court’s questions is equally concerning. [53] Court processes are not voluntary suggestions, to be complied with if convenient or helpful to one’s case. The proper administration of justice requires parties to respect the rules and proceed in a forthright manner. That has not happened here. [54] I have not attached any consequences to this conduct in this case. However, should such conduct be repeated in any court proceedings, Mr. Ohenhen should expect consequences. Other self-represented litigants should be aware that serious consequences from such conduct may well flow." |
| Taylor v. Cooper Power & Lighting Corp. | E.D.N.Y. (USA) | 13 June 2025 | Lawyer | Implied | One fabricated citation | Warning | — | — | |
"In Rutella's reply in support of his motion to vacate, he cites Green v. John H. Streater, Jr., 666 F.2d 119 (3d Cir. 1981). DE [69-1] at 8. When the Court was unable to locate Green or any case resembling it, the Court instructed Rutella's attorney, Kevin Krupnick, to either submit a copy of the case or show cause why he should not be sanctioned. See Electronic Order dated May 22, 2025. Krupnick admitted that he fabricated the Green case and claimed that he used it as a “placeholder” in a draft. DE [70-1]. It is implausible that an attorney would cite a case as specific as “Green v. John H. Streater, Jr., 666 F.2d 119 (3d Cir. 1981)” – which Krupnick admits does not exist – as a “placeholder” that he intended to replace. This is particularly true here, as Plaintiff had already cited the case that he subsequently claimed he intended to use. See DE [69-1] at 3 (citing Peralta v. Heights Med. Ctr., 485 U.S. 80 (1988)). Although Krupnick's conduct raises questions of his adherence to Fed. R. Civ. P. 11 (as he himself concedes), given the Court's recommendation that Rutella's motion to vacate be denied, the Court declines to recommend further action with respect to Rutella's misleading submission. " |
| Rochon Eidsvig & Rochon Hafer v. JGB Collateral | Texas CA (USA) | 12 June 2025 | Lawyer | Implied | Fabricated: case law (4) | 8 mandatory hours of Continuing Legal Education on ethics and AI | — | — | |

"Regardless of whatever resources are used to prepare a party’s brief, every attorney has an ongoing responsibility to review and ensure the accuracy of filings with this and other courts. This includes checking that all case law cited in a brief actually exists and supports the points being made. It is never acceptable to rely on software or technology—no matter how advanced—without reviewing and verifying the information. The use of AI or other technology does not excuse carelessness or failure to follow professional standards. Technology can be helpful, but it cannot replace a lawyer’s judgment, research, or ethical responsibilities. The practice of law changes with the use of new technology, but the core duties of competence and candor remain the same. Lawyers must adapt to new tools without lowering their standards." |
| Reed v. Community Health Care | W.D. Washington (USA) | 10 June 2025 | Pro Se Litigant | Implied | Fabricated citations, false quotes | Warning | — | — | |
" Plaintiffs identify fictitious quotes and citations in their briefing to support their arguments. For example, Plaintiffs purport to quote language from S.H. Hold v. United States, 853 F.3d 1056, 1064 (9th Cir. 2017) and Green v. United States, 630 F.3d 1245, 1249 (9th Cir. 2011). (Reed 2, Dkt. No. 23 at 6.) However, the quoted language is nowhere found in those cases. Plaintiffs also cite to a case identified as “Horne v. Potter, 557 F.3d 953, 957 (9th Cir. 2009).” (Id. at 7.) That citation, however, is for a case titled Bell v. The Hershey Company. Plaintiffs appear to acknowledge they offered fictitious citations. (See Reed 2, Dkt. No. 27.) Plaintiffs are cautioned that providing fictitious cases and quotes will lead to sanctions. " |
| Rodney Chagas v. Fabricio Petinelli Vieira Coutinho | Parana State (Brazil) | 9 June 2025 | Lawyer | Unidentified | Fabricated: case law (1); Misrepresented: case law (1) | Monetary fine (1% of case value) | — | — | |

In this rebuttal, the lawyer cited jurisprudence that the presiding judge (Relator) found to be "impressively so delineated and harmonious with the case". This prompted the judge to investigate the precedent more closely. He discovered that while the case number indicated was real, it belonged to a completely different case unrelated to the legal matter being discussed, leading to the suspicion of an AI "hallucination". The court concluded: "It is totally inconceivable to imagine that the Judiciary, already so burdened with countless lawsuits, needs to investigate all the case law set forth in the legal grounds reported by the parties in the procedural documents, despite the duty to act in good faith set forth in Article 5 of the CPC. After all, it is entirely based on the principle of trust expectation that all subjects act in accordance with existing and valid rules. Thus, even if the appellant claims that such conduct was the result of an error, a claim that has not been satisfactorily proven, but which is taken as a premise for the purposes of argumentation, in the present case, it would be, at the very least, an inexcusable, gross error resulting from serious misconduct, ruling out the possibility of proceeding without any implications in this judicial field, so as not to allow any hesitation in considering the aforementioned conduct as being clearly litigious in bad faith. Thus, as a result of having acted in a manifestly reckless manner (Art. 80, V of the CPC), I condemn the appellant for litigation in bad faith, and he must pay the fine set at 1% of the value of the case (Art. 81 of the CPC), in accordance with the grounds." (Translated with DeepL.com, free version.)
| Zahariev v. Zaharieva | Supreme Court of British Columbia (Canada) | 9 June 2025 | Pro Se Litigant | Implied | Fabricated: case law (3); Misrepresented: case law (1) | — | — | | |
| Chen v. Vana et al. | Haifa Magistrate's Court (Israel) | 8 June 2025 | Lawyer | Unidentified | Fabricated: case law (1) | Appeal was ultimately dismissed on merits; Monetary sanction | 3000 ILS | — | |

The appellant, a lawyer representing himself in an appeal, submitted a brief that included non-existent legal citations. An employee in the lawyer's office had used an AI system to assist in drafting the document. After the opposing counsel and the court itself were unable to locate the cited precedents, the lawyer admitted they were AI-generated and a "good-faith mistake" for which he took full responsibility. While citing its authority to dismiss the appeal entirely due to the submission of non-existent sources, the court chose a "softer" sanction. It proceeded to hear the case, ultimately dismissing the appeal on its merits and imposing a separate monetary penalty payable to the State Treasury for the misconduct. |
| Ayinde v. Haringey & Al-Haroun v. QNB | High Court (UK) | 6 June 2025 | Lawyer | Unidentified | Fabricated: case law (8); False quotes: case law (1); Misrepresented: case law (1), legal norm (1) | No contempt, but referral to professional bodies | — | — | |

This judgment, delivered on 6 June 2025 by the Divisional Court of the King's Bench Division, addresses two cases referred under the court's Hamid jurisdiction, which concerns the court's power to enforce duties lawyers owe to the court. Both cases involve lawyers submitting written arguments or evidence containing false information, specifically non-existent case citations, generated through the use of artificial intelligence without proper verification. The Court used this opportunity to issue broader guidance on the use of AI in legal practice, raising concerns about the competence, training, and supervision of lawyers. |
| Goins v. Father Flanagan's Boys Home | D. Nebraska (USA) | 5 June 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresented authorities | Warning | — | — | |
"This Court's local rules permit the use of generative artificial intelligence programs, but all parties, including pro se parties, must certify “that to the extent such a program was used, a human signatory of the document verified the accuracy of all generated text, including all citations and legal authority,” NECivR 7.1(d)(4)(B). The plaintiff's brief contains no such certification, nor does the plaintiff deny using artificial intelligence. See filing 27 at 9."

| Powhatan County School Board v. Skinger et al | E.D. Virginia (USA) | 2 June 2025 | Pro Se Litigant | ChatGPT | Fabricated: case law (37); Misrepresented: case law (6) | Relevant motions stricken | — | — | |

"The pervasive misrepresentations of the law in Lucas' filings cannot be tolerated. It serves to make a mockery of the judicial process. It causes an enormous waste of judicial resources to try to find cited cases that do not exist and to determine whether a cited authority is relevant or binding, only to determine that most are neither. In like fashion, Lucas' adversaries also must run to ground the nonexistent cases or address patently irrelevant ones. The adversaries must thus incur needless legal fees and expenses caused by Lucas' pervasive citations to nonexistent or irrelevant cases. [...] However, as previously noted Lucas appears to be judgment proof so monetary sanctions likely will not deter her from the abusive practices reflected in her filings and in her previously announced, consistently followed, abuse of the litigation proceedings created by the Individuals with Disabilities Education Act, 20 U.S.C. § 1400, et seq. (“IDEA”). So, the Court must find some other way to protect the interests of justice and to deter Lucas from the abuses which have come to mark her approach to participation as a defendant in the judicial process. In this case, the most appropriate remedy is to strike Lucas' filings where they are burdensome by virtue of volume and exceed permitted page limits, where they are not cogent or understandable (when given the generous latitude afforded pro se litigants), and where they misrepresent the law by citing nonexistent or utterly irrelevant cases." In a subsequent Opinion, the court declined to reconsider or review its findings, pointing out that: "To begin, it is unclear what Lucas means by "contested" citations. The citations that the Court found to not exist are not "contested." They simply do not exist. There is no contesting that fact because the Court checked each citation that was referenced in its MEMORANDUM OPINION, exactly as Lucas cited them (and through other research means), and could not find any citation that matched what Lucas cited. That research demonstrates that the Court's findings are, in fact, supported rather than "[u]nsupported." Id. Then, in no way did the Court "wrongly assume[]" that these citations to nonexistent legal authority were "'fabricated' due to the use of generative AI." Id. The Court meticulously checked every citation that it held did not exist in those decisions. Those decisions were not based on "assumptions" but, instead, on the fact that either (1) no case existed under the reporter citation, case name, or quotation that Lucas used, or (2) a case with the reporter citation did exist but was to an entirely different case than the one cited by Lucas and had no relevancy to the issues of this case. ECF No. 170, at 520. And, there was no incorrect assumption that those nonexistent legal authorities were generated, hallucinated, or fabricated by AI because Lucas admitted, on the record, to using AI when writing her filings with the Court. The fact that her citations to nonexistent legal authority are so pervasive, in volume and in location throughout her filings, can lead to only one plausible conclusion: that an AI program hallucinated them in an effort to meet whatever Lucas' desired outcome was based on the prompt that she put into the AI program. As the Court described in its MEMORANDUM OPINION, this is becoming an alarmingly prevalent occurrence common to AI programs. Id. at 23-26. It is exceedingly clear that it occurred here. [...] 
The MOTION also complains that the Court did not give Lucas an "opportunity to verify or correct citations." Id. Wholly apart from the fact that it is the litigant's (pro se or represented) burden to verify citations, there is no reason to have accorded Lucas the opportunity to verify because the problem was extensive and pervasive across at least six filings. Moreover, the Court actually did what should have been done before the MOTION was filed by determining that those citations do not exist. No further verification is necessary. And, after a diligent search, if the Court could not find the legal authorities that Lucas purported to rely upon and present as real and binding, it is a folly to believe that Lucas' efforts at "correction" would have returned anything different. Further, she could have taken the opportunity in this MOTION to go through—citation by citation—and "verify" or "correct" them to demonstrate to the Court that its findings were, in fact, incorrect, rather than just baldly and without evidence claiming them to be so. She did not do that." |
| Ivins v KMA Consulting Engineers & Ors | Queensland IRC (Australia) | 2 June 2025 | Pro Se Litigant | Unidentified | Fabricated: case law (2) | Relevant Submissions ignored | — | — | |

The Complainant, who was self-represented, used artificial intelligence to assist in preparing her submissions, which included fictitious case citations. The Commission noted the potential seriousness of relying on fabricated citations but did not impose any sanctions on the Complainant, as she was self-represented. The judge held: "In relation to the issue of the Complainant's reference to what appears to be fictitious case authorities, this is potentially a serious matter because it can be viewed as an attempt to mislead the Commission. If an admitted legal practitioner were to do this, there would be grounds to refer the practitioner to the Legal Services Commission for an allegation of misconduct. [...] Given that the Complainant is self-represented, I intend to take the same approach that I adopted in Goodchild and simply afford that part of the Complainant's submissions that deals with the two authorities no weight in determining the two applications. " |
| Ploni v. Wasserman et al. | Small Claims Court (Israel) | 1 June 2025 | Pro Se Litigant | ChatGPT; Google Search | Two Fabricated Citations | Monetary Fine | 250 ILS | — | |
" Directing the Court to nonexistent authorities wastes the Court’s time, a resource meant to serve the public, not be monopolized by a single litigant making baseless arguments. " |
| Andersen v. Olympus as Daybreak | D. Utah (USA) | 30 May 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresentation of past cases | Warning | — | — | |
In an earlier decision, the court had already warned the plaintiff against "any further legal misrepresentations in future communications".

| Delano Crossing v. County of Wright | Minnesota Tax Court (USA) | 29 May 2025 | Lawyer | Unidentified | Fabricated: case law (1); Misrepresented: case law (1), legal norm (2) | Breach of Rule 11, but no monetary sanction warranted; referred counsel to Lawyers Professional Responsibility Board | — | — | |

**AI Use:** Attorneys for Wright County submitted a memorandum in support of a motion for summary judgment that contained five case citations generated by artificial intelligence; these citations did not refer to actual judicial decisions. Much of the brief appeared to be AI-written. The attorney who signed and filed the brief acknowledged that the cited authorities did not exist and that much of the brief was drafted by AI.

**Ruling/Sanction:** The Court found Counsel's conduct violated Rule 11.02(b) of the Minnesota Rules of Civil Procedure, as fake case citations cannot support any legal claim and there is an affirmative duty to investigate the legal underpinnings of a pleading. The Court found no merit in Counsel's defense, noting that the substitute cases she offered did not support the legal contentions in the brief, and that the brief demonstrated a fundamental misunderstanding of legal standards. The Court did not find her insinuation that another, accurate motion document existed to be credible. Although the Court considered summarily denying the County's motion as a sanction, it ultimately denied the motion on its merits in a concurrent order because the arguments were so clearly incorrect. The Court declined to order further monetary sanctions, believing its Order to Show Cause and the current Order on Sanctions were sufficient to deter Counsel from relying solely on AI for case citations or legal conclusions in the future. However, the Court referred the matter concerning Counsel's conduct to the Minnesota Lawyers Professional Responsibility Board for further review, as the submission of an AI-generated brief with fake citations raised questions regarding her honesty, trustworthiness, and fitness as a lawyer.
| Anita Krishnakumar et al. v. Eichler Swim and Tennis Club | CA SC (USA) | 29 May 2025 | Lawyer | Implied | Fabricated: case law (2) | Argument lost on the merits in tentative ruling | — | — | |

The underlying motion was later withdrawn, with the result that the tentative ruling was not adopted.

| Mid Cent. Operating Eng'rs Health v. Hoosiervac | S.D. Ind. (USA) | 28 May 2025 | Lawyer | Unidentified | Fabricated: case law (3) | Monetary Sanction | 6000 USD | — | |

(Earlier report and recommendation can be found here.)

**AI Use:** Counsel admitted at a show cause hearing that he used generative AI tools to draft multiple briefs and did not verify the citations provided by the AI, mistakenly trusting their apparent credibility without checking.

**Hallucination Details:** Three distinct fake cases across filings. Each was cited in a separate brief, with no attempt at Shepardizing or KeyCiting.

**Ruling/Sanction:** The Court recommended a $15,000 sanction ($5,000 per violation), with the matter referred to the Chief Judge for potential additional professional discipline. Counsel was also ordered to notify Hoosiervac LLC’s CEO of the misconduct and file a certification of compliance. Eventually, the court fined Counsel $6,000, stressing that this was sufficient.

**Key Judicial Reasoning:** The judge stressed that "It is one thing to use AI to assist with initial research, and even nonlegal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney. As noted in the case of an expert witness, an individual's "citation to fake, AI-generated sources . . . shatters his credibility." See Kohls v. Ellison, No. 0:24-cv-03754-LMP-DLM, Doc. 46 at *10 (D. Minn. Jan. 10, 2025)."
| Ko v. Li | Ontario SCJ (Canada) | 28 May 2025 | Lawyer | ChatGPT | Fabricated: case law (3) | Plaintiff’s application dismissed; no costs imposed; court warns against future use of generative AI without verification | — | — | |

(Order to show cause is here.) At the end of the show cause proceedings, Justice Myers noted that, due to the media reports about this case, the goals of any further contempt proceedings were already met, including: "maintaining the dignity of the court and the fairness of civil justice system, promoting honourable behaviour by counsel before the court, denouncing serious misconduct, deterring similar future misconduct by the legal profession, the public generally, and by Ms. Lee specifically, and rehabilitation". The judge therefore declined to impose a fine or to continue the contempt proceedings, on the condition that Counsel undertakes Continuing Professional Development courses (as she said she would), and does not bill her client for any unrelated work (which was helped by the fact that she had so far been working pro bono).

**Sequel:** It later surfaced that Ms. Lee had not been fully honest with the court, leading to renewed contempt proceedings (see here).
| GNX v. Children's Guardian | NSW (Australia) | 27 May 2025 | Pro Se Litigant | ChatGPT | One misrepresented precedent | Warning for continuation of proceedings | — | — | |
After the Applicant confessed to having relied on ChatGPT for the written phase of the proceedings, the hearing was adjourned and the Court "cautioned the Applicant on relying on ChatGPT for legal advice and suggested that the Applicant may wish to seek legal advice from a lawyer about his application." The court ultimately found that one of the authorities illustrated "the risk in relying on ChatGPT and Generative AI for legal advice. The Applicant’s description of the decision in this case in his submissions, for which he used ChatGPT to prepare, is clearly wrong."

| Brick v. Gallatin County | D. Montana (USA) | 27 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Mahala Association (מהל"ה) v. Clalit Health Services et al. | Israel | 26 May 2025 | Lawyer | Tachdin.AI | Fabricated: case law (1), exhibits or submissions (1), legal norm (1); False quotes: case law (1); Misrepresented: case law (4), legal norm (1); Outdated advice: overturned case law (1) | Class action petition struck from the record; finding that Counsel was not fit to act in this case; Monetary sanctions | 50000 ILS | — | |

**AI Use:** Counsel admitted that incorrect citations arose from reliance on an AI-enabled database called “Takdin AI.” The tool generated incorrect references to multiple Supreme Court decisions and falsely cited them as supporting key propositions. Counsel claimed the errors stemmed from time pressure and good faith, but the Court found the explanation inadequate.

**Hallucination Details:** At least 8 citations were found to be fictitious or unrelated to the argument. The hallucinated citations were used in response to motions to dismiss and as the basis for substantive legal claims in the class certification request.

**Ruling/Sanction:** The Court struck the class action petition from the record, found that Counsel was not fit to act in this case, and imposed monetary sanctions (50,000 ILS).

**Key Judicial Reasoning:** The Court emphasized that the inclusion of hallucinated sources—regardless of intent—subverted proper legal process. Citations must be verified, and AI does not absolve attorneys from professional responsibility. The systemic risks posed by hallucinated filings necessitate a firm response going forward.
| R. v. Chand | Ontario (Canada) | 26 May 2025 | Lawyer | Implied | Misrepresented: case law (1) | Warning and Directions for Remainder of case | — | — | |
| So-and-so v. Anonymous | Israel (Israel) | 26 May 2025 | Lawyer | Implied | Fabricated: case law (1); Misrepresented: exhibits or submissions (1), legal norm (2) | AI use was noted by the lower court; no specific sanction for it | — | — | |

The Family Court noted that one motion cited case law that does "not exist at all". This raised "concern about uncontrolled use of artificial intelligence technology," referencing recent Supreme Court guidance on the need for an appropriate judicial response to such instances. On appeal, the District Court acknowledged the Family Court's finding regarding the non-existent case law and the suspicion of AI use. However, like the Family Court, it did not impose a separate sanction for this, as the appeal was dismissed primarily on the grounds of the delay and lack of merit concerning the protocol correction itself |
| Vechtel et al. v. Gershoni | Israel (Israel) | 25 May 2025 | Pro Se Litigant | Implied | False quotes: case law (1); Misrepresented: case law (1) | Motion denied | — | — | |

The judge pointed out that the plaintiffs' written request included what appeared to be a direct quote from one of her own previous judgments. However, upon examination, she found that not only did this quote not exist in the cited judgment, but the judgment itself did not even address the legal question at stake. |
| Concord v. Anthropic | N.D. California (USA) | 23 May 2025 | Expert | Claude.ai | Fabricated attribution and title for (existing) article | Part of brief was struck; court took it into account as a matter of expert credibility | — | — | |
Counsel's explanation of what happened can be found here.

Source: Volokh

| Luther v. Oklahoma DHS | W.D. Oklahoma (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
" The Court has serious reason to believe that Plaintiff used artificial intelligence tools to assist in drafting her objection. While the use of such tools is not prohibited, artificial intelligence often cites to legal authorities, like Cabrera, that do not exist. Continuing to cite to non-existent cases will result in sanctions up to and including dismissal. " |
| Rotonde v. Stewart Title Insurance Company | New York (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |
| Nikolic & Anor v Nationwide News Pty Ltd & Anor | SC Victoria (Australia) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated: case law (3) | — | — | | |
| Garner v. Kadince | Utah C.A. (USA) | 22 May 2025 | Lawyer | ChatGPT | Fabricated: case law (1) | — | 1000 USD | — | |

**AI Use:** The fabricated citations originated from a ChatGPT query submitted by an unlicensed law clerk at Petitioner's law firm. Neither Counsel reviewed the petition’s contents before filing. The firm had no AI use policy in place at the time, though they implemented one after the order to show cause was issued.

**Hallucination Details:** Chief among the hallucinations was Royer v. Nelson, which Respondents demonstrated existed only in ChatGPT’s output and in no official database. Other cited cases were also inapposite or unverifiable. Petitioner’s counsel admitted fault and stated they were unaware AI had been used during drafting.

**Ruling/Sanction:** The court issued three targeted sanctions.

**Key Judicial Reasoning:** The panel (Per Curiam) emphasized that the conduct, while not malicious, still diverted judicial resources and imposed unnecessary burdens on the opposing party. Unlike Mata or Hayes, the attorneys in this case quickly admitted the issue and cooperated, which the court acknowledged. Nonetheless, the submission of fabricated law—especially under counsel's signature—breaches core duties of candor and verification, warranting formal sanctions. The court warned that Utah’s judiciary cannot be expected to verify every citation and must be able to trust lawyers to do so.
| Zherka v. Davey et al. | Massachusetts (USA) | 22 May 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Motion struck, with leave to refile | — | — | |
After being ordered to show cause, plaintiff admitted having used AI for several filings.

| Evans et al v. Robertson et al (1) | E.D. Michigan (USA) | 21 May 2025 | Pro Se Litigant | Implied | Fabricated: case law (1); Misrepresented: case law (1) | Warning | — | — | |
| Bauche v. Commissioner of Internal Revenue | US Tax Court (USA) | 20 May 2025 | Pro Se Litigant | Implied | Nonexistent cases | Warning | — | — | |
" While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future. " |
| Versant Funding v. Teras Breakbulk Ocean Navigation Enterprises | S.D. Florida (USA) | 20 May 2025 | Lawyer | Unidentified | Fabricated: case law (1) | Joint and several liability for Plaintiff’s attorneys' fees and costs incurred in addressing the hallucinated citation; CLE requirement on AI ethics; Monetary fines | 1500 USD | — | |

**AI Use:** First Counsel, who had not previously used AI for legal work, used an unspecified AI tool to assist with drafting a response. He failed to verify the citation before submission. Second Counsel, as local counsel, filed the response without checking the content or accuracy, even though he signed the document. Second Counsel then said that he had initiated "procedural safeguards to prevent this error from happening again by ensuring he, and local counsel, undertake a comprehensive review of all citations and arguments filed with this and every court prior to submission to ensure their provenance can be traced to professional non-AI sources."

**Hallucination Details:** The hallucinated case was cited as controlling Delaware authority on privilege assignments. When challenged by Plaintiff, Defendants initially filed a bare withdrawal without explanation. Only upon court order did they disclose the AI origin and acknowledge the error. Counsel personally apologized to the court and opposing counsel.

**Ruling/Sanction:** Judge William Matthewman imposed a multi-part sanction: joint and several liability for Plaintiff's attorneys' fees and costs incurred in addressing the hallucinated citation, a CLE requirement on AI ethics, and monetary fines (1,500 USD). The Court emphasized that the submission of hallucinated citations—particularly when filed and signed by two attorneys—constitutes reckless disregard for procedural and ethical obligations. Though no bad faith was found, the conduct was sanctionable under Rule 11, § 1927, the Court’s inherent authority, and local professional responsibility rules.

**Key Judicial Reasoning:** The Court distinguished this case from more egregious incidents (O’Brien v. Flick, Thomas v. Pangburn) because the attorneys admitted their error and did not lie or attempt to cover it up. However, the delay in correction and failure to check the citation in the first place were serious enough to warrant monetary penalties and educational obligations.
| Gjovik v. Apple Inc. | N.D. California (USA) | 19 May 2025 | Pro Se Litigant | Unidentified | Fabricated citation(s) | No sanctions imposed, but warning issued | — | — | |
Source: Jesse Schaefer

| Ehrlich v. Israel National Academy of Sciences et al. | Israel (Israel) | 18 May 2025 | Pro Se Litigant | Unidentified | Fabricated: case law (1) | Request dismissed on the merits, monetary sanction | 500 ILS | — | |

Applicant sought an administrative court order to force the Israel National Academy of Sciences to let him speak at a conference on "Artificial Intelligence and Research: Uses, Prospects, Dangers". This was dismissed, with the court adding: " I will add this: As mentioned above, the subject of the conference where the applicant wishes to speak concerns, among other things, the dangers of artificial intelligence. Indeed, one of these dangers materialized in the applicant's request: He, who is not represented, stated clearly and fairly that he used artificial intelligence for his request. An examination of the request shows that it consequently suffered from 'AI hallucinations' – it mentioned many "judgments" that never came into existence (Regarding this problem, see: HCJ 38379-12-24 Anonymous v. The Sharia Court of Appeals Jerusalem, paragraphs 13-12 (23.2.2025) (hereinafter: the Anonymous matter); HCJ 23602-01-25 The Association for the Advancement of Dog Rights v. The Minister of Agriculture, paragraphs 12-11 (28.2.2025) (hereinafter: the Association matter); and regarding the mentioned problem and the possibility of participating in the conference, see: Babylonian Talmud, Gittin 43a). Just recently, this Court warned, in no uncertain terms, that alongside the blessings of artificial intelligence, one must take excellent care against its pitfalls; 'Eat its inside, throw away its peel' (Anonymous matter, paragraph 26; Association matter, paragraph 19). The applicant did state, clearly, that he used artificial intelligence, and in light of this, he further requested that if a 'technical' error occurred under his hand – it should be seen as a good-faith mistake, not to be held against him. I cannot accept such a request. It does not cure the problems of hallucinating artificial intelligence. Those addressing this Court, whether represented or unrepresented alike, bear the burden of examining whether the precedents they refer to - which are not a 'technical' matter, but rather the beating heart of the pleadings - indeed exist, and substantiate their claims. For this reason too - the request must be dismissed" (Translation by Gemini 2.5). |
| Keaau Development Partnership LLC v. Lawrence | Hawaii ICA (USA) | 15 May 2025 | Lawyer | Implied | Fabricated: case law (1) | Monetary sanction against counsel personally; no disciplinary referral | 100 USD | — | |

**AI Use:** Counsel filed a motion to dismiss the appeal that cited “Greenspan v. Greenspan, 121 Hawai‘i 60, 71, 214 P.3d 557, 568 (App. 2009).” The court found that the cited decision does not exist.

**Ruling/Sanction:** A monetary sanction of 100 USD against counsel personally, with no disciplinary referral. The amount reflects counsel’s candor and corrective measures, but the court noted that federal courts have imposed higher sanctions in similar cases.
| Beenshoof v. Chin | W.D. Washington (USA) | 15 May 2025 | Pro Se Litigant | Implied | Fabricated: case law (1) | No sanction imposed; court reminded Plaintiff of Rule 11 obligations | — | — | |

**AI Use:** The plaintiff, proceeding pro se, cited “Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)” in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here.