This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of fabricated arguments. It does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.
[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. See excluded examples.
While seeking to be exhaustive (172 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media and online posts.[2]
[2] Examples include:
- M. Hiltzik, AI ‘hallucinations’ are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that 'hallucinated' since 2023"] (Next, 1 July 2025)
(Readers may also be interested in this project regarding AI use in academic papers.)
If you know of a case that should be included, feel free to contact me.
Download CSV
Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details |
---|---|---|---|---|---|---|---|---|
Johnson v. Dunn | N.D. Alabama (USA) | 23 July 2025 | Lawyer | ChatGPT | Fabricated citations | Public reprimand, disqualification from the case, and referral to the Bar | — | |
In their Response to the OSC, Counsel confessed to the use of AI tools. (As recounted by Above the Law, the law firm involved quickly deleted a recent post they made about using AI.) In the Order, the judge prefaced her findings by noting that "Even in cases like this one, where lawyers who cite AI hallucinations accept responsibility and apologize profusely, much damage is done. The opposing party expends resources identifying and exposing the fabrication; the court spends time reviewing materials, holding hearings, deliberating about sanctions, and explaining its ruling; the substance of the case is delayed; and public confidence about the trustworthiness of legal proceedings may be diminished." The court further reasoned that "At the threshold, the court rejects the invitation to consider that actual authorities stand for the proposition that the bogus authorities were offered to support. That is a stroke of pure luck for these lawyers, and one that did not remediate the waste and harm their misconduct wrought. Further, any sanctions discount on this basis would amplify the siren call of unverified AI for lawyers who are already confident in their legal conclusion. This court will have no part of that." It added that: "Likewise, the court rejects the invitation to consider that the involved lawyers and firm have been deeply embarrassed in media reports. For many very good reasons, courts traditionally have not relied on the media to do the difficult work of professional discipline, and this court is not about to start." |
||||||||
In Re CorMedix | D. New Jersey (USA) | 23 July 2025 | Judge | Implied | Misstated precedents, false quotes | Opinion withdrawn by judge | — | |
In a letter, the defendant pointed out the errors in the Opinion - prompting the judge to withdraw it through a minute order, without offering a rationale. |
||||||||
In re Boy | Illinois AC (USA) | 21 July 2025 | Lawyer | Unidentified | Fabricated citation(s), misrepresented authorities | Attorney ordered to disgorge payment and pay monetary sanctions | 7925 USD | |
Counsel was sanctioned for citing eight nonexistent cases in briefs filed on behalf of his client, in an appeal concerning the termination of parental rights. The court found that Counsel violated Illinois Supreme Court Rule 375 by submitting fictitious case citations generated by AI without verification. As a result, he was ordered to disgorge $6,925.62 received for his work on the appeal and pay an additional $1,000 in monetary sanctions. The court also directed that a copy of the opinion be sent to the Illinois Attorney Registration and Disciplinary Commission. |
||||||||
McCarthy v. DEA | 3rd Circuit CA (USA) | 21 July 2025 | Lawyer | Unidentified | Fabricated citation(s), misrepresented precedents | Relevant pleadings ignored; Order to show cause | — | |
In re Marla C. Martin | N.D. Illinois (Bankruptcy) (USA) | 18 July 2025 | Lawyer | ChatGPT | Fabricated citation(s) | Sanction of $5,500 and mandatory AI education | 5500 USD | |
"The first reason I issue sanctions stems from [Counsel]'s claim of ignorance—he asserts he didn't know the use of AI in general and ChatGPT in particular could result in citations to fake cases. Mr. Nield disputes the court's statement in Wadsworth v. Walmart Inc. (D. Wyo. 2025) that it is "well-known in the legal community that AI resources generate fake cases." Indeed, [Counsel] aggressively chides that assertion, positing that "in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion." I find [Counsel]'s position troubling. At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud." [...] "If anything, [Counsel]’s alleged lack of knowledge of ChatGPT’s shortcomings leads me to do what courts have been doing with increasing frequency: announce loudly and clearly (so that everyone hears and understands) that lawyers blindly relying on generative AI and citing fake cases are violating Bankruptcy Rule 9011 and will be sanctioned" |
||||||||
Source: Volokh | ||||||||
Flycatcher v. Affable Avenue | S.D.N.Y. (USA) | 18 July 2025 | Lawyer | Unidentified | False quote(s) | Decision on sanctions reserved | — | |
Counsel submitted a response to an Order to Show Cause that included a false quote attributed to none other than Mata v. Avianca, a case about hallucinations. |
||||||||
Jordan et al. v. Chicago Housing Authority | Cook County District Court (USA) | 17 July 2025 | Lawyer | ChatGPT | Fabricated citation | Order to show cause | — | |
According to local press, counsel was called to a special hearing to discuss the citation to a hallucinated authority.
||||||||
USA v. McGee et al. | Alabama D.C. (USA) | 16 July 2025 | Lawyer | Ghostwriter Legal | Fabricated cases | Counsel removed from the case | — | |
Following a show cause order, Counsel admitted to having used the tool together with Google Search, and explained that, although he was aware of the issues with AI models like ChatGPT, he did not expect this tool to suffer from the same problems.
||||||||
Hatfield v. Ornelas | (USA) | 16 July 2025 | Pro Se Litigant | Unidentified | Fabricated citation(s) | Order to Show Cause | — | |
ByoPlanet International v. Johansson and Gilstrap | S.D. Florida (USA) | 15 July 2025 | Lawyer, Paralegal | ChatGPT | Fabricated citation(s) | Cases dismissed without prejudice, attorney ordered to pay defendants' attorney fees, referred to Florida Bar. | 1 USD | |
In May, the court asked Counsel to show cause why they should not be sanctioned for filing briefs with hallucinations - especially since they continued filing hallucinated submissions after being warned about it. In their Answer, Counsel revealed that "specific citations and quotes in question were inadvertently derived from internal draft text prepared using generative AI research tools designed to expedite legal research and brief drafting". In the Order, the court noted that Counsel "was not candid to the Court when confronted about his use of AI, stating that some of these documents were “prepared under time constraints,” when he had nearly two more weeks before the deadline to submit his responses." The judge was also unimpressed by Counsel's attempt to shift the blame to a paralegal. |
||||||||
Blaser v. Campbell | Civil Resolution Tribunal (Canada) | 15 July 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | Warning | — | |
Source: Steve Finlay | ||||||||
Woodrow Jackson v. Auto-Owners Insurance Company | M.D. Georgia (USA) | 14 July 2025 | Lawyer | Unidentified | Fabricated citation(s) | Monetary sanction of $1000, CLE requirement, reimbursement of attorney fees and costs to Defendant | 1000 USD | |
Plaintiff's Counsel cited nine non-existent cases in a response to a motion to dismiss, which were generated using AI software. The court found this to be a violation of Rule 11, as the citations were not checked for accuracy. Counsel admitted the error, apologized, and explained the circumstances, including staff transitions and the use of AI. The court imposed a $1000 sanction, required Mr. Braddy to attend a CLE course on AI ethics, and ordered reimbursement of Defendant's attorney fees and costs. |
||||||||
Kessler v. City of Atwater | E.D. California (USA) | 11 July 2025 | Lawyer | Unidentified | Fabricated citation(s) | Order to show cause issued for potential sanctions | — | |
Lloyd’s Register Canada v. Munchang Choi | Federal Court (Canada) | 10 July 2025 | Pro Se Litigant | Unidentified | Fabricated citation(s) | Motion Record removed from Court file; costs awarded to Applicant | 500 CAD | |
The Respondent, a self-represented litigant, used generative AI tools for drafting and preliminary research, leading to the citation of a non-existent case, 'Fontaine v Canada, 2004 FC 1777', in his Motion Record. The Court found this to be a fabricated citation, and the (allegedly) intended citation pointed to an irrelevant case. The court further pointed out that the Respondent had already been caught fabricating citations in a previous proceeding. Despite acknowledging use of AI, the respondent had also failed to provide the declaration on this point required by the AI Practice Direction. The Court ordered the removal of the Motion Record from the file. Costs of $500 CAD were awarded to the Applicant. |
||||||||
Foster Chambers v. Village of Oak Park | 7th Circuit CA (USA) | 9 July 2025 | Pro Se Litigant | Implied | Fabricated citations, false quotes, misrepresented precedents | Order to show cause | — | |
Case No. 14748-08-21 | Supreme Court (Israel) | 9 July 2025 | Lawyer | Unidentified | Fabricated citation(s) | The court dismissed the fabricated evidence and imposed a fine. | 3000 ILS | |
Counsel tried to blame the software he used, as well as an intern; the court was unimpressed. |
||||||||
NCR v KKB | Court of King’s Bench of Alberta (Canada) | 9 July 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | The court disregarded the fabricated citations and did not impose costs on the self-represented litigant. | — | |
Gurpreet Kaur v. Captain Joel Desso | N.D. New York (USA) | 9 July 2025 | Lawyer | Claude Sonnet 4 | False quote(s) | Monetary and professional sanctions | 1000 USD | |
Counsel confessed to having used Claude Sonnet 4 to draft a legal submission, which included fabricated quotations from legal authorities. Counsel said he was pressed for time. After holding that there is "no reason to distinguish between the submission of fabricated cases and the submission of fabricated quotations from real cases. In both postures, the attorney seeks to persuade the Court using legal authority that does not exist", the court held that Mr. Desmarais had violated Rule 11 of the Federal Rules of Civil Procedure by failing to verify the accuracy of the AI-generated content. Counsel was found to have acted in subjective bad faith, as he was aware of the potential for AI to hallucinate legal citations and failed to take corrective action even after the government pointed out the errors. The court imposed a $1,000 monetary sanction and required Counsel to complete a CLE course on the ethical use of AI in legal practice and notify his client of the issue. |
||||||||
Smith v. Gamble | Ohio CA (USA) | 7 July 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | Sanctions granted against Father (TBD) | 1 USD | |
In the case of Smith v. Gamble, the appellant, Karen A. Gamble nka Smith, moved to strike the appellee brief filed by Gary C. Gamble III, alleging that several case citations were fraudulent, as they were either inaccurate, non-existent, or completely false. The court directed the appellee, Father, to provide copies of the cited cases, which he failed to do. Consequently, the court found that Father used nonexistent cases and inappropriate citations, likely generated by an AI tool, to support his arguments. The court denied the motion to strike the brief but granted sanctions against Father for the time and expense incurred by Mother in uncovering the fraudulent citations. The matter was referred to a magistrate to determine the appropriate amount of sanctions. |
||||||||
Source: Robert Freund | ||||||||
Coomer v. My Pillow, Inc. | D. Colorado (USA) | 7 July 2025 | Lawyer | Co-Pilot, Westlaw’s AI, Gemini, Grok, Claude, ChatGPT, Perplexity | Fabricated citations, false quotes, misrepresented precedents | Monetary Sanctions | 6000 USD | |
Prior Order to Show Cause available here. After reviewing - and dismissing - the factual allegations made by Counsel, and noting that they had submitted errata in parallel cases (dealing with other fabricated citations), the court swiftly concluded that they "have violated Rule 11 because they were not reasonable in certifying that the claims, defenses, and other legal contentions contained in Defendants’ Opposition to Motion in Limine [Doc. 283] were warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law." Both counsel were sanctioned with a 3,000 USD fine, payable to the court. |
||||||||
AQ v. BW | Civil Resolution Tribunal (Canada) | 4 July 2025 | Pro Se Litigant | Implied | Fabricated citation | Monetary Sanction | 1000 CAD | |
In the case AQ v. BW, the applicant AQ claimed damages for the non-consensual sharing of an intimate image by the respondent BW. Both parties were self-represented. The tribunal found that BW shared an intimate image of AQ without consent, violating the Intimate Images Protection Act (IIPA). BW attempted to defend their actions by citing a fabricated version of CRTA section 92, which was identified as a hallucination likely generated by artificial intelligence. The tribunal held: "16. I have considered my obligation to give sufficient reasons. I do not consider that obligation to include responding to arguments concocted by artificial intelligence that have no basis in law. I accept that artificial intelligence can be a useful tool to help people find the right language to present their arguments, if used properly. However, people who blindly use artificial intelligence often end up bombarding the CRT with endless legal arguments. They cannot reasonably expect the CRT to address them all. So, while I have reviewed all the parties’ materials and considered all their arguments, I have decided against addressing many of the issues they raise. If I do not address a particular argument in this decision, it is because the argument lacks any merit, is about something plainly irrelevant, or both." The tribunal dismissed BW's defenses as baseless and awarded AQ $5,000 in damages and an additional $1,000 for time spent due to BW's submission of irrelevant evidence. |
||||||||
Source: Steve Finlay | ||||||||
Matter of Sewell Properties Trust | Colorado Court of Appeals (USA) | 3 July 2025 | Pro Se Litigant | Implied | Fabricated citations, quotes, and misrepresented precedents | Warning | — | |
The court noted that: "both Lehr-Guthrie's and McDonald's briefs are replete with errors in their citations to case authority, such as repeated citation errors, references to nonexistent quotes, and incorrect statements about the cases (for instance, as noted above, the two cases McDonald cited for a proposition relating to the duty of impartiality don't even reference that duty). This suggests to us that the briefs may have been drafted with the use of generative artificial intelligence (GAI). “[U]sing a GAI tool to draft a legal document can pose serious risks if the user does not thoroughly review the tool's output.” Al-Hamim v. Star Hearthstone, LLC, 2024 COA 128, ¶ 32. Self-represented litigants must be particularly careful, as they “may not understand that a GAI tool may confidently respond to a query regarding a legal topic ‘even if the answer contains errors, hallucinations, falsehoods, or biases.’ ” Id. (citation omitted). We advise the parties that errors caused by GAI in future filings may result in sanctions. See id. at ¶ 41." The court thus warned that future errors caused by AI could result in sanctions. |
||||||||
Tyrone Walker v. Juliane Pierre | Massachusetts CA (USA) | 3 July 2025 | Lawyer | Implied | Fabricated citations | Struck from the record | — | |
"In a prior order, we struck portions of Walker's brief that included citations to nonexistent cases. We note that the arguments raised would not have changed the outcome of the appeal in any event. " |
||||||||
Sharita Hill v. State of Oklahoma | W.D. Oklahoma (USA) | 3 July 2025 | Lawyer | Implied | Fabricated citations, false quotes | Warning | — | |
"Further, these inaccuracies signal that Plaintiff's counsel may have used AI to assist in the drafting of Plaintiff's Response (or otherwise counsel produced exceptionally sloppy work). In this regard, this Court's Chambers Rules include “Disclosure and Certification Requirements” for use of “Generative Artificial Intelligence” and expressly provide that an attorney or party must disclose in any document to be filed with the Court “that AI was used and the specific AI tool that was used” and to “certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority.” See id. No such disclosure and certification has been made in this case. The Court's Rules further provide that an attorney will be responsible for the contents of any documents prepared with generative AI, in accordance with Rule 11 of the Federal Rules of Civil Procedure, and that the failure to make the disclosure and certification “may result in the imposition of sanctions.”" |
||||||||
Pulserate Investments v. Andrew Zuze and Others | Supreme Court (Zimbabwe) | 3 July 2025 | Lawyer | Unidentified | 12 fabricated citations | N/A | — | |
Counsel in charge apologised to the Court in a letter (available here), explaining that he had failed to supervise the work of his subordinates. |
||||||||
ATSum 0010525-47.2025.5.03.0037 | Regional Labour Court (Brazil) | 2 July 2025 | Lawyer | Unidentified | Fabricated citations | Monetary Sanctions | 1 | |
Counsel for the claimant admitted to using an AI tool to draft the initial legal document without verifying its content. This resulted in the creation of non-existent judicial precedents to support the claimant's case. The court found this to be a serious violation of procedural loyalty and good faith, as it attempted to deceive the court and the opposing party. Consequently, the claimant was fined 5% of the updated value of the case for litigating in bad faith, although the lawyer's apologies were partially accepted, preventing a higher fine. |
||||||||
Bucher v. Appeals Committee | Administrative Court (Israel) | 2 July 2025 | Lawyer | Implied | Fabricated citation(s) | Monetary penalty imposed | — | |
Rafi Najib v MSS Security Pty Limited | Fair Work Commission (Australia) | 2 July 2025 | Pro Se Litigant | Unidentified | Fabricated citation | Application dismissed | — | |
Source: Jay Iyer | ||||||||
Angela and Theodore Chagnon v. Holly Nelson | Chancery Court of Wyoming (USA) | 2 July 2025 | Pro Se Litigant | Implied | One fabricated citation, one misrepresented precedent | Order to show cause issued; potential striking of motion | — | |
Defendant Holly Nelson, appearing pro se, filed a motion to dismiss that included a fabricated case citation, Finch v. Smith, which does not exist. The court inferred that Nelson used AI to draft the motion without verifying the accuracy of the citations. The court issued an order to show cause, requiring Nelson to justify why her filing does not violate Rule 11, or alternatively, to withdraw her motion. If she fails to do so, the court intends to strike her motion entirely. |
||||||||
Source: Robert Freund | ||||||||
Anonymous Family Case matter | Köln District Court (Germany) | 2 July 2025 | Lawyer | Implied | Fictitious citations and references | Warning | — | |
Details of the case were first reported in a LinkedIn post by a local lawyer. |
||||||||
Murray v. State of Victoria | Federal Court (Australia) | 2 July 2025 | Lawyer | Google Scholar (allegedly) | Fabricated citations; misrepresented precedents | Order of costs to other party; no professional referral | 1 AUD | |
"14 Here, the applicant's solicitor’s use of AI in the preparation of two court documents has given rise to cost, inconvenience and delay to the parties and has compromised the effectiveness of the administration of justice. But I do not consider the use of AI in this case means that it is appropriate to refer the solicitors’ conduct to the Victorian Legal Services Board. Here an inexperienced junior solicitor was given the task of preparing document citations for an amended pleading, and did so while working remotely and without access to the documents to be cited. In attempting to cite the relevant documents she used an (apparently AI-assisted) research tool which she considered had produced accurate citations when she previously used it. And as soon as Massar Briggs Law was told of the false citations the problem was addressed. The junior solicitor and the principal solicitor have apologised or expressed their regret to the other parties and the Court, and there was no suggestion that they were not genuine in doing so. 15 The junior solicitor took insufficient care in using Google Scholar as the source of document citations in court documents, and in failing to check the citations against the physical and electronic copies of the cited documents that were held at Massar Briggs Law’s office. The error was centrally one of failing to check and verify the output of the search tool, which was contributed to by the inexperience of the junior solicitor and the failure of Mr Briggs to have systems in place to ensure that her work was appropriately supervised and checked. To censure those errors it is sufficient that these reasons be published." |
||||||||
Doe v. Noem | D.D.C. (USA) | 1 July 2025 | Lawyer | ChatGPT | One fabricated authority | Order to Show Cause | — | |
The fake citation, in this brief, was to Moms Against Poverty v. Dep’t of State, 2022 WL 17951329, at *3. The case docket can be found here. Counsel later confirmed having used ChatGPT and apologised. |
||||||||
Case No. 525309-08-22 | Jerusalem Enforcement and Collection Authority (Israel) | 30 June 2025 | Lawyer | Implied | Fabricated citations to nonexistent norms | Warning | — | |
Northbound Processing v. South African Diamond Regulator | High Court (South Africa) | 30 June 2025 | Lawyer | Legal Genius | Fabricated cases and misrepresented precedents | Referral to the Legal Practice Council for investigation | — | |
"[92] In Mavundla, the court emphasised the trite duty of legal practitioners not to mislead the court, whether through negligence or intent. This includes the duty to present an honest account of the law, which means (inter alia) not presenting fictitious or non-existent cases. In my view, it matters not that such cases were not presented orally, but were contained in written heads of argument. Written heads are as important a memorial of counsel’s argument as oral argument and, for purely practical reasons, are often more heavily relied upon by judges. [...] [95] In this case, counsel’s explanations bear out their submission that there was no deliberate attempt to mislead the court in relation to the use of incorrect case citations in the heads of argument. Their apologies are acknowledged. As is clear from Mavundla, however, even negligence in this context may have grave repercussions particularly to the administration of justice and, in appropriate circumstances, could constitute serious professional misconduct. [96] As a consequence, it is appropriate to make the same order as in Mavundla, namely that the conduct of the applicant’s legal practitioners is referred to the Legal Practice Council for investigation." |
||||||||
Shahid v. Esaam | Georgia CA (USA) | 30 June 2025 | Judge, Lawyer | Unidentified | Several fabricated cases, as well as misrepresented ones, some of which were adopted by the trial court below | Case remanded; monetary penalty | 2500 USD | |
"After the trial court entered a final judgment and decree of divorce, Nimat Shahid (“Wife”) filed a petition to reopen the case and set aside the final judgment, arguing that service by publication was improper. The trial court denied the motion, using an order that relied upon non-existent case law." "We are troubled by the citation of bogus cases in the trial court's order. As the reviewing court, we make no findings of fact as to how this impropriety occurred, observing only that the order purports to have been prepared by Husband's attorney, Diana Lynch. We further note that Lynch had cited the two fictitious cases that made it into the trial court's order in Husband's response to the petition to reopen, and she cited additional fake cases both in that Response and in the Appellee's Brief filed in this Court." |
||||||||
Crespo v. Tesla, Inc. | S.D. Florida (USA) | 30 June 2025 | Pro Se Litigant | Implied | Fabricated citations, quotes, and misrepresented precedents | Plaintiff required to apologize and pay attorney's fees | 921 USD | |
In the case of Crespo v. Tesla, Inc., the pro se plaintiff, Leonardo Crespo, submitted discovery motions containing fabricated case citations and a false quote, which were identified as potentially generated by AI. The court ordered Crespo to show cause for these submissions, and he admitted to using AI in his filings. The court acknowledged Crespo's candor and imposed sanctions requiring him to apologize to the defendant's counsel and pay the reasonable attorney's fees incurred by the defendant in addressing the fake citations. (In a subsequent ruling, the court set the reasonable fees at 921 USD.) |
||||||||
Parra v. United States | Court of Federal Claims (USA) | 27 June 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Warning | — | |
Plaintiff Ravel Ferrera Parra, proceeding pro se, filed a lawsuit against the United States alleging financial harm due to misconduct by various judicial and governmental entities. The court dismissed the case for lack of jurisdiction, as the claims were not within the court's purview. The court noted that Plaintiff's filings appeared to be assisted by AI, as evidenced by the rapid filing of responses, tell-tale language ("Would you like additional affidavits, supporting exhibits, or further refinements before submission?"), and the inclusion of fabricated case citations. "While Plaintiff’s use of AI, by itself, does not violate this Court’s Rules, Plaintiff’s citation to fake cases does." The court further pointed out that: "“It is no secret that generative AI programs are known to ‘hallucinate’ nonexistent cases.” Sanders, 176 Fed. Cl. at 169 (citation omitted). That appears to have happened here. When searching the Federal Claims Reporter for “Tucker v. United States, 71 Fed. Cl. 326 (2006),” Plaintiff’s citation brings the Court to the third page of Grapevine Imports, Ltd. v. United States, 71 Fed. Cl. 324, 326 (2006), a real tax case from this Court. Similarly, the AI used by Plaintiff in Sanders v. United States, 176 Fed. Cl. 163, 169 (2025) also made up a citation to a case called Tucker v. United States. Perhaps both AI programs hallucinated this case name based on the Tucker Act, this Court’s jurisdictional statute. Regardless, here, as in Sanders, the citation to a case called Tucker v. United States does not exist." The court warned Plaintiff about the risks of using AI-generated content without verification but did not impose sanctions. |
||||||||
Sister City Logistics, Inc. v. John Fitzgerald | S.D. Georgia (USA) | 27 June 2025 | Pro Se Litigant | Implied | Fabricated citations and quotes | Warning | — | |
The court observed that the original motion to remand filed pro se by the plaintiff contained non-existent case law and falsified quotations. Although the court did not impose sanctions in this instance, it warned that future use of fake legal authority would result in a show cause order, including against the Counsel who later joined the case. |
||||||||
Page v. Long | Melbourne County Court (Australia) | 27 June 2025 | Pro Se Litigant | Implied | 11 fabricated citations; 2 misrepresented precedents | Litigant lost on merits | — | |
"Generative AI can be beguiling, particularly when the task of representing yourself seems overwhelming. However, a litigant runs the risk that their case will be damaged, rather than helped, if they choose to use AI without taking the time to understand what it produces, and to confirm that it is both legally and factually accurate. " |
||||||||
Jakes v. Youngblood | W.D. Penn. (USA) | 26 June 2025 | Lawyer | Unidentified | Multiple fabricated quotes, including from the court's previous opinions, and misrepresentations | Motion dismissed; order to show cause | — | |
Source: Volokh | ||||||||
Schoene v. Oregon Department of Human Services | D. Oregon (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated citation | Warning | — | |
"Before addressing the merits of Schoene’s motion, the Court notes that Schoene cited several cases in her reply brief to support her motion to amend, including Butler v. Oregon, 218 Or. App. 114 (2008), Curry v. Actavis, Inc., 2017 LEXIS 139126 (D. Or. Aug. 30, 2017), Estate of Riddell v. City of Portland, 194 Or. App. 227 (2004), Hampton v. City of Oregon City, 251 Or. App. 206 (2012), and State v. Burris, 107 Or. App. 542 (1991). These cases, however, do not exist. Schoene’s false citations appear to be hallmarks of an artificial intelligence (“AI”) tool, such as ChatGPT. It is now well known that AI tools “hallucinate” fake cases. See Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. Ct. App. 2024) (noting, in February 2024, that the issue of fictitious cases being submitted to courts had gained “national attention”). In addition, the Court notes that a basic internet search seeking guidance on whether it is advisable to use AI tools to conduct legal research or draft legal briefs will explain that any legal authorities or legal analysis generated by AI needs to be verified. The Court cautions Schoene that she must verify the accuracy of any future citations she may include in briefing before this Court and other courts" |
||||||||
Romero v. Goldman Sachs Bank USA | S.D.N.Y. (USA) | 25 June 2025 | Pro Se Litigant | Implied | Fabricated citation(s), false quotes, misrepresented precedents | Warning | — | |
Dastou v. Holmes | Massachusetts (USA) | 25 June 2025 | Lawyer | ChatGPT | Fabricated citations and false quotes | CLE Course obligation; endorsement of decision not to bill client | — | |
Auto Test Ltd. v. Ministry of Transport | Tel Aviv-Yafo District Court (Israel) | 25 June 2025 | Lawyer | Implied | Non-existent and incorrect legal judgments | Motion for Costs denied | — | |
"Regarding the petitioner, this is a case of improper conduct, to say the least, on the part of its counsel (who apologized for it), who made use of artificial intelligence in the petition and in the supplementary argument, in which many non-existent and/or erroneous judgments were inserted and embedded. In accordance with the Supreme Court's ruling, there would have been grounds, as a result, for dismissing the petition outright, but I did not do so due to the conduct of the state, as detailed above, and due to the importance of publishing the tender. However, in this case, there is no place to award costs in favor of the petitioner, due to this improper conduct (as, beyond that, the petition also requested many remedies, some of which are not within the jurisdiction of this court)." (Translation by Gemini 2.5.) |
||||||||
Hussein v. Canada | Ottawa (Canada) | 24 June 2025 | Lawyer | Visto.Ai | Two fabricated citations and misrepresentation of applicable law | Monetary Sanction | 100 CAD | |
In the original order, the court held: "[38] Applicants’ counsel provided further correspondence advising, for the first time, of his reliance on Visto.ai described as a professional legal research platform designed specifically for Canadian immigration and refugee law practitioners. He also indicated that he did not independently verify the citations as they were understood to reflect well established and widely accepted principles of law. In other words, the undeclared and unverified artificial intelligence had no impact, and the substantive legal argument was unaffected and supported by other cases. [39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law. [40] In fact, the two case hallucinations were not the full extent of the failure of the artificial intelligence product used. It also hallucinated the proper test for the admission on judicial review of evidence not before the decision-maker and cited, as authority, a case which had no bearing on the issue at all. To be clear, this was not a situation of a stray case with a variation of the established test but, rather, an approach similar to the test for new evidence on appeal. As noted above, the case relied upon in support of the wrong test (Cepeda-Gutierrez) has nothing to do with the issue. I note in passing that the case comprises 29 paragraphs and would take only a few minutes to review. [41] In addition, counsel’s reliance on artificial intelligence was not revealed until after the issuance of four Directions. 
I find that this amounts to an attempt to mislead the Court and to conceal the reliance by describing the hallucinated authorities as “mis-cited” Had the initial request for a Book of Authorities resulted in the explanation in the last letter, I may have been more sympathetic. As matters stand, I am concerned that counsel does not recognize the seriousness of the issue." In the final order, the court added: "While the use of generative AI is not the responsibility of the responding party, it was not appropriate for the Respondent to not make any response to the Court’s four directions and Order. Indeed, assuming that the Respondent noticed the hallucinated cases on receipt of the written argument, it should have brought this to the attention of the Court. [...] Given that Applicant’s counsel was not remunerated for his services in the file, which included the motion on which the offending factum was filed and a motion for a stay of removal and, in addition, that I am also of the view that the Respondent’s lack of action exacerbated matters and it should not benefit as a result, I am ordering a modest amount of $100 to be payable by Applicant’s counsel personally." |
||||||||
Malone & Anor v Laois County Council & Ors | High Court (Ireland) | 23 June 2025 | Pro Se Litigant | Implied | One false quote from a fabricated authority | Warning | — | |
Referring to Ayinde, the judge held that "The principle is essentially the same - though I hasten to say that I would not push the analogy too far as to a factual comparison of the present case with that case and the error in the present case is not of the order of the misconduct in that case. However, appreciable judicial time was wasted on the issue - not least trying to find the source of the quotation. And it does illustrate:
43. All that said, in a substantive sense, the issue is not vital to this case. The underlying proposition for which Mr Malone contends - that domestic courts must implement EU law - is uncontroversial. Not least for that reason, and in light also of the manner in which Mr Malone generally presented his case at the hearing, I am inclined to accept that there was no attempt or intention to mislead and accept also that Mr Malone has apologized for the error. It does not affect the outcome of the present motions." |
||||||||
Mintvest Capital, LTD v. NYDIG Trust Company, et al. | D.C. Puerto Rico (USA) | 23 June 2025 | Lawyer | Claude | Fabricated citations, false quotes, and misrepresented precedents | Order to pay opposing counsel's fees | 1 USD | |
Plaintiff's counsel in the case of Mintvest Capital, LTD v. NYDIG Trust Company, et al., was found to have included numerous non-existent cases, false quotations, and misrepresented precedents in their filings. The errors were attributed to the use of the AI tool 'Claude' without proper verification. The court recommended sanctions under Rule 11, requiring the attorney to pay the defendants' attorney fees related to the faulty submissions. The court emphasized the need for attorneys to ensure the accuracy of citations, especially when using AI tools, to maintain professional standards. |
||||||||
Iskenderian v. Southeastern | Hawai'i (USA) | 23 June 2025 | Lawyer | Unidentified | Fabricated citations | N/A | — | |
After the other side pointed out that all of the authorities cited were fictitious, Counsel admitted as much in a brief. The court seemingly did not react. |
||||||||
Pro Health Solutions Ltd v ProHealth Inc | Intellectual Property Office (UK) | 20 June 2025 | Pro Se Litigant, Lawyer | ChatGPT | Fabricated citation(s); Misrepresented precedents | Warning; No costs awarded for the appeal since both sides seemingly erred | — | |
Claimant used ChatGPT to assist in drafting his grounds of appeal and skeleton argument. The documents included fabricated citations and misrepresented case summaries. Claimant admitted to using ChatGPT and apologized for the errors. Compounding matters, the court suspected that the respondent had also used AI, since the cases cited in Counsel's skeleton, though extant, did not support any of the propositions made - and Counsel was unable to explain how they got there. |
||||||||
Bottrill v Graham & Anor (No 2) | District Court of New South Wales (Australia) | 20 June 2025 | Pro Se Litigant | Unidentified | References to non-existent and/or misstated judgments and legal principles. | The second defendant's Notice of Motion for summary dismissal of the plaintiff’s claim was dismissed, with costs reserved to the trial judge. | — | |
"When the parties came before the court on 22 May 2025, there had been little time for the plaintiff, the first defendant and the court to examine the second defendant’s written submissions served late on the night before. It was nevertheless immediately apparent that the second defendant sought to rely upon authority and court rules which were not merely misstated but, in some circumstances, imaginary. I am satisfied that all of the judgments and rules referred to in the submissions of 21 May 2025 were misstated, non-existent, or both, and that Gen AI had been used to prepare these submissions. An example was the citation of a decision of the Supreme Court of New South Wales described as “Wu v Wilks” (I will not provide the citation given in full, as there is a risk of it being picked up as genuine by other Gen AI: Luck v Secretary, Services Australia [2025] FCAFC 26 at [14]). There is no decision with this name, either in the Supreme Court of New South Wales or in any other jurisdictions. The caselaw citation given for “Wu v Wilks” belonged to a judgment on wholly unrelated material and the principles of law for which it was cited. All of the citations suffered similar problems. I drew these issues to the attention of the second defendant and enquired whether she had used Gen AI in the preparation of her submissions and, if so, whether she was aware of the Practice Note. She acknowledged that she had done so but said this was because she had very little time to provide submissions in reply and was deeply distressed by these proceedings" |
||||||||
J.R.V. v. N.L.V. | Supreme Court of British Columbia (Canada) | 19 June 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Costs to the claimant in the amount of $200. | 200 CAD | |
In the case of J.R.V. v. N.L.V., the respondent, appearing in person, used a generative AI tool to prepare parts of her written argument. This resulted in the inclusion of citations to non-existent cases, known as 'hallucinations.' The claimant sought costs due to the need to research and respond to these false citations. The court acknowledged the issue but noted that the respondent was not represented by counsel and was unaware of the AI's capability to generate false citations. Moreover, the claimant was wrong as to the alleged non-existence of some citations. The court ordered the respondent to pay $200 in costs to the claimant. |
||||||||
In re Marriage of Isom and Kareem | Illinois CA (USA) | 16 June 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | The appeal was denied, and the trial court's decision was affirmed. | — | |
Gbolahan Kareem, representing himself, appealed a decision denying his motion to reduce child support payments. His appeal was based on a claimed substantial change in circumstances due to the expiration of his work authorization. However, his appellate brief contained incorrectly cited cases, suggesting reliance on unreliable sources, possibly an AI tool like ChatGPT. The court noted the incorrect citation of 'In re Marriage of Zells' and the inability to locate 'In re Marriage of Amland', indicating fabricated citations. The court affirmed the trial court's decision, citing Gbolahan's failure to provide a complete record or valid legal authority, and presumed the trial court applied the correct law. |
||||||||
Attorney General v. $32,000 in Canadian Currency | Ontario SCJ (Canada) | 16 June 2025 | Pro Se Litigant | Implied | Two fabricated citations | Warning | — | |
"[49] Mr. Ohenhen submitted a statement of legal argument to the court in support of his arguments. In those documents, he referred to at least two non-existent or fake precedent court cases, one ostensibly from the Court of Appeal for Ontario and another ostensibly from the British Columbia Court of Appeal. In reviewing his materials after argument, I tried to access these cases and was unable to find them. I asked the parties to provide them to me. [50] Mr. Ohenhen responded with a “clarification”, providing different citations to different cases. I asked for an explanation as to where the original citations came from, and specifically, whether they were generated by artificial intelligence. I have received no response to that query. [51] While Mr. Ohenhen is not a lawyer with articulated professional responsibilities to the court, every person who submits authorities to the court has an obligation to ensure that those authorities exist. Simple CanLII searches would have revealed to Mr. Ohenhen that these were fictitious citations. Putting fictitious citations before the court misleads the court. It is unacceptable. Whether the cases are put forward by a lawyer or self-represented party, the adverse effect on the administration of justice is the same. [52] Mr. Ohenhen’s failure to provide a direct and forthright answer to the court’s questions is equally concerning. [53] Court processes are not voluntary suggestions, to be complied with if convenient or helpful to one’s case. The proper administration of justice requires parties to respect the rules and proceed in a forthright manner. That has not happened here. [54] I have not attached any consequences to this conduct in this case. However, should such conduct be repeated in any court proceedings, Mr. Ohenhen should expect consequences. Other self-represented litigants should be aware that serious consequences from such conduct may well flow." |
||||||||
Couvrette v. Wisnovsky | Oregon (USA) | 14 June 2025 | Lawyer | Unidentified | Fifteen non-existent cases and misrepresented quotations from seven real cases | Order to Show Cause re: Sanctions | — | |
Counsel said that "The inclusion of inaccurate citations was inadvertent and the result of reliance on an automated legal citation tool." |
||||||||
Taylor v. Cooper Power & Lighting Corp. | E.D.N.Y. (USA) | 13 June 2025 | Lawyer | Implied | One fabricated citation | Warning | — | |
"In Rutella's reply in support of his motion to vacate, he cites Green v. John H. Streater, Jr., 666 F.2d 119 (3d Cir. 1981). DE [69-1] at 8. When the Court was unable to locate Green or any case resembling it, the Court instructed Rutella's attorney, Kevin Krupnick, to either submit a copy of the case or show cause why he should not be sanctioned. See Electronic Order dated May 22, 2025. Krupnick admitted that he fabricated the Green case and claimed that he used it as a “placeholder” in a draft. DE [70-1]. It is implausible that an attorney would cite a case as specific as “Green v. John H. Streater, Jr., 666 F.2d 119 (3d Cir. 1981)” – which Krupnick admits does not exist – as a “placeholder” that he intended to replace. This is particularly true here, as Plaintiff had already cited the case that he subsequently claimed he intended to use. See DE [69-1] at 3 (citing Peralta v. Heights Med. Ctr., 485 U.S. 80 (1988)). Although Krupnick's conduct raises questions of his adherence to Fed. R. Civ. P. 11 (as he himself concedes), given the Court's recommendation that Rutella's motion to vacate be denied, the Court declines to recommend further action with respect to Rutella's misleading submission. " |
||||||||
Rochon Eidsvig & Rochon Hafer v. JGB Collateral | Texas CA (USA) | 12 June 2025 | Lawyer | Implied | Four fabricated cases | 8 mandatory hours of Continuous Legal Education on ethics and AI | — | |
"Regardless of whatever resources are used to prepare a party’s brief, every attorney has an ongoing responsibility to review and ensure the accuracy of filings with this and other courts. This includes checking that all case law cited in a brief actually exists and supports the points being made. It is never acceptable to rely on software or technology—no matter how advanced—without reviewing and verifying the information. The use of AI or other technology does not excuse carelessness or failure to follow professional standards. Technology can be helpful, but it cannot replace a lawyer’s judgment, research, or ethical responsibilities. The practice of law changes with the use of new technology, but the core duties of competence and candor remain the same. Lawyers must adapt to new tools without lowering their standards." |
||||||||
Department of Justice v Wise | Queensland Civil and Administrative Tribunal (Australia) | 11 June 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
The second respondent, Carly Dakota Wise, a self-represented litigant, filed an application for the recusal of a tribunal member, alleging bias and procedural unfairness. The application was based on several grounds, including fabricated legal citations - despite Ms. Wise having been warned in interlocutory proceedings to check the authorities she relied on. The court cited the local guidelines, The Use of Generative Artificial Intelligence (AI): Guidelines for Responsible Use by Non-Lawyers, available here, to stress that self-represented litigants need to check the accuracy of their pleadings. |
||||||||
Reed v. Community Health Care | W.D. Washington (USA) | 10 June 2025 | Pro Se Litigant | Implied | Fabricated citations, false quotes | Warning | — | |
" Plaintiffs identify fictitious quotes and citations in their briefing to support their arguments. For example, Plaintiffs purport to quote language from S.H. Hold v. United States, 853 F.3d 1056, 1064 (9th Cir. 2017) and Green v. United States, 630 F.3d 1245, 1249 (9th Cir. 2011). (Reed 2, Dkt. No. 23 at 6.) However, the quoted language is nowhere found in those cases. Plaintiffs also cite to a case identified as “Horne v. Potter, 557 F.3d 953, 957 (9th Cir. 2009).” (Id. at 7.) That citation, however, is for a case titled Bell v. The Hershey Company. Plaintiffs appear to acknowledge they offered fictitious citations. (See Reed 2, Dkt. No. 27.) Plaintiffs are cautioned that providing fictitious cases and quotes will lead to sanctions. " |
||||||||
Rodney Chagas v. Fabricio Petinelli Vieira Coutinho | Parana State (Brazil) | 9 June 2025 | Lawyer | Unidentified | Misrepresentation of a precedent | Monetary fine (1% of case value) | — | |
In this rebuttal, the lawyer cited jurisprudence that the presiding judge (Relator) found to be "impressively so delineated and harmonious with the case". This prompted the judge to investigate the precedent more closely. He discovered that while the case number indicated was real, it belonged to a completely different case unrelated to the legal matter being discussed, leading to the suspicion of an AI "hallucination". The court concluded: "It is totally inconceivable to imagine that the Judiciary, already so burdened with countless lawsuits, needs to investigate all the case law set forth in the legal grounds reported by the parties in the procedural documents, despite the duty to act in good faith set forth in Article 5 of the CPC. After all, it is entirely based on the principle of trust expectation that all subjects act in accordance with existing and valid rules. Thus, even if the appellant claims that such conduct was the result of an error, a claim that has not been satisfactorily proven, but which is taken as a premise for the purposes of argumentation, in the present case, it would be, at the very least, an inexcusable, gross error resulting from serious misconduct, ruling out the possibility of proceeding without any implications in this judicial field, so as not to allow any hesitation in considering the aforementioned conduct as being clearly litigious in bad faith. Thus, as a result of having acted in a manifestly reckless manner (Art. 80, V of the CPC), I condemn the appellant for litigation in bad faith, and he must pay the fine set at 1% of the value of the case (Art. 81 of the CPC), in accordance with the grounds" (Translation by DeepL.) |
||||||||
Chen v. Vana et al. | Haifa Magistrate's Court (Israel) | 8 June 2025 | Lawyer | Unidentified | Fabricated legal citations | Appeal was ultimately dismissed on merits; Monetary sanction | 3000 ILS | |
The appellant, a lawyer representing himself in an appeal, submitted a brief that included non-existent legal citations. An employee in the lawyer's office had used an AI system to assist in drafting the document. After the opposing counsel and the court itself were unable to locate the cited precedents, the lawyer admitted they were AI-generated and a "good-faith mistake" for which he took full responsibility. While citing its authority to dismiss the appeal entirely due to the submission of non-existent sources, the court chose a "softer" sanction. It proceeded to hear the case, ultimately dismissing the appeal on its merits and imposing a separate monetary penalty payable to the State Treasury for the misconduct. |
||||||||
Ayinde v. Haringey & Al-Haroun v. QNB | High Court (UK) | 6 June 2025 | Lawyer | Unidentified | Fabricated citations (including a citation attributed to the judge herself) | No contempt, but referral to professional bodies | — | |
This judgment, delivered on 6 June 2025 by the Divisional Court of the King's Bench Division, addresses two cases referred under the court's Hamid jurisdiction, which concerns the court's power to enforce duties lawyers owe to the court. Both cases involve lawyers submitting written arguments or evidence containing false information, specifically non-existent case citations, generated through the use of artificial intelligence without proper verification. The Court used this opportunity to issue broader guidance on the use of AI in legal practice, raising concerns about the competence, training, and supervision of lawyers. |
||||||||
Goins v. Father Flanagan's Boys Home | D. Nebraska (USA) | 5 June 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresented authorities | Warning | — | |
" This Court's local rules permit the use of generative artificial intelligence programs, but all parties, including pro se parties, must certify “that to the extent such a program was used, a human signatory of the document verified the accuracy of all generated text, including all citations and legal authority,” NECivR 7.1(d)(4)(B). The plaintiff's brief contains no such certification, nor does the plaintiff deny using artificial intelligence. See filing 27 at 9.
|
||||||||
Lipe v. Albuquerque Public Schools | D. New Mexico (USA) | 4 June 2025 | Lawyer | Implied | Fabricated Citations | Show cause proceedings | — | |
The court noted that Counsel was still citing fabricated authorities, even though show cause proceedings were ongoing in parallel. |
||||||||
Powhatan County School Board v. Skinger et al | E.D. Virginia (USA) | 2 June 2025 | Pro Se Litigant | ChatGPT | Fabricated citations | Relevant motions stricken | — | |
"The pervasive misrepresentations of the law in Lucas' filings cannot be tolerated. It serves to make a mockery of the judicial process. It causes an enormous waste of judicial resources to try to find cited cases that do not exist and to determine whether a cited authority is relevant or binding, only to determine that most are neither. In like fashion, Lucas' adversaries also must run to ground the nonexistent cases or address patently irrelevant ones. The adversaries must thus incur needless legal fees and expenses caused by Lucas' pervasive citations to nonexistent or irrelevant cases. [...] However, as previously noted Lucas appears to be judgment proof so monetary sanctions likely will not deter her from the abusive practices reflected in her filings and in her previously announced, consistently followed, abuse of the litigation proceedings created by the Individuals with Disabilities Education Act, 20 U.S.C. § 1400, et seq. (“IDEA”). So, the Court must find some other way to protect the interests of justice and to deter Lucas from the abuses which have come to mark her approach to participation as a defendant in the judicial process. In this case, the most appropriate remedy is to strike Lucas' filings where they are burdensome by virtue of volume and exceed permitted page limits, where they are not cogent or understandable (when given the generous latitude afforded pro se litigants), and where they misrepresent the law by citing nonexistent or utterly irrelevant cases." In a subsequent Opinion, the court declined to reconsider or review its findings, pointing out that: "To begin, it is unclear what Lucas means by "contested" citations. The citations that the Court found to not exist are not "contested." They simply do not exist. 
There is no contesting that fact because the Court checked each citation that was referenced in its MEMORANDUM OPINION, exactly as Lucas cited them (and through other research means), and could not find any citation that matched what Lucas cited. That research demonstrates that the Court's findings are, in fact, supported rather than "[u]nsupported." Id. Then, in no way did the Court "wrongly assume[]" that these citations to nonexistent legal authority were "'fabricated' due to the use of generative AI." Id. The Court meticulously checked every citation that it held did not exist in those decisions. Those decisions were not based on "assumptions" but, instead, on the fact that either (1) no case existed under the reporter citation, case name, or quotation that Lucas used, or (2) a case with the reporter citation did exist but was to an entirely different case than the one cited by Lucas and had no relevancy to the issues of this case. ECF No. 170, at 520. And, there was no incorrect assumption that those nonexistent legal authorities were generated, hallucinated, or fabricated by AI because Lucas admitted, on the record, to using AI when writing her filings with the Court. The fact that her citations to nonexistent legal authority are so pervasive, in volume and in location throughout her filings, can lead to only one plausible conclusion: that an AI program hallucinated them in an effort to meet whatever Lucas' desired outcome was based on the prompt that she put into the AI program. As the Court described in its MEMORANDUM OPINION, this is becoming an alarmingly prevalent occurrence common to AI programs. Id. at 23-26. It is exceedingly clear that it occurred here. [...] The MOTION also complains that the Court did not give Lucas an "opportunity to verify or correct citations." Id. 
Wholly apart from the fact that it is the litigant's (pro se or represented) burden to verify citations, there is no reason to have accorded Lucas the opportunity to verify because the problem was extensive and pervasive across at least six filings. Moreover, the Court actually did what should have been done before the MOTION was filed by determining that those citations do not exist. No further verification is necessary. And, after a diligent search, if the Court could not find the legal authorities that Lucas purported to rely upon and present as real and binding, it is a folly to believe that Lucas' efforts at "correction" would have returned anything different. Further, she could have taken the opportunity in this MOTION to go through—citation by citation—and "verify" or "correct" them to demonstrate to the Court that its findings were, in fact, incorrect, rather than just baldly and without evidence claiming them to be so. She did not do that." |
||||||||
Ivins v KMA Consulting Engineers & Ors | Queensland Industrial Relations Commission (Australia) | 2 June 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Relevant Submissions ignored | — | |
The Complainant, who was self-represented, used artificial intelligence to assist in preparing her submissions, which included fictitious case citations. The Commission noted the potential seriousness of relying on fabricated citations but did not impose any sanctions on the Complainant, as she was self-represented. The judge held: "In relation to the issue of the Complainant's reference to what appears to be fictitious case authorities, this is potentially a serious matter because it can be viewed as an attempt to mislead the Commission. If an admitted legal practitioner were to do this, there would be grounds to refer the practitioner to the Legal Services Commission for an allegation of misconduct. [...] Given that the Complainant is self-represented, I intend to take the same approach that I adopted in Goodchild and simply afford that part of the Complainant's submissions that deals with the two authorities no weight in determining the two applications. " |
||||||||
Ploni v. Wasserman et al. | Small Claims Court (Israel) | 1 June 2025 | Pro Se Litigant | ChatGPT; Google Search | Two Fabricated Citations | Monetary Fine | 250 ILS | |
" Directing the Court to nonexistent authorities wastes the Court’s time, a resource meant to serve the public, not be monopolized by a single litigant making baseless arguments. " |
||||||||
Andersen v. Olympus as Daybreak | D. Utah (USA) | 30 May 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresentation of past cases | Warning | — | |
In an earlier decision, the court had already warned the plaintiff against "any further legal misrepresentations in future communications". |
||||||||
Delano Crossing v. County of Wright | Minnesota Tax Court (USA) | 29 May 2025 | Lawyer | Unidentified | Five fabricated citations | Breach of Rule 11, but no monetary sanction warranted; referred counsel to Lawyers Professional Responsibility Board | — | |
AI Use: Attorneys for Wright County submitted a memorandum in support of a motion for summary judgment that contained five case citations generated by artificial intelligence; these citations did not refer to actual judicial decisions. Much of the brief appeared to be AI-written. The attorney who signed and filed the brief acknowledged that the cited authorities did not exist and that much of the brief was drafted by AI. Ruling/Sanction: The Court found Counsel's conduct violated Rule 11.02(b) of the Minnesota Rules of Civil Procedure, as fake case citations cannot support any legal claim and there is an affirmative duty to investigate the legal underpinnings of a pleading. The Court found no merit in Counsel's defense, noting that the substitute cases she offered did not support the legal contentions in the brief, and the brief demonstrated a fundamental misunderstanding of legal standards. The Court did not find her insinuation that another, accurate motion document existed to be credible. Although the Court considered summarily denying the County's motion as a sanction, it ultimately denied the motion on its merits in a concurrent order because the arguments were so clearly incorrect. The Court declined to order further monetary sanctions, believing its Order to Show Cause and the current Order on Sanctions were sufficient to deter Counsel from relying solely on AI for case citations or legal conclusions in the future. However, the Court referred the matter concerning Counsel's conduct to the Minnesota Lawyers Professional Responsibility Board for further review, as the submission of an AI-generated brief with fake citations raised questions regarding her honesty, trustworthiness, and fitness as a lawyer. |
||||||||
Anita Krishnakumar et al. v. Eichler Swim and Tennis Club | CA SC (USA) | 29 May 2025 | Lawyer | Implied | Fabricated citation and quotes | Argument lost on the merits in tentative ruling | — | |
The underlying motion was later withdrawn, with the result that the tentative ruling was not adopted. |
||||||||
Mid Cent. Operating Eng'rs Health v. Hoosiervac | S.D. Ind. (USA) | 28 May 2025 | Lawyer | Unidentified | 3 fake case citations | Monetary Sanction | 6000 USD | |
(Earlier report and recommendation can be found here.) AI Use: Counsel admitted at a show cause hearing that he used generative AI tools to draft multiple briefs and did not verify the citations provided by the AI, mistakenly trusting their apparent credibility without checking. Hallucination Details: Three distinct fake cases across filings. Each was cited in a separate brief, with no attempt at Shepardizing or KeyCiting. Ruling/Sanction: The Court recommended a $15,000 sanction ($5,000 per violation), with the matter referred to the Chief Judge for potential additional professional discipline. Counsel was also ordered to notify Hoosiervac LLC’s CEO of the misconduct and file a certification of compliance. Eventually, the court fined Counsel $6,000, stressing that this was sufficient. Key Judicial Reasoning: The judge stressed that "It is one thing to use AI to assist with initial research, and even nonlegal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney. As noted in the case of an expert witness, an individual's "citation to fake, AI-generated sources . . . shatters his credibility." See Kohls v. Ellison, No. 0:24-cv-03754-LMP-DLM, Doc. 46 at *10 (D. Minn. Jan. 10, 2025)." |
||||||||
Ko v. Li | Ontario SCJ (Canada) | 28 May 2025 | Lawyer | ChatGPT | At least 3 fabricated citations and misstatements of law with misleading hyperlinks | Plaintiff’s application dismissed; no costs imposed; court warns against future use of generative AI without verification | — | |
(Order to show cause is here.) Hallucination Details: The applicant’s factum included citations to:
The judge noted these citations bore “hallmarks of an AI response” and described the conduct as possibly involving “hallucinations” from generative AI. The court ordered counsel to appear to explain whether she knowingly relied on AI and failed to verify the content. No clarification or correction was received from counsel after the hearing. Ruling/Sanction: At the end of the show cause proceedings, Justice Myers noted that, due to the media reports about this case, the goals of any further contempt proceedings were already met, including: "maintaining the dignity of the court and the fairness of civil justice system, promoting honourable behaviour by counsel before the court, denouncing serious misconduct, deterring similar future misconduct by the legal profession, the public generally, and by Ms. Lee specifically, and rehabilitation". The judge therefore declined to impose a fine or to continue the contempt proceedings, on the condition that Counsel undertake Continuing Professional Development courses (as she said she would) and not bill her client for any unrelated work (a condition eased by the fact that she had so far been working pro bono). |
||||||||
Brick v. Gallatin County | D. Montana (USA) | 27 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
GNX v. Children's Guardian | NSW (Australia) | 27 May 2025 | Pro Se Litigant | ChatGPT | One misrepresented precedent | Warning for continuation of proceedings | — | |
After the Applicant confessed to having relied on ChatGPT for the written phase of the proceedings, the hearing was adjourned and the Court "cautioned the Applicant on relying on ChatGPT for legal advice and suggested that the Applicant may wish to seek legal advice from a lawyer about his application." The court ultimately found that one of the authorities illustrated "the risk in relying on ChatGPT and Generative AI for legal advice. The Applicant’s description of the decision in this case in his submissions, for which he used ChatGPT to prepare, is clearly wrong." |
||||||||
So-and-so v. Anonymous | Israel (Israel) | 26 May 2025 | Lawyer | Implied | Fabricated citations | AI use was noted by the lower court; no specific sanction for it | — |
The Family Court noted that one motion cited case law that does "not exist at all". This raised "concern about uncontrolled use of artificial intelligence technology," referencing recent Supreme Court guidance on the need for an appropriate judicial response to such instances. On appeal, the District Court acknowledged the Family Court's finding regarding the non-existent case law and the suspicion of AI use. However, like the Family Court, it did not impose a separate sanction for this, as the appeal was dismissed primarily on the grounds of the delay and lack of merit concerning the protocol correction itself. |
||||||||
R. v. Chand | Ontario (Canada) | 26 May 2025 | Lawyer | Implied | Fabricated citations, and misrepresented authorities | Warning and Directions for Remainder of case | — | |
Mahala Association (מהל"ה) v. Clalit Health Services et al. | Israel | 26 May 2025 | Lawyer | Tachdin.AI | Multiple non-existent Supreme Court and District Court decisions, misattributed quotations, and fictitious citations | Class action petition struck from the record; finding that Counsel was not fit to act in this case; Monetary sanctions | 50000 ILS | |
AI Use: Counsel admitted that incorrect citations arose from reliance on an AI-enabled database called “Takdin AI.” The tool generated incorrect references to multiple Supreme Court decisions and falsely cited them as supporting key propositions. Counsel claimed the errors stemmed from time pressure and good faith, but the Court found the explanation inadequate. Hallucination Details: At least 8 citations were found to be fictitious or unrelated to the argument, including:
The hallucinated citations were used in response to motions to dismiss and as the basis for substantive legal claims in the class certification request. Ruling/Sanction: The Court:
Key Judicial Reasoning: The Court emphasized that the inclusion of hallucinated sources—regardless of intent—subverted proper legal process. Citations must be verified, and AI does not absolve attorneys from professional responsibility. The systemic risks posed by hallucinated filings necessitate a firm response going forward. |
||||||||
Vechtel et al. v. Gershoni | Israel (Israel) | 25 May 2025 | Pro Se Litigant | Implied | Fictitious quotation attributed to the judge's own previous judgments | Motion denied | — |
The judge pointed out that the plaintiffs' written request included what appeared to be a direct quote from one of her own previous judgments. However, upon examination, she found that not only did this quote not exist in the cited judgment, but the judgment itself did not even address the legal question at stake. |
||||||||
Luther v. Oklahoma DHS | W.D. Oklahoma (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
" The Court has serious reason to believe that Plaintiff used artificial intelligence tools to assist in drafting her objection. While the use of such tools is not prohibited, artificial intelligence often cites to legal authorities, like Cabrera, that do not exist. Continuing to cite to non-existent cases will result in sanctions up to and including dismissal. " |
||||||||
Rotonde v. Stewart Title Insurance Company | New York (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
Concord v. Anthropic | N.D. California (USA) | 23 May 2025 | Expert | Claude.ai | Fabricated attribution and title for (existing) article | Part of brief was struck; court took it into account as a matter of expert credibility | — | |
Counsel's explanation of what happened can be found here. |
||||||||
Source: Volokh | ||||||||
Zherka v. Davey et al. | Massachusetts (USA) | 22 May 2025 | Pro Se Litigant | Unidentified | Fabricated citations | Motion struck, with leave to refile | — | |
After being ordered to show cause, plaintiff admitted having used AI for several filings. |
||||||||
Garner v. Kadince | Utah C.A. (USA) | 22 May 2025 | Lawyer | ChatGPT | Fabricated Legal Authorities | Monetary sanctions | 1000 USD | |
AI Use: The fabricated citations originated from a ChatGPT query submitted by an unlicensed law clerk at Petitioner's law firm. Neither Counsel reviewed the petition’s contents before filing. The firm had no AI use policy in place at the time, though it implemented one after the order to show cause was issued. Hallucination Details: Chief among the hallucinations was Royer v. Nelson, which Respondents demonstrated existed only in ChatGPT’s output and in no official database. Other cited cases were also inapposite or unverifiable. Petitioner’s counsel admitted fault and stated they were unaware AI had been used during drafting. Ruling/Sanction: The court issued three targeted sanctions:
Key Judicial Reasoning: The panel (Per Curiam) emphasized that the conduct, while not malicious, still diverted judicial resources and imposed unnecessary burdens on the opposing party. Unlike Mata or Hayes, the attorneys in this case quickly admitted the issue and cooperated, which the court acknowledged. Nonetheless, the submission of fabricated law—especially under counsel's signature—breaches core duties of candor and verification, warranting formal sanctions. The court warned that Utah’s judiciary cannot be expected to verify every citation and must be able to trust lawyers to do so. |
||||||||
Evans et al v. Robertson et al | E.D. Michigan (USA) | 21 May 2025 | Pro Se Litigant | Implied | Non-existent or misrepresented cases | Warning | — | |
Happiness Idehen & Felix Ogieva v. Gloria Stoute-Phillip | N.Y. Civil Court (USA) | 21 May 2025 | Lawyer | Implied | At least 7 fabricated citations | Show cause proceedings that might lead to sanctions | — | |
Bauche v. Commissioner of Internal Revenue | US Tax Court (USA) | 20 May 2025 | Pro Se Litigant | Implied | Nonexistent cases | Warning | — | |
" While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future. " |
||||||||
Robert Lafayette v. Alex Abrami et al | Vermont SC (USA) | 20 May 2025 | Pro Se Litigant | Implied | Fictitious citations | Order to show cause | — | |
Versant Funding v. Teras Breakbulk Ocean Navigation Enterprises | S.D. Florida (USA) | 20 May 2025 | Lawyer | Unidentified | 1 fabricated citation | Joint and several liability for Plaintiff’s attorneys' fees and costs incurred in addressing the hallucinated citation; CLE requirement on AI ethics; Monetary fines | 1500 USD | |
AI Use: First Counsel, who had not previously used AI for legal work, used an unspecified AI tool to assist with drafting a response. He failed to verify the citation before submission. Second Counsel, as local counsel, filed the response without checking its content or accuracy, even though he signed the document. Second Counsel then said that he had initiated "procedural safeguards to prevent this error from happening again by ensuring he, and local counsel, undertake a comprehensive review of all citations and arguments filed with this and every court prior to submission to ensure their provenance can be traced to professional non-AI sources." Hallucination Details: The hallucinated case was cited as controlling Delaware authority on privilege assignments. When challenged by Plaintiff, Defendants initially filed a bare withdrawal without explanation. Only upon court order did they disclose the AI origin and acknowledge the error. Counsel personally apologized to the court and opposing counsel. Ruling/Sanction: Judge William Matthewman imposed a multi-part sanction:
The Court emphasized that the submission of hallucinated citations—particularly when filed and signed by two attorneys—constitutes reckless disregard for procedural and ethical obligations. Though no bad faith was found, the conduct was sanctionable under Rule 11, § 1927, the Court’s inherent authority, and local professional responsibility rules. Key Judicial Reasoning: The Court distinguished this case from more egregious incidents (O’Brien v. Flick, Thomas v. Pangburn) because the attorneys admitted their error and did not lie or attempt to cover it up. However, the delay in correction and the failure to check the citation in the first place were serious enough to warrant monetary penalties and educational obligations. |
||||||||
Ehrlich v. Israel National Academy of Sciences et al. | Israel (Israel) | 18 May 2025 | Pro Se Litigant | Unidentified | Fake citations | Request dismissed on the merits, monetary sanction | 500 ILS | |
Applicant sought an administrative court order to force the Israel National Academy of Sciences to let him speak at a conference on "Artificial Intelligence and Research: Uses, Prospects, Dangers". This was dismissed, with the court adding: "I will add this: As mentioned above, the subject of the conference where the applicant wishes to speak concerns, among other things, the dangers of artificial intelligence. Indeed, one of these dangers materialized in the applicant's request: He, who is not represented, stated clearly and fairly that he used artificial intelligence for his request. An examination of the request shows that it consequently suffered from 'AI hallucinations' – it mentioned many "judgments" that never came into existence (Regarding this problem, see: HCJ 38379-12-24 Anonymous v. The Sharia Court of Appeals Jerusalem, paragraphs 13-12 (23.2.2025) (hereinafter: the Anonymous matter); HCJ 23602-01-25 The Association for the Advancement of Dog Rights v. The Minister of Agriculture, paragraphs 12-11 (28.2.2025) (hereinafter: the Association matter); and regarding the mentioned problem and the possibility of participating in the conference, see: Babylonian Talmud, Gittin 43a). Just recently, this Court warned, in no uncertain terms, that alongside the blessings of artificial intelligence, one must take excellent care against its pitfalls; 'Eat its inside, throw away its peel' (Anonymous matter, paragraph 26; Association matter, paragraph 19). The applicant did state, clearly, that he used artificial intelligence, and in light of this, he further requested that if a 'technical' error occurred under his hand – it should be seen as a good-faith mistake, not to be held against him. I cannot accept such a request. It does not cure the problems of hallucinating artificial intelligence.
Those addressing this Court, represented and unrepresented alike, bear the burden of examining whether the precedents they refer to - which are not a 'technical' matter, but rather the beating heart of the pleadings - indeed exist, and substantiate their claims. For this reason too, the request must be dismissed" (Translation by Gemini 2.5). |
||||||||
Keaau Development Partnership LLC v. Lawrence | Hawaii ICA (USA) | 15 May 2025 | Lawyer | Implied | One non-existent case with misattributed pinpoint citations from unrelated real cases | Monetary sanction against counsel personally; no disciplinary referral | 100 USD | |
AI Use: Counsel filed a motion to dismiss the appeal that cited “Greenspan v. Greenspan, 121 Hawai‘i 60, 71, 214 P.3d 557, 568 (App. 2009).” The court found that:
Ruling/Sanction:
The amount reflects counsel’s candor and corrective measures, but the court noted that federal courts have imposed higher sanctions in similar cases. |
||||||||
Beenshoof v. Chin | W.D. Washington (USA) | 15 May 2025 | Pro Se Litigant | Implied | One non-existent case | No sanction imposed; court reminded Plaintiff of Rule 11 obligations | — |
AI Use: The plaintiff, proceeding pro se, cited “Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)” in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here. |
||||||||
USA v. Burke | M.D. Florida (USA) | 15 May 2025 | Lawyer | Westlaw's AI tools, GPT4.5 Deep Research (Pro) | Multiple fake citations and misquotations | Motion dismissed, and plaintiff ordered to refile it without fake citations. | — | |
Counsel later explained how the motion came to be: see here. |
||||||||
Fox v. Assum | Israel (Israel) | 14 May 2025 | Lawyer | Unidentified | Reference to a seemingly fictitious judgment | No formal sanction; request by court for explanation; partial costs awarded against the defendant | 1200 ILS | |
AI Use: In a filing related to a third-party notice, the defendant cited a judgment that did not exist. The judge clarified that this was not simply a mistaken citation or party confusion, but rather a reference to an entirely fictional judgment. The court explicitly stated: “It is not clear how such an error occurs, except through the use of artificial intelligence.” Ruling/Sanction: The court permitted the defendant to proceed with the third-party notice but ordered partial costs (₪1,200) to be paid to the plaintiff due to procedural irregularities. The judge demanded a formal explanation of how the fictitious citation was introduced, in order to prevent recurrence. Key Judicial Reasoning: While the procedural error did not warrant barring the defendant’s claim against a third party, the court emphasized that referencing a fictional legal source is a serious issue requiring scrutiny. The opinion signals a growing judicial intolerance for unverified AI-assisted legal drafting in Israeli courts. |
||||||||
Bandla v. Solicitors Regulation Authority | UK (UK) | 13 May 2025 | Pro Se Litigant | Google Search (Allegedly) | At least 25 fabricated or non-existent case citations | Application for extension of time refused; appeal struck out as abuse of process; indemnity costs of £24,727.20 ordered; permission to appeal denied | 24727 GBP | |
AI Use: Bandla denied using AI, claiming instead to have relied on Google searches to locate “supportive” case law. He admitted that he did not verify any of the citations and never checked them against official sources. The court found this unacceptable, particularly from someone formerly admitted as a solicitor. Hallucination Details: Bandla’s submissions cited at least 27 cases which the Solicitors Regulation Authority (SRA) could not locate. Bandla maintained summaries and quotations from these cases in formal submissions. When pressed in court, he admitted having never read the judgments, let alone verified their existence. Ruling/Sanction: The High Court refused the application for an extension of time, finding Bandla’s explanations inconsistent and unreliable. The court independently struck out the appeal on grounds of abuse of process due to the submission of fake authority. It imposed indemnity costs of £24,727.20. The judge emphasized that even after being alerted to the fictitious nature of the cases, Bandla neither withdrew nor corrected them. Key Judicial Reasoning: The court found Bandla’s conduct deeply troubling, noting his previous experience as a solicitor and his professed commitment to legal standards. It held that the deliberate or grossly negligent inclusion of fake case law—especially in an attempt to challenge a disciplinary disbarment—was an abuse requiring strong institutional response. |
||||||||
Source: Natural & Artificial Intelligence in Law | ||||||||
Ramirez v. Humala | E.D.N.Y. (USA) | 13 May 2025 | Paralegal | Unidentified | Four fabricated federal and state case citations | Monetary sanction jointly imposed on counsel and firm; order to inform client | 1000 USD | |
AI Use: A paralegal used public search tools and unspecified “AI-based research assistants” to generate legal citations. The resulting hallucinated cases were passed to Counsel, who filed them without verification. Four out of eight cited cases were found to be fictitious:
Ruling/Sanction: The court imposed a $1,000 sanction against Counsel and her firm. Counsel was ordered to serve the sanction order on her client and file proof of service. The court declined harsher penalties, crediting her swift admission, apology, and internal reforms. Key Judicial Reasoning: The court found subjective bad faith due to the complete absence of verification. It cited a range of other AI-related sanction decisions, underscoring that even outsourcing to a “diligent and trusted” paralegal is not a defense when due diligence is absent. |
||||||||
Source: Volokh | ||||||||
Jamisson Roriz de Santana Andrade v. Tribunal Superior do Trabalho | Bahia (Brazil) | 12 May 2025 | Lawyer | Implied use of MobiOffice's AI Assistant | Multiple fictitious citations to non-existent constitutional precedents | Case summarily dismissed; Counsel referred to the Bar Association; Claimant ordered to pay double the costs | — | |
AI Use: The petition’s pages were marked “Criado com MobiOffice” (“Created with MobiOffice”). The STF verified that MobiOffice includes a built-in AI writing assistant. Combined with the inclusion of fictitious citations, this led the Court to conclude that AI had been used and not reviewed. The judge characterized this as reckless conduct. Hallucination Details:
Ruling/Sanction:
|
||||||||
Newbern v. Desoto County School District et al. | N.D. Mississippi (USA) | 12 May 2025 | Pro Se Litigant | Implied | Fabricated case law | Case dismissed, in part as a sanction for fabrication of legal authorities | — | |
AI Use: The court found that several of the cases cited by the plaintiff in her briefing opposing Officer Hill’s qualified immunity defense did not exist. Although Newbern suggested the citations may have been innocent mistakes, she did not challenge the finding of fabrication. No AI tool was admitted or named, but the structure and specificity of the invented cases strongly suggest generative AI use. Hallucination Details: The fabricated authorities were not background references, but “key authorities” cited to establish that Hill’s alleged conduct violated clearly established law. The court observed that the fake cases initially appeared to be unusually on-point compared to the rest of plaintiff’s citations, which raised suspicion. Upon scrutiny, it confirmed they did not exist. Ruling/Sanction: The court dismissed the federal claims against Officer Hill as a partial sanction for plaintiff’s fabrication of legal authority and failure to meet the burden under qualified immunity. However, it declined to dismiss the entire case, citing the interest of the minor child involved and the relevance of potential state law claims. It permitted discovery to proceed on those claims to determine whether Officer Hill acted with malice or engaged in other conduct falling outside the scope of Mississippi Tort Claims Act immunity. Key Judicial Reasoning: The court found that plaintiff’s citation of fictitious cases undermined her effort to meet the demanding “clearly established” standard. It rejected her claim that the fabrication was an innocent mistake and viewed it in light of her broader litigation conduct, which included excessive filings and disregard for procedural limits. |
||||||||
Crypto Open Patent Alliance v. Wright (2) | UK (UK) | 12 May 2025 | Pro Se Litigant | Implied | Use of AI to generate verbose/repetitive documents; AI-generated false/non-existent legal citations | General Civil Restraint Order (GCRO) granted for 3 years; Case referred to Attorney General; Costs awarded to applicants. | 100000 GBP | |
AI Use: Dr. Wright, after beginning to represent himself, repeatedly used AI engines (such as ChatGPT or similar) to generate legal documents. These documents were characterized by the court as "highly verbose and repetitious" and full of "legal nonsense". This use of AI contributed to filings containing numerous false references to authority and misrepresentations of existing law. Hallucination Details: While the core issue in Dr. Wright's litigation was his fundamental dishonesty (claiming to be Satoshi Nakamoto based on "lies and ... elaborately forged documents"), the use of AI introduced specific problems. His appeal documents, bearing signs of AI creation, contained "numerous false references to authority". His later submissions also involved "citation of non-existent authorities". This AI-driven production of flawed legal arguments formed part of his broader pattern of disrespect for court rules and process. Ruling/Sanction: Mr Justice Mellor granted a General Civil Restraint Order (GCRO) against Dr. Wright for a three-year period. He found that an Extended CRO (ECRO) would be insufficient given the scope and persistence of Dr. Wright's abusive litigation. The court also referred Dr. Wright's conduct to the Attorney General for consideration of a civil proceedings order under s.42 of the Senior Courts Act 1981. Dr. Wright was ordered to pay the applicants' costs for the CRO application, summarily assessed at £100,000. Key Judicial Reasoning: The court found "overwhelming" evidence that Dr. Wright had persistently brought claims that were Totally Without Merit (TWM), numbering far more than the required threshold. This conduct involved extensive lies and forgeries across multiple jurisdictions and targeted individuals who often lacked the resources to defend themselves. The judge concluded there was a "very significant risk" that Dr. Wright would continue this abusive conduct unless restrained.
The court noted his consistent contempt for court rules and processes, including his perjury, forgery, breach of orders, and flawed submissions (including those using AI). A GCRO was deemed just and proportionate to protect both potential future litigants and the finite resources of the court system. |
||||||||
Case No. 72079-11-24 | Israel (Israel) | 12 May 2025 | Lawyer | Unidentified | Multiple fictitious court rulings | No immediate sanction imposed; the matter was referred to the Legal Department of the Court Administration for review and potential action, including referral to the Ethics Committee of the Israel Bar Association | — | |
AI Use: Counsel explained that the hallucinated citations were included in a draft intended for personal legal research and learning, which was mistakenly filed with the court. This constituted an implicit admission that generative AI tools were involved. Ruling/Sanction: While Judge Itay Katz did not impose personal costs, he referred the matter to the Legal Department of the Judicial Authority to determine whether further steps—including referral to the Israel Bar Association Ethics Committee—should be taken. The court emphasized this was done as a gesture of leniency, with the hope that such behavior will not recur. Key Judicial Reasoning: The court referred to several other recent Israeli cases to underscore the growing recognition of AI hallucination risk in legal practice. It reiterated the requirement for attorneys to meticulously verify any citation before submission and warned that future similar instances may not receive such lenient treatment. |
||||||||
In re Thomas Grant Neusom | M.D. Florida (USA) | 8 May 2025 | Lawyer | Unidentified | Multiple fictitious or misrepresented case citations | Suspension from practice before the Middle District of Florida for one year; immediate prohibition on accepting new federal matters; conditional reinstatement | — | |
(Grievance Committee Report available here.) AI Use: Neusom told the grievance committee that he “may have used artificial intelligence” in preparing filings, and that any hallucinated cases were not deliberately fabricated but may have come from AI tools. The filings in question included a notice of removal and a motion for summary judgment. The judge later noted a pattern of citations inconsistent with established case law and unsupported by known databases. Hallucination Details: Citations included cases that either did not exist or were grossly mischaracterized. Notably:
Neusom failed to produce the full texts of the cited cases when requested and instead filed a 721-page exhibit in violation of court orders. Ruling/SanctionThe court adopted the grievance committee’s recommendation and imposed a one-year suspension. Neusom is prohibited from accepting new federal cases in the Middle District of Florida during the suspension and must:
Key Judicial Reasoning: The court found that Neusom violated Rules 4-1.3, 4-3.3(a)(3), 4-3.4(c), and 4-8.4(c) of the Florida Rules of Professional Conduct. His failure to verify AI-generated content, compounded by noncompliance with orders and false statements to opposing counsel, demonstrated a pattern of recklessness and dishonesty. The court emphasized that federal proceedings require a high standard of diligence and that invoking AI cannot excuse failure to meet professional obligations. |
||||||||
Source: Natural & Artificial Intelligence in Law | ||||||||
Israel v. Ibrahim Mahajneh | Israel (Israel) | 7 May 2025 | Prosecutor | Unidentified | Citation of a completely fictitious Israeli statute | No sanction imposed; judge criticized the error as a “disgrace”; granted partial relief to applicant | — | |
AI Use: In opposing the return of a seized mobile phone, the prosecution cited a non-existent statutory provision allegedly defining what qualifies as an “institutional computer.” The judge identified the law as fictional and attributed its creation to generative AI, noting that it does not appear in any legal database or government source. The court referred to this as a product “created by artificial intelligence.” Hallucination Details: The prosecution cited a statute regarding institutional computer definitions which, upon investigation, did not exist in Israeli law. The judge conducted internet and database searches to confirm its nonexistence. The judge criticized the error, remarking: “If I thought I had seen everything in 30 years on the bench, I was mistaken.” Ruling/Sanction: The judge declined to sanction the prosecution but strongly rebuked the conduct, calling it embarrassing and improper. Key Judicial Reasoning: The judge stressed that citing phantom laws undermines public confidence and judicial efficiency. Even absent malice, reliance on fictitious AI-generated legal references is unacceptable. The judgment did not penalize the prosecution but underscored the need for due diligence and warned of reputational damage. |
||||||||
Matter of Raven Investigations & Security Consulting | GAO (USA) | 7 May 2025 | Pro Se Litigant | Unidentified | Multiple fabricated citations to prior GAO decisions | Warning | — | |
AI Use: GAO requested clarification after identifying case citation irregularities. The protester confirmed that their representative was not a licensed attorney and had relied on a combination of public tools, AI-based platforms, and secondary summaries, which produced fabricated or misattributed citations. Hallucination Details: Examples included:
The fabrications mirrored patterns typical of AI hallucinations. Ruling/Sanction: Although the protest was dismissed as academic, GAO addressed the citation misconduct. It did not impose sanctions in this case but warned that future submission of non-existent authority could lead to formal disciplinary action—including dismissal, cost orders, and bar referrals (in the case of attorneys). |
||||||||
Harris v. Take-Two Interactive Software | D. Colorado (USA) | 6 May 2025 | Pro Se Litigant | Implied | Fabricated case law and quotations | Warning | — | |
The court held that: "The use of fictitious quotes or cases in filings may subject a party, including a pro se party, to sanctions pursuant to Federal Rule of Civil Procedure 11 as “pro se litigants are subject to Rule 11 just as attorneys are.”" |
||||||||
Rotonde v. Stewart Title Insurance Co | NY SC (USA) | 6 May 2025 | Pro Se Litigant | Implied | Several non-existent legal citations | Motion to dismiss granted in full; no sanction imposed, but court formally warned plaintiff | — | |
AI Use: The court observed that “some of the cases that plaintiff cites… do not exist,” and noted it had “tried, in vain,” to find them. While no explicit AI use was admitted by the plaintiff, the pattern and specificity of the fabricated citations are characteristic of LLM-generated hallucinations. Ruling/Sanction: The court dismissed all five causes of action—including negligence, tortious interference, aiding and abetting fraud, declaratory judgment, and breach of the implied covenant of good faith and fair dealing—as either untimely or duplicative/deficient on the merits. It declined to impose sanctions but explicitly invoked Dowlah v. Professional Staff Congress, 227 AD3d 609 (1st Dept. 2024), and Will of Samuel, 82 Misc 3d 616 (Sur. Ct. 2024), to warn plaintiff that any future citation of fictitious cases would result in sanctions. Key Judicial Reasoning: Justice Jamieson noted that while the court is “sensitive to plaintiff's pro se status,” that does not excuse disregard of procedural rules or the submission of fictitious citations. The court emphasized that its prior decision in related litigation in 2022 undermined plaintiff’s tolling claims, and that Executive Order extensions during the COVID-19 pandemic did not rescue otherwise-expired claims. The hallucinated citations failed to salvage plaintiff’s fraud and tolling theories, and their use was treated as an aggravating—though not yet sanctionable—factor. |
||||||||
X v. Board of Trustees of Governors State University | N.D. Illinois (USA) | 6 May 2025 | Pro Se Litigant | Implied | One fabricated citation | Warning | — | |
"For that principal [sic] [X] cites a case, Gunn v. McKinney, 259 F.3d 824, 829 (7th Cir. 2001), which neither defense counsel nor the Court has been able to locate. The Court reminds [X] that Federal Rule of Civil Procedure 11 applies to pro se litigants, and sanctions may result from such conduct, especially if the citation to Gunn was not merely a typographical or citation error but instead referred to a non-existent case. By presenting a pleading, written motion, or other paper to the Court, an unrepresented party acknowledges they will be held responsible for its contents. See Fed. R. Civ. P. 11(b)." |
||||||||
Lacey v. State Farm General Insurance | C.D. Cal (USA) | 6 May 2025 | Lawyer | CoCounsel, Westlaw Precision, Google Gemini | Nine citations incorrect or fabricated; multiple invented quotations from real or fictitious cases | Striking of briefs; denial of requested discovery relief; Large monetary sanctions jointly and severally against the two law firms | 31100 USD | |
AI Use: Counsel used CoCounsel, Westlaw’s AI tools, and Google Gemini to generate a legal outline for a discovery-related supplemental brief. The outline contained hallucinated citations and quotations, which were incorporated into the filed brief by colleagues at both Ellis George and K&L Gates. No one verified the content before filing. After the Special Master flagged two issues, counsel refiled a revised brief—but it still included six AI-generated hallucinations and did not disclose AI use until ordered to respond. Hallucination Details: At least two cases did not exist at all, including a fabricated quotation attributed to Booth v. Allstate Ins. Co., 198 Cal.App.3d 1357 (1989). Misquoted or fabricated quotes attributed to National Steel Products Co. v. Superior Court, 164 Cal.App.3d 476 (1985). Several additional misquotes and garbled citations across three submitted versions of the brief. Revised versions attempted to silently “fix” errors without disclosing their origin in AI output. Ruling/Sanction: The Special Master (Judge Wilner) struck all versions of Plaintiff’s supplemental brief, denied the requested discovery relief, and imposed:
Key Judicial Reasoning: The submission and re-submission of AI-generated material without verification, especially after warning signs were raised, was deemed reckless and improper. The court emphasized that undisclosed AI use that results in fabricated law undermines judicial integrity. While individual attorneys were spared, the firms were sanctioned for systemic failure in verification and supervision. The Special Master underscored that the materials nearly made it into a judicial order, calling that prospect “scary” and demanding “strong deterrence.” |
||||||||
Flowz Digital v. Caroline Dalal | C.D. Cal (USA) | 5 May 2025 | Lawyer | Lexis+AI | Fabricated citation and misrepresented precedents | Order to show cause | — | |
In their Response to the Order to show Cause, Counsel specified that they used Lexis+AI, and stressed that "LexisNexis itself has publicly emphasized the reliability of its Lexis+ AI platform, marketing it as providing “hallucination-free legal citations” specifically to avoid citation errors." |
||||||||
Lozano González v. Roberge | Housing Administrative Tribunal (Canada) | 1 May 2025 | Pro Se Litigant | ChatGPT | Fabricated norms (due to AI-based translation) | — | — | |
The landlord sought to repossess a rental property, claiming the lease renewal was suspended based on a misinterpretation of Quebec's civil code articles. He used ChatGPT to translate these articles, which resulted in a completely different meaning. The Tribunal found the repossession request invalid as it was based on a date prior to the lease's end. The Tribunal rejected the claim of abuse, accepting the landlord's sincere belief in his misinterpretation, influenced by AI translation, and noted his language barrier and residence in Mexico. The Tribunal advised the landlord to seek reliable legal advice in the future. |
||||||||
Gustafson v. Amazon.com | D. Arizona (USA) | 30 April 2025 | Pro Se Litigant | Implied | One fake case | Warning | — | |
Nexgen Pathology Services Ltd v. Darcueil Duncan | (Trinidad & Tobago) | 30 April 2025 | Lawyer | Implied | 7 non-existent or misrepresented cases | Court referred the matter to the Disciplinary Committee | — | |
AI Use: Counsel denied using AI directly and attributed the hallucinations to “Google and Google Scholar” searches by a junior research assistant. However, the court found the citation pattern highly characteristic of generative AI hallucinations, including plausible-sounding but non-existent authority names and improper formatting. Counsel acknowledged a lack of adequate supervision and admitted that the cited authorities were never verified nor included in the bundle. Hallucination Details: Seven cited authorities were found to be fictitious or mischaracterized, including:
These were used to support the implied obligation to repay employer-sponsored training, the core issue of the case. None were available in legal databases or official archives, and no hard copies were ever submitted. Ruling/Sanction: While the court awarded judgment for the Claimant on the breach of contract claim, it found the citation misconduct egregious and referred the matter to the Disciplinary Committee of the Law Association. The Court noted that hallucinated citations undermine judicial integrity and must be proactively prevented. Key Judicial Reasoning: Justice Westmin James emphasized that lawyers must not submit unverifiable or fictitious authority, whether generated by AI or not. He underscored that the legal system depends on the accuracy of submissions, and that even unintentional use of hallucinated material violates the duty of candour and may constitute professional misconduct. |
||||||||
Source: Natural & Artificial Intelligence in Law | ||||||||
Moales v. Land Rover Cherry Hill | D. Connecticut (USA) | 30 April 2025 | Pro Se Litigant | Unidentified | Misrepresentation of several key federal securities law precedents | Plaintiff warned to ensure accuracy of future submissions | — | |
AI Use: The court stated that “Moales may have used artificial intelligence in drafting his submissions,” citing widespread concerns over AI hallucination. It noted that several citations in his complaint and show-cause response were plainly incorrect or irrelevant. While Moales did not admit AI use, the court cited Strong v. Rushmore Loan Mgmt. Servs., 2025 WL 100904 (D. Neb.) and Mata v. Avianca to contextualize its concern. Hallucination Details: Cited Ernst & Ernst v. Hochfelder, 425 U.S. 185 (1976), and S.E.C. v. W.J. Howey Co., 328 U.S. 293 (1946) as supporting the existence of a federal common law fiduciary duty—an inaccurate legal proposition. The court characterized such misuses as “the norm rather than the exception” in Moales’s submissions. It stopped short of identifying all misused authorities but made clear that the inaccuracies were pervasive. Ruling/Sanction: The complaint was dismissed for lack of subject matter jurisdiction under Rule 12(h)(3). Moales was permitted to file an amended complaint by May 28, 2025, but was warned that future filings must be factually and legally accurate. The court declined to reach the venue issue or impose immediate sanctions but warned Moales that misrepresentation of law may violate Rule 11. Key Judicial Reasoning: The court found no basis for federal question jurisdiction and rejected Moales’s reliance on the Declaratory Judgment Act, constructive trust theories, and a nonexistent “federal common law of securities.” It also held that Moales failed to plausibly allege the amount in controversy necessary for diversity jurisdiction. |
||||||||
Anonymous v. Anonymous | Israel (Israel) | 29 April 2025 | Lawyer | Implied | At least five non-existent Israeli family court decisions, with fictitious file numbers and invented citations | Petition dismissed in limine; Plaintiff’s counsel ordered to pay ₪1,500 in personal costs to the state and ₪3,500 to the opposing party | 5000 ILS | |
AI Use: The plaintiff’s attorney denied deliberate use of generative AI, claiming the wrong file numbers were inserted by mistake. The court rejected this explanation, finding the hallucinated decisions did not exist in any legal archive and could not plausibly arise from mere misnumbering. The court accepted the defendant’s assertion that the fabricated citations originated from generative AI. Hallucination Details: Out of five rulings cited in the petition, three were not found in any legal database. Two additional cases were filed after the hearing, but neither matched the original citations or contained the propositions advanced in the pleading. The court found the overall drafting pattern aligned with generative AI hallucination phenomena. Ruling/Sanction: Judge Merav Eliyahu dismissed the petition and imposed personal costs of ₪1,500 against Plaintiff’s counsel (payable to the state) and ₪3,500 (payable to the opposing party). She cited Supreme Court precedent and ethical commentary emphasizing the risks of hallucinated legal drafting. She emphasized that lawyers must not rely blindly on AI tools and must always verify the authenticity of legal authorities cited in pleadings. Key Judicial Reasoning: The judge found that legal pleadings are the “foundational documents of judicial proceedings” and must be “accurate, reliable, and competently drafted.” Submitting fictitious judgments constitutes not only a procedural abuse but an ethical breach. Even absent bad faith, failure to verify AI-generated legal content breaches a lawyer’s core obligations. |
||||||||
Simpson v. Hung Long Enterprises Inc. | B.C. Civil Resolution Tribunal (Canada) | 25 April 2025 | Pro Se Litigant | Unidentified | Fictitious citations | Other side compensated for time spent through costs order (500 CAD) | — | |
"Ms. Simpson referred to a non-existent CRT case to support a patently incorrect legal position. She also referred to three Supreme Court of Canada cases that do not exist. Her submissions go on to explain in detail what legal principles those non-existent cases stand for. Despite these deficiencies, the submissions are written in a convincingly legal tone. Simply put, they read like a lawyer wrote them even though the underlying legal analysis is often wrong. These are all common features of submissions generated by artificial intelligence." [...] "25. I agree with Hung Long that there are two extraordinary circumstances here that justify compensation for its time. The first is Ms. Simpson’s use of artificial intelligence. It takes little time to have a large language model create lengthy submissions with many case citations. It takes considerably more effort for the other party to wade through those submissions to determine which cases are real, and for those that are, whether they actually say what Ms. Simpson purported they did. Hung Long’s owner clearly struggled to understand Ms. Simpson’s submissions, and his legal research to try to understand them was an utter waste of his time. I reiterate my point above that Ms. Simpson’s submissions cited a non-existent case in support of a legal position that is the precise opposite of the existing law. This underscores the impact on Hung Long. How can a self-represented party respond to a seemingly convincing legal argument that is based on a case it is impossible to find? 26. I am mindful that Ms. Simpson is not a lawyer and that legal research is challenging. That said, she is responsible for the information she provides the CRT. I find it manifestly unfair that the burden of Ms. Simpson’s use of artificial intelligence should fall to Hung Long’s owner, who tried his best to understand submissions that were not capable of being understood. While I accept that Ms. Simpson did not knowingly provide fake cases or misleading submissions, she was reckless about their accuracy." |
||||||||
Benjamin v. Costco Wholesale Corp | E.D.N.Y. (USA) | 24 April 2025 | Lawyer | ChatOn | Five fabricated case citations and quotations | Monetary sanction; public reprimand; order to serve client with decision; no disciplinary referral due to candor and remediation | 1000 USD | |
AI Use: Counsel used ChatOn to rewrite a reply brief with case law, under time pressure, without verifying the outputs. The five cases did not exist; citations were entirely fictional. Counsel later admitted this in a sworn declaration and at hearing, describing her actions as a lapse caused by workload and inexperience with AI. Hallucination Details: Fabricated cases included:
None of these cases matched any legal source. Counsel filed them as part of a sworn statement under penalty of perjury. Ruling/Sanction: The court imposed a $1,000 sanction payable to the Clerk and ordered counsel to serve the order on her client and file proof of service. The court acknowledged her sincere remorse and remedial CLE activity, but emphasized the seriousness of submitting hallucinated cases under oath. Sanctions were tailored for deterrence, not punishment. Key Judicial Reasoning: Quoting Park v. Kim and Mata v. Avianca, the court held that submitting legal claims based on nonexistent authorities without checking them constitutes subjective bad faith. Signing a sworn filing without knowledge of its truth is independently sanctionable. Time pressure is not a defense. Lawyers cannot outsource core duties to generative AI and disclaim responsibility for the results. |
||||||||
Nichols v. Walmart | S.D. Georgia (USA) | 23 April 2025 | Pro Se Litigant | Implied | Multiple fictitious legal citations | Case dismissed for lack of subject matter jurisdiction and as a Rule 11 sanction for bad-faith submission of fabricated legal authorities | — | |
AI Use: Plaintiff submitted a motion to disqualify opposing counsel that cited multiple non-existent cases. She offered no clarification about how the citations were obtained or whether she had attempted to verify them. The court noted this failure and declined to excuse the misconduct, though it stopped short of attributing it directly to AI tools. Hallucination Details: The court reviewed Plaintiff’s motion and found that some of the cited cases did not exist. Despite being ordered to show cause, Plaintiff responded only with general statements about her good faith and complaints about perceived procedural unfairness, without addressing the origin or verification of the fake cases. Ruling/Sanction: The court dismissed the case for lack of subject matter jurisdiction and independently dismissed it as a sanction for bad-faith litigation under Rule 11. It found Plaintiff’s conduct—submitting fictitious legal authorities and refusing to take responsibility for them—warranted dismissal, even if monetary sanctions were not appropriate. The court cited Mata v. Avianca, Morgan v. Community Against Violence, and O’Brien v. Flick as relevant precedents affirming the sanctionability of hallucinated case law. Key Judicial Reasoning: Judge Hall held that Plaintiff’s conduct went beyond excusable error. Her submission of fabricated cases, refusal to explain their origin, and attempts to shift blame to perceived procedural grievances demonstrated bad faith. The court concluded that dismissal—though duplicative of the jurisdictional ground—was warranted as a standalone sanction to deter future abuse by similarly situated litigants. |
||||||||
Brown v. Patel et al. | S.D. Texas (USA) | 22 April 2025 | Pro Se Litigant | Unidentified | 5 non-existent cases and misrepresentation of three others | Warning | — | |
Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences. |
||||||||
Ferris v. Amazon.com Services | N.D. Mississippi (USA) | 16 April 2025 | Pro Se Litigant | ChatGPT | 7 fictitious cases | Plaintiff ordered to pay Defendant’s reasonable costs related to addressing the fabricated citations | — | |
AI Use: Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications. Hallucination Details: The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff’s intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon’s motion to dismiss. Ruling/Sanction: The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order. Key Judicial Reasoning: Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka (“The lie made into the rule of the world”), the court lamented the rise of “a post-truth world” and framed Ferris as an “avatar” of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11. |
||||||||
Sims v. Souily-Lefave | D. Nevada (USA) | 15 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
Vilmar Martins dos Santos v. State of Parana | Parana State (Brazil) | 11 April 2025 | Lawyer | Implied | Full appeal brief generated by AI tools, including the judges' names | Appeal dismissed; lawyers warned | — | |
Bevins v. Colgate-Palmolive Co. | E.D. Pa. (USA) | 10 April 2025 | Lawyer | Unidentified | 2 fake case citations and misstatements | Striking of Counsel’s Appearance + Referral to Bar Authorities + Client Notification Order | — | |
AI Use: Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns, but Counsel neither admitted nor explained the citations satisfactorily. Failure to comply with a standing AI order aggravated the sanctions. Hallucination Details: Two fake cases cited. Citation numbers and Westlaw references pointed to irrelevant or unrelated cases. No affidavit or real case documents were produced when ordered. Ruling/Sanction: Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar. Counsel was required to inform his client, Bevins, of the sanctions and the need for new counsel if re-filing. |
||||||||
Puerto Rico Soccer League NFP, Corp., et al. v. Federación Puertorriqueña de Futbol, et al. | D.C. Puerto Rico (USA) | 10 April 2025 | Lawyer | Unidentified | Fabricated citations, false quotes, and misrepresented precedents | Order to pay opposing counsel's fees | 1 USD | |
"A simple Google search would have shown the problems in some of Plaintiffs’ citations. In other instances, a quick search of the opinion Plaintiffs cited to would have revealed problems. Levying appropriate sanctions here promotes deterrence without being overly punitive, as contemplated by Rule 11(c)(4). The Court notes that, rather than showing contrition, the Memorandum in Compliance strikes a defiant and deflective tone. (Docket No. 190). It also contains more of the errors that plagued Plaintiffs’ previous four filings. For example, in the “Legal Standard” section of the memorandum, Plaintiffs cite to two cases for the proposition that sanctions are an “extreme remedy” appropriate for instances of prejudice or bad faith. One case makes no mention of sanctions and neither contain the proffered quote Id. at 3. The Court finds it problematic that Plaintiffs responded to a show cause order to address the problem of multiple inaccurate citations by providing a response containing more erroneous citations." |
||||||||
Bischoff v. South Carolina Department of Education | Admin Law Court, S.C. (USA) | 10 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10(Supp. 2024)." |
||||||||
Shekartz v. Assuta Ashdod Ltd | Israel (Israel) | 7 April 2025 | Lawyer | Implied | 25 fake citations | Monetary sanction | 7000 ILS | |
Ayinde v. Borough of Haringey | High Court (UK) | 3 April 2025 | Lawyer | Unidentified | 5 fabricated citations | Wasted costs order; Partial disallowance of Claimant’s costs; Order to send transcript to Bar Standards Board and Solicitors Regulation Authority | 11000 GBP | |
AI Use: The judgment states that the only other explanation for the fabricated cases was the use of artificial intelligence. Hallucination Details: The following five nonexistent cases were cited:
Ruling/Sanction: The court imposed wasted costs orders against both barrister and solicitor, reduced the claimant’s recoverable costs, and ordered the judgment to be provided to the BSB and SRA. |
||||||||
Daniel Jaiyong An v. Archblock, Inc. | Delaware Chancery (USA) | 3 April 2025 | Pro Se Litigant | Implied | At least three fabricated or misattributed case citations and multiple false quotations | Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions | — | |
AI Use: The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification. Hallucination Details: Examples included:
Court verified via Westlaw that some phrases returned only the petitioner’s motion as a result. Ruling/Sanction: Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI. Key Judicial Reasoning: The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity. |
||||||||
Zzaman v. HMRC | (UK) | 3 April 2025 | Pro Se Litigant | Implied | Cited cases did not provide authority for the propositions advanced | Warning | — | |
Plaintiff had disclosed the use of AI in preparing his statement of case. The court noted: "29. However, our conclusion was that Mr Zzaman’s statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman’s statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court’s time and that of opposing parties." |
||||||||
D'Angelo v. Vaught | Illinois (USA) | 2 April 2025 | Lawyer | Archie (Smokeball) | Fabricated citation | Monetary sanction | 2000 USD | |
Dehghani v. Castro | New Mexico DC (USA) | 2 April 2025 | Lawyer | Unidentified | At least 6 entirely fictitious case citations in a habeas corpus filing | Monetary sanction; required CLE on legal ethics and AI; mandatory self-reporting to NM and TX state bars; report of subcontractor to NY state bar; required notification to LAWCLERK | 1500 USD | |
AI Use: Counsel hired a freelance attorney through LAWCLERK to prepare a filing. He made minimal edits and admitted not verifying any of the case law before signing. The filing included multiple fabricated cases and misquoted others. The court concluded these were AI hallucinations, likely produced by ChatGPT or similar. Hallucination Details: Examples of non-existent cases cited include: Moncada v. Ruiz, Vega-Mendoza v. Homeland Security, Morales v. ICE Field Office Director, Meza v. United States Attorney General, Hernandez v. Sessions, and Ramirez v. DHS. All were either entirely fictitious or misquoted real decisions. Ruling/Sanction: The Court sanctioned Counsel by:
Key Judicial Reasoning: The court emphasized that counsel’s failure to verify cited cases, coupled with blind reliance on subcontracted work, constituted a violation of Rule 11(b)(2). The court analogized to other AI-sanctions cases. While the fine was modest, the court imposed significant procedural obligations to ensure deterrence. |
||||||||
Mazurek et al. v. Thomazoni | Parana State (Brazil) | 2 April 2025 | Lawyer | ChatGPT | Multiple fictitious court decisions | Referral to the bar; Monetary fine (1% of case value) | — | |
SQBox Solutions Ltd. v. Oak | BC Civil Resolution Tribunal (Canada) | 31 March 2025 | Pro Se Litigant | Implied | Wrong quotes, fabricated citation | Litigant lost on merits | — | |
"By relying on inaccurate and false AI submissions, Mr. Oak hurts his own case. I understand that Mr. Oak himself might not be aware that the submissions are misleading, but they are his submissions and he is responsible for them. " |
||||||||
Source: Steve Finlay | ||||||||
Sanders v. United States | Fed. claims court (USA) | 31 March 2025 | Pro Se Litigant | Implied | 5 fabricated citations | Warning | — | |
AI Use: The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred. Hallucination Details: Plaintiff cited:
Ruling/Sanction: The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions. Key Judicial Reasoning: Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law. |
||||||||
McKeown v. Paycom Payroll LLC | W.D. Oklahoma (USA) | 31 March 2025 | Pro Se Litigant | Implied | Several fake citations | Submission stricken out, and warning | — | |
AI Use: Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent. Hallucination Details: Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction. Ruling/Sanction: The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal. |
||||||||
AQ v. BT | CRT (Canada) | 28 March 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresented precedent | Arguments ignored | — | |
LYJ v. Occupational Therapy Board of Australia | Queensland (Australia) | 26 March 2025 | Pro Se Litigant | ChatGPT | At least one fabricated case citation, verified by the Tribunal as non-existent | No sanction; Fabrication noted; Warning issued regarding AI use | — | |
AI Use: The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material. Ruling/Sanction: The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process. Key Judicial Reasoning: Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources." |
||||||||
Source: Natural & Artificial Intelligence in Law | ||||||||
Kruglyak v. Home Depot U.S.A., Inc. | W.D. Virginia (USA) | 25 March 2025 | Pro Se Litigant | ChatGPT | Multiple fictitious case citations and misrepresentations | No monetary sanctions; Warning | — | |
AI Use: Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources. Hallucination Details: The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones. Ruling/Sanction: Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur. Key Judicial Reasoning: The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action. |
||||||||
Anonymous v. Anonymous | Israel (Israel) | 24 March 2025 | — | — | Fabricated citations | Application dismissed | 4000 ILS | |
Buckner v. Hilton Global | W.D. Kentucky (USA) | 21 March 2025 | Pro Se Litigant | Implied | At least 2 fake citations | Warning | — | |
In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal." |
||||||||
Williams v. Capital One Bank | D.D.C. (USA) | 18 March 2025 | Pro Se Litigant | CoCounsel | Multiple fictitious legal authorities | Case dismissed with prejudice for failure to state a claim. No monetary sanction imposed, but the court issued a formal warning | — | |
AI Use: While not formally admitted, Plaintiff’s opposition brief referred to “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission. Hallucination Details: At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output, but the burden rested with the party filing them. Ruling/Sanction: The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had full opportunity to amend. Though it did not impose monetary sanctions, it issued a strong warning and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations. |
||||||||
Stevens v. BJC Health System | Missouri CA (USA) | 18 March 2025 | Pro Se Litigant | Implied | 6 fabricated citations | Warning | — | |
Alkuda v. McDonald Hopkins Co., L.P.A. | N.D. Ohio (USA) | 18 March 2025 | Pro Se Litigant | Implied | Fake Citations | Warning | — | |
Condominium v. Lati Initiation and Construction Ltd | Israel (Israel) | 17 March 2025 | — | Implied | Three fake citations | Case dismissed | 1000 ILS | |
LMN v. STC (No. 2) | (New Zealand) | 17 March 2025 | Pro Se Litigant | Implied | 1 fabricated citation | Warning | — | |
A v. B | Florence (Italy) | 13 March 2025 | Lawyer | ChatGPT | Fabricated case law citations | No financial sanction; Formal Judicial Reprimand; Findings of procedural misuse | — | |
AI Use: The respondent retailer's defense cited Italian Supreme Court judgments that did not exist, claiming support for their arguments regarding lack of subjective bad faith. During subsequent hearings, it was admitted that these fake citations were generated by ChatGPT during internal research by an assistant, and that the lead lawyer had failed to independently verify them. Hallucination Details: The defense cited fabricated cassation rulings allegedly supporting subjective good-faith defenses. No such rulings could be found in official databases; the court confirmed their nonexistence. The hallucinated decisions related to defenses against counterfeit goods sales. Ruling/Sanction: The court declined to impose a financial sanction under Article 96 of the Italian Code of Civil Procedure. |
||||||||
Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club | (Ireland) | 13 March 2025 | Pro Se Litigant | Unidentified | Pseudolegal and irrelevant submissions | Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI | — | |
AI Use: Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs. Hallucination Details: Inappropriate invocation of "subornation to perjury," a term foreign to Irish law; constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context; assertions framed in hyperbolic, sensationalist terms without factual or legal basis; and general incoherence of pleadings, consistent with AI-generated pseudo-legal text. Ruling/Sanction: The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process. |
||||||||
Sheets v. Presseller | M.D. Florida (USA) | 11 March 2025 | Pro Se Litigant | Implied | Allegations by the other party that brief was AI-generated | Warning | — | |
Arnaoudoff v. Tivity Health Incorporated | D. Arizona (USA) | 11 March 2025 | Pro Se Litigant | ChatGPT | Fake citations | Court ignored fake citations and granted motion to correct the record | — | |
210S LLC v. Di Wu | Hawaii (USA) | 11 March 2025 | Pro Se Litigant | Implied | Fictitious citation and misrepresentation | Warning | — | |
Nguyen v. Wheeler | E.D. Arkansas (USA) | 3 March 2025 | Lawyer | Implied | 4 fictitious case citations, with fabricated quotes | Monetary sanction | 1000 USD | |
AI Use: Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation. Hallucination Details: Fictitious citations included:
None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated. Outcome/Sanction: The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions. |
||||||||
Ahmad Harsha v. Reuven Bornovski | (Israel) | 2 March 2025 | Lawyer | Implied | Fabricated citations | The defendant was given the opportunity to submit amended summaries in response | 4000 ILS | |
Dog Rights v. Ministry of Agriculture | High Court (Israel) | 28 February 2025 | Lawyer | Implied | Numerous fabricated legal citations and case law authorities | Petition dismissed on threshold grounds for lack of clean hands and inadequate legal foundation. Petitioner ordered to pay costs | 7000 ILS | |
AI Use: The judgment refers repeatedly to the use of “AI-based websites” and “artificial intelligence hallucinations,” and quotes prior decisions warning against reliance on AI without verification. Although no specific tool was named, the Court inferred use based on the stylistic pattern and total absence of real citations. The petitioner provided no clarification and ultimately sought to withdraw the petition once challenged. Hallucination Details: The legal authorities cited in the petition included:
The Court made efforts to locate the decisions independently but failed, and the petitioner never supplied the sources after being ordered to do so. Ruling/Sanction: The Court dismissed the petition in limine (on threshold grounds), citing “lack of clean hands” and “deficient legal infrastructure.” It imposed a ₪7,000 costs order against the petitioner and referred to the growing body of jurisprudence on AI hallucinations. The Court explicitly warned that future petitions tainted by similar conduct would face harsher responses, including possible professional discipline. Key Judicial Reasoning: Justice Noam Sohlberg, writing for the panel, observed that citing fictitious legal authorities, whether generated by AI or not, is as egregious as factual misrepresentation: "there is no justification for distinguishing, factually, between one form of deception and another. Deception that would justify the dismissal of a petition due to lack of clean hands—such deception, whether of this kind or that—is invalid in its essence; both forms demand proper judicial response. Their legal identity is the same." |
||||||||
Bunce v. Visual Technology Innovations, Inc. | E.D. Pa. (USA) | 27 February 2025 | Lawyer | ChatGPT | 2 fake case citations + citation of vacated and inapposite cases. | Monetary Sanction + Mandatory CLE on AI and Legal Ethics | 2500 USD | |
AI Use: Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal) without verifying the cases or researching the AI tool’s reliability. Hallucination Details: Two fake cases:
Misused cases:
Ruling/Sanction: The Court sanctioned Counsel $2,500, payable to the court, and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession. Key Judicial Reasoning: Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty, and the novelty of AI tools is not a defense. |
||||||||
Merz v. Kalama | W.D. Washington (USA) | 25 February 2025 | Pro Se Litigant | Unidentified | Wrong legal advice | — | — | |
Wadsworth v. Walmart (Morgan & Morgan) | Wyoming (USA) | 24 February 2025 | Lawyer | Internal tool (ChatGPT) | 8 of 9 Fake/Flawed Cases | $3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. | 5000 USD | |
AI Use: Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose. Hallucination Details: Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar". Ruling/Sanction: After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: a $3,000 fine on the drafter and revocation of his pro hac vice admission, and a $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing. Key Judicial Reasoning: The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations". |
||||||||
Plonit v. Sharia Court of Appeals | High Court (Israel) | 23 February 2025 | Lawyer | Unidentified | 36+ Flawed/Fake Citations | Petition Dismissed Outright; Warning re: Costs/Discipline. | — | |
AI Use: The petitioner’s counsel used an AI-based platform to draft the legal petition. Hallucination Details: The petition cited 36 fabricated or misquoted Israeli Supreme Court rulings. Five references were entirely fictional, 14 had mismatched case details, and 24 included invented quotes. Upon judicial inquiry, counsel admitted reliance on an unnamed website recommended by colleagues, without verifying the information's authenticity. The Court concluded that the errors were likely the product of generative AI. Ruling/Sanction: The High Court of Justice dismissed the petition on the merits, finding no grounds for intervention in the Sharia courts’ decisions. Despite the misconduct, no personal sanctions or fines were imposed on counsel, citing it as the first such incident to reach the High Court and adopting a lenient stance “far beyond the letter of the law.” However, the judgment was explicitly referred to the Court Administrator for system-wide attention. Key Judicial Reasoning: The Court issued a stern warning about the ethical duties of lawyers using AI tools, underscoring that professional obligations of diligence, verification, and truthfulness remain intact regardless of technological convenience. The Court suggested that in future cases, personal sanctions on attorneys might be appropriate to protect judicial integrity. |
||||||||
Unnamed Brazilian litigant | (Brazil) | 18 February 2025 | Lawyer | ChatGPT | Multiple fabricated case citations and doctrinal references | Appeal partially granted (reintegration suspended, rent imposed), but litigant sanctioned for bad-faith litigation; 10% fine on the updated value of the case; copy of filing sent to OAB-SC for disciplinary review | — | |
AI Use: The appellant’s counsel admitted to having used ChatGPT, claiming the submission of false case law was the result of “unintentional use.” The fabricated citations were used in an appeal against a reintegration of possession order in favor of the appellant’s stepmother and father’s heirs. Hallucination Details: The brief contained numerous non-existent judicial precedents and references to legal doctrine that were either incorrect or entirely fictional. The court described them as “fabricated” and considered them serious enough to potentially mislead the court. Ruling/Sanction: While the 6th Civil Chamber temporarily suspended the reintegration order, it imposed a 10% fine on the value of the claim for bad-faith litigation and ordered that a copy of the appeal be forwarded to the Santa Catarina section of the Brazilian Bar Association (OAB/SC) for further investigation. Key Judicial Reasoning: The court emphasized that the legal profession is a public calling entailing duties and responsibilities. It cautioned that AI must be used “with caution and restraint”. The chamber unanimously supported the sanction. |
||||||||
Saxena v. Martinez-Hernandez et al. | D. Nev. (USA) | 18 February 2025 | Pro Se Litigant | Implied | At least two fabricated citations. | Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor | — | |
AI Use: The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded that either he fabricated the citations or he relied on AI and failed to verify them. Hallucination Details:
The court found no plausible explanation for these citations other than AI generation or outright fabrication. Ruling/Sanction: The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith. Key Judicial Reasoning: Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief". |
||||||||
Geismayr v. The Owners, Strata Plan KAS 1970 | Civil Resolution Tribunal (Canada) | 14 February 2025 | Pro Se Litigant | Copilot | Ten fabricated citations | Citations ignored | — | |
Goodchild v State of Queensland | Queensland Industrial Relations Commission (Australia) | 13 February 2025 | Pro Se Litigant | "Internet searches" | Fabricated citation | Relevant submissions ignored | — | |
"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents." |
||||||||
Luck v Commonwealth of Australia | Federal Court (Australia) | 11 February 2025 | — | — | Non-existent legal authorities | The court dismissed the applicant's interlocutory application for disqualification and referral to a Full Court. | — | |
QWYN and Commissioner of Taxation | Administrative Review Tribunal of Australia (Australia) | 5 February 2025 | Lawyer | Copilot | Copilot generated a fabricated paragraph from legal guidance | The Tribunal affirmed the decision under review, rejecting the applicant's submissions based on the AI-generated content. | — | |
"The Applicant engaged the Copilot [Microsoft’s Artificial Intelligence product] in a range of probing questions pertaining to superannuation and taxation matters, upon which in part, it returned the following responses: The Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992, which introduced the new regime taxing superannuation benefits, states in paragraph 2.20 that “the Bill will provide a tax rebate of 15 per cent for disability superannuation pensions. This will apply to all disability pensions, irrespective of whether they are paid from a taxed or an untaxed source. The rebate recognises that disability pensions are paid as compensation for the loss of earning capacity and are not merely a form of retirement income.
|
||||||||
Valu v. Minister for Immigration and Multicultural Affairs | (Australia) | 31 January 2025 | Lawyer | ChatGPT | 16 non-existent court cases and fabricated quotes | Referral to Legal Services Commissioner | — | |
AI Use: Counsel used ChatGPT to generate a summary of cases for a submission, which included fictitious Federal Court decisions and invented quotes from a Tribunal ruling. He inserted this output into the brief without verifying the sources. Counsel later admitted this under affidavit, citing time pressure, health issues, and unfamiliarity with AI's risks. He noted that guidance from the NSW Supreme Court was only published after the filing. Hallucination Details: The 25 October 2024 submission cited at least 16 completely fabricated decisions (e.g. Murray v Luton [2001] FCA 1245, Bavinton v MIMA [2017] FCA 712) and included supposed excerpts from the AAT’s ruling that did not appear in the actual decision. The Court and the Minister’s counsel were unable to verify any of the cited cases or quotes. Ruling/Sanction: Judge Skaros ordered referral to the OLSC under the Legal Profession Uniform Law (NSW) 2014, noting breaches of rules 19.1 and 22.5 of the Australian Solicitors’ Conduct Rules. The Court accepted Counsel’s apology and health-related mitigation but found that the conduct fell short of professional standards and posed systemic risks given increasing AI use in legal practice. Key Judicial Reasoning: While acknowledging that Counsel corrected the record and showed contrition, the Court found that the damage, including wasted judicial resources and delay to proceedings, had already occurred. The ex parte email submitting corrected materials, without notifying opposing counsel, further compounded the breach. Given the public interest in safeguarding the integrity of litigation amidst growing AI integration, referral to the OLSC was deemed necessary, even without naming Counsel in the judgment. |
||||||||
Gonzalez v. Texas Taxpayers and Research Association | W.D. Texas (USA) | 29 January 2025 | Lawyer | LexisNexis AI | Fabricated citation(s), false quotes, misrepresented precedents | Plaintiff's response was stricken and monetary sanctions were imposed. | 3961 USD | |
In Gonzalez v. Texas Taxpayers and Research Association, the court found that Plaintiff's counsel, John L. Pittman III, included fabricated citations, miscited cases, and misrepresented legal propositions in his response to a motion to dismiss. Pittman initially denied using AI but later admitted to using LexisNexis's AI citation generator. The court granted the defendant's motion to strike the plaintiff's response and imposed monetary sanctions on Pittman, requiring him to pay $3,852.50 in attorney's fees and $108.54 in costs to the defendant. The court deemed this an appropriate exercise of its inherent power, given the abundance of technical and substantive errors in the brief, which hindered the defendant's ability to respond efficiently. |
||||||||
Olsen v Finansiel Stabilitet | High Court (UK) | 25 January 2025 | Pro Se Litigant | Implied | One fake legal case and summary | No finding of contempt, but the court indicated the conduct might be reflected in costs | — | |
Source: Natural & Artificial Intelligence in Law | ||||||||
Fora Financial Asset Securitization v. Teona Ostrov Public Relations | NY SC (USA) | 24 January 2025 | Lawyer | Implied | Several fake citations, 1 fake quotation | No sanction imposed; court struck the offending citations and warned that repeated occurrences may result in sanctions | — | |
AI Use: The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions. Ruling/Sanction: No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow. |
||||||||
Body by Michael Pty Ltd and Industry Innovation and Science Australia | Administrative Review Tribunal (Australia) | 24 January 2025 | Pro Se Litigant | ChatGPT | Fabricated references to non-existent cases (withdrawn prior to the hearing) | Fake references withdrawn before the hearing | — | |
"Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal. The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence." |
||||||||
Arajuo v. Wedelstadt et al | E.D. Wisconsin (USA) | 22 January 2025 | Lawyer | Unidentified | Multiple non-existent cases | Warning | — | |
AI Use: Counsel admitted using a “new legal research medium”, which appears to have been a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities. Hallucination Details: The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations made their way into the brief in the first place. Ruling/Sanction: Judge William Griesbach denied the motion for summary judgment on the merits but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated. Key Judicial Reasoning: The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable. |
||||||||
Strike 3 Holdings LLC v. Doe | C.D. California (USA) | 22 January 2025 | Lawyer | Unidentified | 3 entirely fictitious cases | — | — | |
Key Judicial Reasoning: Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law. |
||||||||
Candice Dias v Angle Auto Finance | Fair Work Commission (Australia) | 20 January 2025 | Pro Se Litigant | Implied | Fabricated citations | — | — | |
United States v. Hayes | E.D. Cal. (USA) | 17 January 2025 | Federal Defender | Unidentified | One fake case citation with fabricated quotation | Formal Sanction Imposed + Written Reprimand | — | |
AI Use: Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible. Hallucination Details: Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere. Ruling/Sanction: The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted. Key Judicial Reasoning: The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad-faith evasions after being given opportunities to correct the record. |
||||||||
Source: Volokh | ||||||||
Strong v. Rushmore Loan Management Services | D. Nebraska (USA) | 15 January 2025 | Pro Se Litigant | Implied | “highly suspicious” signs of generative AI use | Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions | — | |
O’Brien v. Flick and Chamberlain | S.D. Florida (USA) | 10 January 2025 | Pro Se Litigant | Implied | 2 fabricated citations | Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations | — | |
AI Use: Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.” Ruling/Sanction: The court dismissed the case with prejudice on dual grounds:
Key Judicial Reasoning: Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers. |
||||||||
Kohls v. Ellison | Minnesota (USA) | 10 January 2025 | Expert | GPT-4o | Fake Academic Citations | Expert Declaration Excluded | — | |
AI Use: Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections. Hallucination Details: The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations. Ruling/Sanction: Professor Hancock admitted the errors resulted from unchecked AI use, explaining that it deviated from his usual practice of verifying citations for academic papers, and affirmed that the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable. Key Judicial Reasoning: The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The court also noted the irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers. |
||||||||
Source: Volokh | ||||||||
Mavundla v. MEC | High Court (South Africa) | 8 January 2025 | Lawyer | Implied | At least 7 non-existent or irrelevant cases | Leave to appeal dismissed with costs; referral to Legal Practice Council | — | |
AI Use: The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspected ChatGPT or a similar tool was the source. The judge even ran prompts through ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers. Hallucination Details: Fabricated or misattributed cases included:
The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points. Ruling/Sanction:
Key Judicial Reasoning: Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.