This database tracks legal decisions in cases where generative AI produced hallucinated content: typically fake citations, but also other kinds of fabricated argument. It covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal (see excluded examples). It does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.
While seeking to be exhaustive (107 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media and online posts. Examples include:
- M. Hiltzik, AI ‘hallucinations’ are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
If you know of a case that should be included, feel free to contact me.
Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details |
---|---|---|---|---|---|---|---|---|
Doe v. Noem | D.C. DC (USA) | 1 July 2025 | Lawyer | Implied | One fabricated authority | Order to Show Cause | — | |
The fake citation in this brief was to Moms Against Poverty v. Dep’t of State, 2022 WL 17951329, at *3. The case docket can be found here. |
||||||||
Shahid v. Esaam | Georgia CA (USA) | 30 June 2025 | Judge, Lawyer | Unidentified | Several fabricated cases, as well as misrepresented ones, some of which were adopted by the trial court below | Case remanded; monetary penalty | 2500 USD | |
" After the trial court entered a final judgment and decree of divorce, Nimat Shahid (“Wife”) filed a petition to reopen the case and set aside the final judgment, arguing that service by publication was improper. The trial court denied the motion, using an order that relied upon non-existent case law." "We are troubled by the citation of bogus cases in the trial court's order. As the reviewing court, we make no findings of fact as to how this impropriety occurred, observing only that the order purports to have been prepared by Husband's attorney, Diana Lynch. We further note that Lynch had cited the two fictitious cases that made it into the trial court's order in Husband's response to the petition to reopen, and she cited additional fake cases both in that Response and in the Appellee's Brief filed in this Court. " |
||||||||
Jakes v. Youngblood | W.D. Penn. (USA) | 26 June 2025 | Lawyer | Unidentified | Multiple fabricated quotes, including from the court's previous opinions, and misrepresentations | Motion is dismissed; Order to show cause | — | |
Source: Volokh |
||||||||
Couvrette v. Wisnovsky | Oregon (USA) | 14 June 2025 | Lawyer | Unidentified | Fifteen non-existent cases and misrepresented quotations from seven real cases | Order to Show Cause re: Sanctions | — | |
Counsel said that "The inclusion of inaccurate citations was inadvertent and the result of reliance on an automated legal citation tool." |
||||||||
Rochon Eidsvig & Rochon Hafer v. JGB Collateral | Texas CA (USA) | 12 June 2025 | Lawyer | Implied | Four fabricated cases | 8 mandatory hours of Continuing Legal Education on ethics and AI | — | |
"It is never acceptable to rely on software or technology—no matter how advanced—without reviewing and verifying the information. The use of AI or other technology does not excuse carelessness or failure to follow professional standards. Technology can be helpful, but it cannot replace a lawyer’s judgment, research, or ethical responsibilities. The practice of law changes with the use of new technology, but the core duties of competence and candor remain the same. Lawyers must adapt to new tools without lowering their standards." |
||||||||
Goins v. Father Flanagan's Boys Home | D. Nebraska (USA) | 5 June 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresented authorities | Warning | — | |
" This Court's local rules permit the use of generative artificial intelligence programs, but all parties, including pro se parties, must certify “that to the extent such a program was used, a human signatory of the document verified the accuracy of all generated text, including all citations and legal authority,” NECivR 7.1(d)(4)(B). The plaintiff's brief contains no such certification, nor does the plaintiff deny using artificial intelligence. See filing 27 at 9.
|
||||||||
Lipe v. Albuquerque Public Schools | D. New Mexico (USA) | 4 June 2025 | Lawyer | Implied | Fabricated Citations | Show cause proceedings | — | |
The court noted that Counsel was still citing fabricated authorities even while the show cause proceedings were ongoing in parallel. |
||||||||
Powhatan County School Board v. Skinger et al | E.D. Virginia (USA) | 2 June 2025 | Pro Se Litigant | ChatGPT | Fabricated citations | Relevant motions stricken | — | |
"The pervasive misrepresentations of the law in Lucas' filings cannot be tolerated. It serves to make a mockery of the judicial process. It causes an enormous waste of judicial resources to try to find cited cases that do not exist and to determine whether a cited authority is relevant or binding, only to determine that most are neither. In like fashion, Lucas' adversaries also must run to ground the nonexistent cases or address patently irrelevant ones. The adversaries must thus incur needless legal fees and expenses caused by Lucas' pervasive citations to nonexistent or irrelevant cases. [...] However, as previously noted Lucas appears to be judgment proof so monetary sanctions likely will not deter her from the abusive practices reflected in her filings and in her previously announced, consistently followed, abuse of the litigation proceedings created by the Individuals with Disabilities Education Act, 20 U.S.C. § 1400, et seq. (“IDEA”). So, the Court must find some other way to protect the interests of justice and to deter Lucas from the abuses which have come to mark her approach to participation as a defendant in the judicial process. In this case, the most appropriate remedy is to strike Lucas' filings where they are burdensome by virtue of volume and exceed permitted page limits, where they are not cogent or understandable (when given the generous latitude afforded pro se litigants), and where they misrepresent the law by citing nonexistent or utterly irrelevant cases." |
||||||||
Andersen v. Olympus as Daybreak | D. Utah (USA) | 30 May 2025 | Pro Se Litigant | Implied | Fabricated citations and misrepresentation of past cases | Warning | — | |
In an earlier decision, the court had already warned the plaintiff against "any further legal misrepresentations in future communications". |
||||||||
Delano Crossing v. County of Wright | Minnesota Tax Court (USA) | 29 May 2025 | Lawyer | Unidentified | Five fabricated citations | Breach of Rule 11, but no monetary sanction warranted; referred counsel to Lawyers Professional Responsibility Board | — | |
AI Use: Attorneys for Wright County submitted a memorandum in support of a motion for summary judgment that contained five case citations generated by artificial intelligence; these citations did not refer to actual judicial decisions. Much of the brief appeared to be AI-written. The attorney who signed and filed the brief acknowledged that the cited authorities did not exist and that much of the brief was drafted by AI. Ruling/Sanction: The Court found Counsel's conduct violated Rule 11.02(b) of the Minnesota Rules of Civil Procedure, as fake case citations cannot support any legal claim and there's an affirmative duty to investigate the legal underpinnings of a pleading. The Court found no merit in Counsel's defense, noting that the substitute cases she offered did not support the legal contentions in the brief, and the brief demonstrated a fundamental misunderstanding of legal standards. The Court did not find her insinuation that another, accurate motion document existed to be credible. Although the Court considered summarily denying the County's motion as a sanction, it ultimately denied the motion on its merits in a concurrent order because the arguments were so clearly incorrect. The Court declined to order further monetary sanctions, believing its Order to Show Cause and the current Order on Sanctions were sufficient to deter Counsel from relying solely on AI for case citations or legal conclusions in the future. However, the Court referred the matter concerning Counsel's conduct to the Minnesota Lawyers Professional Responsibility Board for further review, as the submission of an AI-generated brief with fake citations raised questions regarding her honesty, trustworthiness, and fitness as a lawyer. |
||||||||
Byoplanet International, LLC v. Knecht / Gilstrap / Johansson | S.D. Florida (USA) | 29 May 2025 | Lawyer | Unidentified | Fabricated citations and quotes | Order to show cause | — | |
In their Answer, Counsel revealed that "specific citations and quotes in question were inadvertently derived from internal draft text prepared using generative AI research tools designed to expedite legal research and brief drafting". |
||||||||
Anita Krishnakumar et al. v. Eichler Swim and Tennis Club | CA SC (USA) | 29 May 2025 | Lawyer | Implied | Fabricated citation and quotes | Argument lost on the merits in tentative ruling | — | |
The underlying motion was later withdrawn, with the result that the tentative ruling was not adopted. |
||||||||
Mid Cent. Operating Eng'rs Health v. Hoosiervac | S.D. Ind. (USA) | 28 May 2025 | Lawyer | Unidentified | 3 fake case citations | Monetary Sanction | 6000 USD | |
(Earlier report and recommendation can be found here.) AI Use: Counsel admitted at a show cause hearing that he used generative AI tools to draft multiple briefs and did not verify the citations provided by the AI, mistakenly trusting their apparent credibility without checking. Hallucination Details: Three distinct fake cases across filings. Each was cited in a separate brief, with no attempt at Shepardizing or KeyCiting. Ruling/Sanction: The Court recommended a $15,000 sanction ($5,000 per violation), with the matter referred to the Chief Judge for potential additional professional discipline. Counsel was also ordered to notify Hoosiervac LLC’s CEO of the misconduct and file a certification of compliance. Eventually, the court fined Counsel $6,000, stressing that this was sufficient. Key Judicial Reasoning: The judge stressed that "It is one thing to use AI to assist with initial research, and even nonlegal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney. As noted in the case of an expert witness, an individual's "citation to fake, AI-generated sources . . . shatters his credibility." See Kohls v. Ellison, No. 0:24-cv-03754-LMP-DLM, Doc. 46 at *10 (D. Minn. Jan. 10, 2025)." |
||||||||
Brick v. Gallatin County | D. Montana (USA) | 27 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
Concord v. Anthropic | N.D. California (USA) | 23 May 2025 | Expert | Claude.ai | Fabricated attribution and title for (existing) article | Part of brief was struck; court took it into account as a matter of expert credibility | — | |
Counsel's explanation of what happened can be found here. |
||||||||
Source: Volokh |
||||||||
Luther v. Oklahoma DHS | W.D. Oklahoma (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
" The Court has serious reason to believe that Plaintiff used artificial intelligence tools to assist in drafting her objection. While the use of such tools is not prohibited, artificial intelligence often cites to legal authorities, like Cabrera, that do not exist. Continuing to cite to non-existent cases will result in sanctions up to and including dismissal. " |
||||||||
Rotonde v. Stewart Title Insurance Company | New York (USA) | 23 May 2025 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | |
Garner v. Kadince | Utah C.A. (USA) | 22 May 2025 | Lawyer | ChatGPT | Fabricated Legal Authorities | Three targeted sanctions, including a monetary penalty | 1000 USD | |
AI Use: The fabricated citations originated from a ChatGPT query submitted by an unlicensed law clerk at Petitioner's law firm. Neither Counsel reviewed the petition’s contents before filing. The firm had no AI use policy in place at the time, though they implemented one after the order to show cause was issued. Hallucination Details: Chief among the hallucinations was Royer v. Nelson, which Respondents demonstrated existed only in ChatGPT’s output and in no official database. Other cited cases were also inapposite or unverifiable. Petitioner’s counsel admitted fault and stated they were unaware AI had been used during drafting. Ruling/Sanction: The court issued three targeted sanctions:
Key Judicial Reasoning: The panel (Per Curiam) emphasized that the conduct, while not malicious, still diverted judicial resources and imposed unnecessary burdens on the opposing party. Unlike Mata or Hayes, the attorneys in this case quickly admitted the issue and cooperated, which the court acknowledged. Nonetheless, the submission of fabricated law—especially under counsel's signature—breaches core duties of candor and verification, warranting formal sanctions. The court warned that Utah’s judiciary cannot be expected to verify every citation and must be able to trust lawyers to do so. |
||||||||
Evans et al v. Robertson et al | E.D. Michigan (USA) | 21 May 2025 | Pro Se Litigant | Implied | Non-existent or misrepresented cases | Warning | — | |
Happiness Idehen & Felix Ogieva v. Gloria Stoute-Phillip | N.Y. Civil Court (USA) | 21 May 2025 | Lawyer | Implied | At least 7 fabricated citations | Show cause proceedings that might lead to sanctions | — | |
Bauche v. Commissioner of Internal Revenue | US Tax Court (USA) | 20 May 2025 | Pro Se Litigant | Implied | Nonexistent cases | Warning | — | |
" While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future. " |
||||||||
Versant Funding v. Teras Breakbulk Ocean Navigation Enterprises | S.D. Florida (USA) | 20 May 2025 | Lawyer | Unidentified | 1 fabricated citation | Joint and several liability for Plaintiff’s attorneys' fees and costs incurred in addressing the hallucinated citation; CLE requirement on AI ethics; Monetary fines | 1500 USD | |
AI Use: First Counsel, who had not previously used AI for legal work, used an unspecified AI tool to assist with drafting a response. He failed to verify the citation before submission. Second Counsel, as local counsel, filed the response without checking the content or accuracy, even though he signed the document. Second Counsel then said that he had initiated "procedural safeguards to prevent this error from happening again by ensuring he, and local counsel, undertake a comprehensive review of all citations and arguments filed with this and every court prior to submission to ensure their provenance can be traced to professional non-AI sources." Hallucination Details: The hallucinated case was cited as controlling Delaware authority on privilege assignments. When challenged by Plaintiff, Defendants initially filed a bare withdrawal without explanation. Only upon court order did they disclose the AI origin and acknowledge the error. Counsel personally apologized to the court and opposing counsel. Ruling/Sanction: Judge William Matthewman imposed a multi-part sanction:
The Court emphasized that the submission of hallucinated citations—particularly when filed and signed by two attorneys—constitutes reckless disregard for procedural and ethical obligations. Though no bad faith was found, the conduct was sanctionable under Rule 11, § 1927, the Court’s inherent authority, and local professional responsibility rules. Key Judicial Reasoning: The Court distinguished this case from more egregious incidents (O’Brien v. Flick, Thomas v. Pangburn) because the attorneys admitted their error and did not lie or attempt to cover it up. However, the delay in correction and failure to check the citation in the first place were serious enough to warrant monetary penalties and educational obligations. |
||||||||
Johnson v. Dunn | N.D. Alabama (USA) | 16 May 2025 | Lawyer | ChatGPT | Fabricated citations | Order to Show Cause | — | |
In their Response to the OSC, Counsel confessed to the use of AI tools. (As recounted by Above the Law, the law firm involved quickly deleted a recent post it had made about using AI.) |
||||||||
Keaau Development Partnership LLC v. Lawrence | Hawaii ICA (USA) | 15 May 2025 | Lawyer | Implied | One non-existent case with misattributed pinpoint citations from unrelated real cases | Monetary sanction against counsel personally; no disciplinary referral | 100 USD | |
AI Use: Counsel filed a motion to dismiss appeal that cited “Greenspan v. Greenspan, 121 Hawai‘i 60, 71, 214 P.3d 557, 568 (App. 2009).” The court found that:
Ruling/Sanction:
The amount reflects counsel’s candor and corrective measures, but the court noted that federal courts have imposed higher sanctions in similar cases. |
||||||||
Beenshoof v. Chin | W.D. Washington (USA) | 15 May 2025 | Pro Se Litigant | Implied | One non-existent case | No sanction imposed; court reminded Plaintiff of Rule 11 obligations | — | |
AI Use: The plaintiff, proceeding pro se, cited “Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)” in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here. |
||||||||
USA v. Burke | M.D. Florida (USA) | 15 May 2025 | Lawyer | Westlaw's AI tools, GPT4.5 Deep Research (Pro) | Multiple fake citations and misquotations | Motion dismissed, and counsel ordered to refile it without fake citations | — | |
Counsel later explained how the motion came to be: see here. |
||||||||
Ramirez v. Humala | E.D.N.Y. (USA) | 13 May 2025 | Paralegal | Unidentified | Four fabricated federal and state case citations | Monetary sanction jointly imposed on counsel and firm; order to inform client | 1000 USD | |
AI Use: A paralegal used public search tools and unspecified “AI-based research assistants” to generate legal citations. The resulting hallucinated cases were passed to Counsel, who filed them without verification. Four out of eight cited cases were found to be fictitious:
Ruling/Sanction: The court imposed a $1,000 sanction against Counsel and her firm. Counsel was ordered to serve the sanction order on her client and file proof of service. The court declined harsher penalties, crediting her swift admission, apology, and internal reforms. Key Judicial Reasoning: The court found subjective bad faith due to the complete absence of verification. It cited a range of other AI-related sanction decisions, underscoring that even outsourcing to a “diligent and trusted” paralegal is not a defense when due diligence is absent. |
||||||||
Source: Volokh |
||||||||
In re Thomas Grant Neusom | M.D. Florida (USA) | 8 May 2025 | Lawyer | Unidentified | Multiple fictitious or misrepresented case citations | Suspension from practice before the Middle District of Florida for one year; immediate prohibition on accepting new federal matters; conditional reinstatement | — | |
(Grievance Committee Report available here.) AI Use: Neusom told the grievance committee that he “may have used artificial intelligence” in preparing filings, and that any hallucinated cases were not deliberately fabricated but may have come from AI tools. The filings in question included a notice of removal and a motion for summary judgment. The judge later noted a pattern of citations inconsistent with established case law and unsupported by known databases. Hallucination Details: Citations included cases that either did not exist or were grossly mischaracterized. Notably:
Neusom failed to produce the full texts of the cited cases when requested and instead filed a 721-page exhibit in violation of court orders. Ruling/Sanction: The court adopted the grievance committee’s recommendation and imposed a one-year suspension. Neusom is prohibited from accepting new federal cases in the Middle District of Florida during the suspension and must:
Key Judicial Reasoning: The court found that Neusom violated Rules 4-1.3, 4-3.3(a)(3), 4-3.4(c), and 4-8.4(c) of the Florida Rules of Professional Conduct. His failure to verify AI-generated content, compounded by noncompliance with orders and false statements to opposing counsel, demonstrated a pattern of recklessness and dishonesty. The court emphasized that federal proceedings require a high standard of diligence and that invoking AI cannot excuse failure to meet professional obligations. |
||||||||
Source: Natural & Artificial Intelligence in Law |
||||||||
Matter of Raven Investigations & Security Consulting | GAO (USA) | 7 May 2025 | Pro Se Litigant | Unidentified | Multiple fabricated citations to prior GAO decisions | Warning | — | |
AI Use: GAO requested clarification after identifying case citation irregularities. The protester confirmed that their representative was not a licensed attorney and had relied on a combination of public tools, AI-based platforms, and secondary summaries, which produced fabricated or misattributed citations. Hallucination Details: Examples included:
The fabrications mirrored patterns typical of AI hallucinations. Ruling/Sanction: Although the protest was dismissed as academic, GAO addressed the citation misconduct. It did not impose sanctions in this case but warned that future submission of non-existent authority could lead to formal disciplinary action—including dismissal, cost orders, and bar referrals (in the case of attorneys). |
||||||||
Lacey v. State Farm General Insurance | C.D. Cal (USA) | 6 May 2025 | Lawyer | CoCounsel, Westlaw Precision, Google Gemini | Nine citations incorrect or fabricated; multiple invented quotations from real or fictitious cases | Striking of briefs; denial of requested discovery relief; Large monetary sanctions jointly and severally against the two law firms | 31100 USD | |
AI Use: Counsel used CoCounsel, Westlaw’s AI tools, and Google Gemini to generate a legal outline for a discovery-related supplemental brief. The outline contained hallucinated citations and quotations, which were incorporated into the filed brief by colleagues at both Ellis George and K&L Gates. No one verified the content before filing. After the Special Master flagged two issues, counsel refiled a revised brief—but it still included six AI-generated hallucinations and did not disclose AI use until ordered to respond. Hallucination Details: At least two cases did not exist at all, including a fabricated quotation attributed to Booth v. Allstate Ins. Co., 198 Cal.App.3d 1357 (1989). Misquoted or fabricated quotes attributed to National Steel Products Co. v. Superior Court, 164 Cal.App.3d 476 (1985). Several additional misquotes and garbled citations across three submitted versions of the brief. Revised versions attempted to silently “fix” errors without disclosing their origin in AI output. Ruling/Sanction: The Special Master (Judge Wilner) struck all versions of Plaintiff’s supplemental brief, denied the requested discovery relief, and imposed:
Key Judicial Reasoning: The submission and re-submission of AI-generated material without verification, especially after warning signs were raised, was deemed reckless and improper. The court emphasized that undisclosed AI use that results in fabricated law undermines judicial integrity. While individual attorneys were spared, the firms were sanctioned for systemic failure in verification and supervision. The Special Master underscored that the materials nearly made it into a judicial order, calling that prospect “scary” and demanding “strong deterrence.” |
||||||||
Rotonde v. Stewart Title Insurance Co | NY SC (USA) | 6 May 2025 | Pro Se Litigant | Implied | Several non-existent legal citations | Motion to dismiss granted in full; no sanction imposed, but court formally warned plaintiff | — | |
AI Use: The court observed that “some of the cases that plaintiff cites… do not exist,” and noted it had “tried, in vain,” to find them. While no explicit AI use is admitted by the plaintiff, the pattern and specificity of the fabricated citations are characteristic of LLM-generated hallucinations. Ruling/Sanction: The court dismissed all five causes of action—including negligence, tortious interference, aiding and abetting fraud, declaratory judgment, and breach of implied covenant of good faith and fair dealing—as either untimely or duplicative/deficient on the merits. It declined to impose sanctions but explicitly invoked Dowlah v. Professional Staff Congress, 227 AD3d 609 (1st Dept. 2024), and Will of Samuel, 82 Misc 3d 616 (Sur. Ct. 2024), to warn plaintiff that any future citation of fictitious cases would result in sanctions. Key Judicial Reasoning: Justice Jamieson noted that while the court is “sensitive to plaintiff's pro se status,” that does not excuse disregard of procedural rules or the submission of fictitious citations. The court emphasized that its prior decision in related litigation in 2022 undermined plaintiff’s tolling claims, and that Executive Order extensions during the COVID-19 pandemic did not rescue otherwise-expired claims. The hallucinated citations failed to salvage plaintiff’s fraud and tolling theories, and their use was treated as an aggravating—though not yet sanctionable—factor. |
||||||||
X v. Board of Trustees of Governors State University | N.D. Illinois (USA) | 6 May 2025 | Pro Se Litigant | Implied | One fabricated citation | Warning | — | |
"For that principal [sic] [X] cites a case, Gunn v. McKinney, 259 F.3d 824, 829 (7th Cir. 2001), which neither defense counsel nor the Court has been able to locate. The Court reminds [X] that Federal Rule of Civil Procedure 11 applies to pro se litigants, and sanctions may result from such conduct, especially if the citation to Gunn was not merely a typographical or citation error but instead referred to a non-existent case. By presenting a pleading, written motion, or other paper to the Court, an unrepresented party acknowledges they will be held responsible for its contents. See Fed. R. Civ. P. 11(b)." |
||||||||
Harris v. Take-Two Interactive Software | D. Colorado (USA) | 6 May 2025 | Pro Se Litigant | Implied | Fabricated case law and quotations | Warning | — | |
Court held that: "The use of fictitious quotes or cases in filings may subject a party, including a pro se party, to sanctions pursuant to Federal Rule of Civil Procedure 11 as “pro se litigants are subject to Rule 11 just as attorneys are.” |
||||||||
Flowz Digital v. Caroline Dalal | C.D. Cal (USA) | 5 May 2025 | Lawyer | Lexis+AI | Fabricated citation, and misrepresented precedents | Order to show cause | — | |
In their Response to the Order to show Cause, Counsel specified that they used Lexis+AI, and stressed that "LexisNexis itself has publicly emphasized the reliability of its Lexis+ AI platform, marketing it as providing “hallucination-free legal citations” specifically to avoid citation errors." |
||||||||
Gustafson v. Amazon.com | D. Arizona (USA) | 30 April 2025 | Pro Se Litigant | Implied | One fake case | Warning | — | |
Moales v. Land Rover Cherry Hill | D. Connecticut (USA) | 30 April 2025 | Pro Se Litigant | Unidentified | Misrepresentation of several key federal securities law precedents | Plaintiff warned to ensure accuracy of future submissions | — | |
AI Use: The court stated that “Moales may have used artificial intelligence in drafting his submissions,” citing widespread concerns over AI hallucination. It noted that several citations in his complaint and show-cause response were plainly incorrect or irrelevant. While Moales did not admit AI use, the court cited Strong v. Rushmore Loan Mgmt. Servs., 2025 WL 100904 (D. Neb.) and Mata v. Avianca to contextualize its concern. Hallucination Details: The filings cited Ernst & Ernst v. Hochfelder, 425 U.S. 185 (1976), and S.E.C. v. W.J. Howey Co., 328 U.S. 293 (1946) as supporting the existence of a federal common law fiduciary duty—an inaccurate legal proposition. The court characterized such misuses as “the norm rather than the exception” in Moales’s submissions. It stopped short of identifying all misused authorities but made clear that the inaccuracies were pervasive. Ruling/Sanction: The complaint was dismissed for lack of subject matter jurisdiction under Rule 12(h)(3). Moales was permitted to file an amended complaint by May 28, 2025, but was warned that future filings must be factually and legally accurate. The court declined to reach the venue issue or impose immediate sanctions but warned Moales that misrepresentation of law may violate Rule 11. Key Judicial Reasoning: The court found no basis for federal question jurisdiction and rejected Moales’s reliance on the Declaratory Judgment Act, constructive trust theories, and a nonexistent “federal common law of securities.” It also held that Moales failed to plausibly allege the amount in controversy necessary for diversity jurisdiction. |
||||||||
Benjamin v. Costco Wholesale Corp | E.D.N.Y. (USA) | 24 April 2025 | Lawyer | ChatOn | Five fabricated case citations, and quotations | Monetary sanction; public reprimand; order to serve client with decision; no disciplinary referral due to candor and remediation | 1000 USD | |
AI Use: Counsel used ChatOn to rewrite a reply brief with case law, under time pressure, without verifying the outputs. The five cases did not exist; citations were entirely fictional. Counsel later admitted this in a sworn declaration and at hearing, describing her actions as a lapse caused by workload and inexperience with AI. Hallucination Details: Fabricated cases included:
None of these cases matched any legal source. Counsel filed them as part of a sworn statement under penalty of perjury. Ruling/Sanction: The court imposed a $1,000 sanction payable to the Clerk and ordered counsel to serve the order on her client and file proof of service. The court acknowledged her sincere remorse and remedial CLE activity, but emphasized the seriousness of submitting hallucinated cases under oath. Sanctions were tailored for deterrence, not punishment. Key Judicial Reasoning: Quoting Park v. Kim and Mata v. Avianca, the court held that submitting legal claims based on nonexistent authorities without checking them constitutes subjective bad faith. Signing a sworn filing without knowledge of its truth is independently sanctionable. Time pressure is not a defense. Lawyers cannot outsource core duties to generative AI and disclaim responsibility for the results. |
||||||||
Coomer v. My Pillow, Inc. | D. Colorado (USA) | 23 April 2025 | Lawyer | Unidentified | Nearly thirty defective citations | Order to Show Cause re Sanctions + Potential Referral for Professional Discipline | — | |
Source: Volokh |
||||||||
Nichols v. Walmart | S.D. Georgia (USA) | 23 April 2025 | Pro Se Litigant | Implied | Multiple fictitious legal citations | Case dismissed for lack of subject matter jurisdiction and as a Rule 11 sanction for bad-faith submission of fabricated legal authorities | — | |
AI Use: Plaintiff submitted a motion to disqualify opposing counsel that cited multiple non-existent cases. She offered no clarification about how the citations were obtained or whether she had attempted to verify them. The court noted this failure and declined to excuse the misconduct, though it stopped short of attributing it directly to AI tools. Hallucination Details: The court reviewed Plaintiff’s motion and found that some of the cited cases did not exist. Despite being ordered to show cause, Plaintiff responded only with general statements about her good faith and complaints about perceived procedural unfairness, without addressing the origin or verification of the fake cases. Ruling/Sanction: The court dismissed the case for lack of subject matter jurisdiction and independently dismissed it as a sanction for bad-faith litigation under Rule 11. It found Plaintiff’s conduct—submitting fictitious legal authorities and refusing to take responsibility for them—warranted dismissal, even if monetary sanctions were not appropriate. The court cited Mata v. Avianca, Morgan v. Community Against Violence, and O’Brien v. Flick as relevant precedents affirming the sanctionability of hallucinated case law. Key Judicial Reasoning: Judge Hall held that Plaintiff’s conduct went beyond excusable error. Her submission of fabricated cases, refusal to explain their origin, and attempts to shift blame to perceived procedural grievances demonstrated bad faith. The court concluded that dismissal—though duplicative of the jurisdictional ground—was warranted as a standalone sanction to deter future abuse by similarly situated litigants. |
||||||||
Brown v. Patel et al. | S.D. Texas (USA) | 22 April 2025 | Pro Se Litigant | Unidentified | 5 non-existent cases and misrepresentation of three others | Warning | — | |
Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences. |
||||||||
Ferris v. Amazon.com Services | N.D. Mississippi (USA) | 16 April 2025 | Pro Se Litigant | ChatGPT | 7 fictitious cases | Plaintiff ordered to pay Defendant’s reasonable costs related to addressing the fabricated citations | — | |
AI Use: Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications. Hallucination Details: The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff’s intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon’s motion to dismiss. Ruling/Sanction: The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order. Key Judicial Reasoning: Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka (“The lie made into the rule of the world”), the court lamented the rise of “a post-truth world” and framed Ferris as an “avatar” of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11. |
||||||||
Sims v. Souily-Lefave | D. Nevada (USA) | 15 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
Bevins v. Colgate-Palmolive Co. | E.D. Pa. (USA) | 10 April 2025 | Lawyer | Unidentified | 2 fake case citations and misstatements | Striking of Counsel’s Appearance + Referral to Bar Authorities + Client Notification Order | — | |
AI Use: Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns but Counsel neither admitted nor explained the citations satisfactorily. Failure to comply with a standing AI order aggravated sanctions. Hallucination Details: Two fake cases cited. Citation numbers and Westlaw references pointed to irrelevant or unrelated cases. No affidavit or real case documents were produced when ordered. Ruling/Sanction: Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar. Counsel was required to inform his client, Bevins, of the sanctions and the need for new counsel if re-filing. |
||||||||
Bischoff v. South Carolina Department of Education | Admin Law Court, S.C. (USA) | 10 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10(Supp. 2024)." |
||||||||
Daniel Jaiyong An v. Archblock, Inc. | Delaware Chancery (USA) | 3 April 2025 | Pro Se Litigant | Implied | At least three fabricated or misattributed case citations and multiple false quotations | Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions | — | |
AI Use: The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification. Hallucination Details: Examples included:
The court verified via Westlaw that some phrases returned only the petitioner’s motion as a result. Ruling/Sanction: Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI. Key Judicial Reasoning: The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity. |
||||||||
Dehghani v. Castro | D. New Mexico (USA) | 2 April 2025 | Lawyer | Unidentified | At least 6 entirely fictitious case citations in a habeas corpus filing | Monetary sanction; required CLE on legal ethics and AI; mandatory self-reporting to NM and TX state bars; report of subcontractor to NY state bar; required notification to LAWCLERK | 1500 USD | |
AI Use: Counsel hired a freelance attorney through LAWCLERK to prepare a filing. He made minimal edits and admitted not verifying any of the case law before signing. The filing included multiple fabricated cases and misquoted others. The court concluded these were AI hallucinations, likely produced by ChatGPT or similar. Hallucination Details: Examples of non-existent cases cited include: Moncada v. Ruiz, Vega-Mendoza v. Homeland Security, Morales v. ICE Field Office Director, Meza v. United States Attorney General, Hernandez v. Sessions, and Ramirez v. DHS. All were either entirely fictitious or misquoted real decisions. Ruling/Sanction: The Court sanctioned Counsel by:
Key Judicial Reasoning: The court emphasized that counsel’s failure to verify cited cases, coupled with blind reliance on subcontracted work, constituted a violation of Rule 11(b)(2). The court analogized to other AI-sanctions cases. While the fine was modest, the court imposed significant procedural obligations to ensure deterrence. |
||||||||
Sanders v. United States | Court of Federal Claims (USA) | 31 March 2025 | Pro Se Litigant | Implied | 5 fabricated citations | Warning | — | |
AI Use: The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred. Hallucination Details: Plaintiff cited:
Ruling/Sanction: The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions. Key Judicial Reasoning: Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law. |
||||||||
McKeown v. Paycom Payroll LLC | W.D. Oklahoma (USA) | 31 March 2025 | Pro Se Litigant | Implied | Several fake citations | Submission stricken; warning | — | |
AI Use: Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent. Hallucination Details: Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction. Ruling/Sanction: The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal. |
||||||||
Kruglyak v. Home Depot U.S.A., Inc. | W.D. Virginia (USA) | 25 March 2025 | Pro Se Litigant | ChatGPT | Multiple fictitious case citations and misrepresentations | No monetary sanctions; Warning | — | |
AI Use: Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources. Hallucination Details: The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones. Ruling/Sanction: Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur. Key Judicial Reasoning: The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action. |
||||||||
Buckner v. Hilton Global | W.D. Kentucky (USA) | 21 March 2025 | Pro Se Litigant | Implied | At least 2 fake citations | Warning | — | |
Stevens v. BJC Health System | Missouri CA (USA) | 18 March 2025 | Pro Se Litigant | Implied | 6 fabricated citations | Warning | — | |
Alkuda v. McDonald Hopkins Co., L.P.A. | N.D. Ohio (USA) | 18 March 2025 | Pro Se Litigant | Implied | Fake Citations | Warning | — | |
Arnaoudoff v. Tivity Health Incorporated | D. Arizona (USA) | 11 March 2025 | Pro Se Litigant | ChatGPT | Fake citations | Court ignored fake citations and granted motion to correct the record | — | |
Sheets v. Presseller | M.D. Florida (USA) | 11 March 2025 | Pro Se Litigant | Implied | Allegations by the other party that brief was AI-generated | Warning | — | |
210S LLC v. Di Wu | Hawaii (USA) | 11 March 2025 | Pro Se Litigant | Implied | Fictitious citation and misrepresentation | Warning | — | |
Nguyen v. Wheeler | E.D. Arkansas (USA) | 3 March 2025 | Lawyer | Implied | 4 fictitious case citations, with fabricated quotes | Monetary sanction | 1000 USD | |
AI Use: Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation. Hallucination Details: Fictitious citations included:
None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated. Outcome / Sanction: The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions. |
||||||||
Bunce v. Visual Technology Innovations, Inc. | E.D. Pa. (USA) | 27 February 2025 | Lawyer | ChatGPT | 2 fake case citations + citation of vacated and inapposite cases. | Monetary Sanction + Mandatory CLE on AI and Legal Ethics | 2500 USD | |
AI Use: Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal), without verifying the cases or researching the AI tool’s reliability. Hallucination Details: 2 fake cases:
Misused cases:
Ruling/Sanction: The Court sanctioned Counsel $2,500 payable to the court and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession. Key Judicial Reasoning: Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty. Novelty of AI tools is not a defense. |
||||||||
Merz v. Kalama | W.D. Washington (USA) | 25 February 2025 | Pro Se Litigant | Unidentified | Wrong legal advice | — | — | |
Wadsworth v. Walmart (Morgan & Morgan) | D. Wyoming (USA) | 24 February 2025 | Lawyer | Internal tool (ChatGPT) | 8 of 9 Fake/Flawed Cases | $3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. | 5000 USD | |
AI Use: Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose. Hallucination Details: Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar". Ruling/Sanction: After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: $3,000 fine on the drafter and revocation of his pro hac vice admission; $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing. Key Judicial Reasoning: The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations". |
||||||||
Saxena v. Martinez-Hernandez et al. | D. Nev. (USA) | 18 February 2025 | Pro Se Litigant | Implied | At least two fabricated citations. | Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor | — | |
AI Use: The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them. Hallucination Details:
The court found no plausible explanation for these citations other than AI generation or outright fabrication. Ruling/Sanction: The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith. Key Judicial Reasoning: Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief". |
||||||||
Fora Financial Asset Securitization v. Teona Ostrov Public Relations | NY SC (USA) | 24 January 2025 | Lawyer | Implied | Several fake citations, 1 fake quotation | No sanction imposed; court struck the offending citations and warned that repeated occurrences may result in sanctions | — | |
AI Use: The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions. Ruling/Sanction: No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow. |
||||||||
Strike 3 Holdings LLC v. Doe | C.D. California (USA) | 22 January 2025 | Lawyer | Unidentified | 3 entirely fictitious cases | Motion denied | — | |
Key Judicial Reasoning: Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law. |
||||||||
Arajuo v. Wedelstadt et al | E.D. Wisconsin (USA) | 22 January 2025 | Lawyer | Unidentified | Multiple non-existent cases | Warning | — | |
AI Use: Counsel admitted using a “new legal research medium”, which appears to have been a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities. Hallucination Details: The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place. Ruling/Sanction: Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated. Key Judicial Reasoning: The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable. |
||||||||
United States v. Hayes | E.D. Cal. (USA) | 17 January 2025 | Federal Defender | Unidentified | One fake case citation with fabricated quotation | Formal Sanction Imposed + Written Reprimand | — | |
AI Use: Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible. Hallucination Details: Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere. Ruling/Sanction: The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted. Key Judicial Reasoning: The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record. |
||||||||
Source: Volokh |
||||||||
Strong v. Rushmore Loan Management Services | D. Nebraska (USA) | 15 January 2025 | Pro Se Litigant | Implied | “highly suspicious” signs of generative AI use | Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions | — | |
Kohls v. Ellison | D. Minnesota (USA) | 10 January 2025 | Expert | GPT-4o | Fake Academic Citations | Expert Declaration Excluded | — | |
AI Use: Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections. Hallucination Details: The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations. Ruling/Sanction: Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable. Key Judicial Reasoning: The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted. |
||||||||
Source: Volokh | ||||||||
O’Brien v. Flick and Chamberlain | S.D. Florida (USA) | 10 January 2025 | Pro Se Litigant | Implied | 2 fabricated citations | Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations | — | |
AI Use: Although O'Brien denied deliberate fabrication and described the inclusion of fake citations as a "minor clerical error" or "mix-up," the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O'Brien "generated his Reply with the assistance of a generative artificial intelligence program."
Ruling/Sanction: The court dismissed the case with prejudice on dual grounds:
Key Judicial Reasoning: Judge Melissa Damian found that the fabricated citations and O'Brien's refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O'Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court's inherent powers. |
||||||||
Al-Hamim v. Star Hearthstone | Colorado (USA) | 26 December 2024 | Pro Se Litigant | Unidentified | 8 Fake Cases | No Sanction (due to pro se, contrition, etc.); Warning of future sanctions. | — | |
AI Use: Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.
Hallucination Details: The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.
Ruling/Sanction: Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.
Key Judicial Reasoning: Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status. |
||||||||
Letts v. Avidien Technologies | E.D. N. Carolina (USA) | 16 December 2024 | Pro Se Litigant | Implied | Multiple non-existent or misattributed court decisions | Warning | — | |
Mojtabavi v. Blinken | C.D. California (USA) | 12 December 2024 | Pro Se Litigant | Unidentified | Multiple fake cases | Case dismissed with prejudice | — | |
Carlos E. Gutierrez v. In Re Noemi D. Gutierrez | Fl. 3rd District CA (USA) | 4 December 2024 | Pro Se Litigant | Unidentified | Numerous fabricated Florida case citations with invented quotations | Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature | — | |
AI Use: The court did not specify how the hallucinated material was generated but described the bulk of appellant's cited cases as "phantom case law."
Hallucination Details: The court identified that the "Augmented Appendix Sections" attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.
Ruling/Sanction: Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney. Clerk directed to reject noncompliant future filings.
Key Judicial Reasoning: The Court held that Gutierrez's submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations. |
||||||||
Rubio v. District of Columbia DHS | D.C. DC (USA) | 3 December 2024 | Pro Se Litigant | Unidentified | At least four fabricated case citations | Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties | — | |
AI Use: Plaintiff's proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., "Here are some relevant legal precedents..."). The court stated it "bears some of the hallmarks of an AI response" and noted that the citations appeared to have been "invented by artificial intelligence ('AI')."
Hallucination Details: The court could not locate the following cited cases:
These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.
Ruling/Sanction: The court denied Plaintiff's motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.
Key Judicial Reasoning: The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice. |
||||||||
Gauthier v. Goodyear Tire & Rubber Co. | E.D. Tex. (USA) | 25 November 2024 | Lawyer | Claude | Two nonexistent cases + multiple fabricated quotations | Monetary fine + Mandatory AI-related CLE Course + Disclosure to Client | 2000 USD | |
AI Use: Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.
Hallucination Details: Cited two completely nonexistent cases. Also fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, and Burton v. Freescale, among others. Several "quotes" did not appear anywhere in the cited opinions.
Ruling/Sanction: The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.
Key Judicial Reasoning: The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct. |
||||||||
Leslie v. IQ Data International | N.D. Georgia (USA) | 24 November 2024 | Pro Se Litigant | Implied | Citation to nonexistent authorities | Underlying action dismissed with prejudice, but no monetary sanction | — | |
Berry v. Stewart | D. Kansas (USA) | 14 November 2024 | Lawyer | Unidentified | Fabricated citations and an incorrect reference to the case's evidence | At hearing, Counsel pledged to reimburse the other side and his client | — | |
In the November 2024 Show Cause Order, Judge Robinson noted that: "First, the briefing does not cite the forum-selection clause from the contract between the parties; instead, it cites and quotes a forum-selection clause that appears nowhere in the papers submitted by the parties. Second, Defendant’s reply brief includes a citation, Hogan v. Allstate Insurance Co., No. 19-CV-00262-JPM, 2020 WL 1882334 (D. Kan. Apr. 15, 2020), in which the court purportedly “transferred a case to the Southern District of Texas because the majority of the witnesses were located in Texas. The court found that the burden on the witnesses outweighed the convenience of litigating the case in Kansas.” As far as the Court can tell, this case does not exist. The Westlaw database number pulls up no case; the Court has found no case in CM/ECF between the parties “Hogan” and “Allstate Insurance Co.” Moreover, docket numbers in this district have at least four digits—not three—after the case-type designation, and there is no judge in this district with the initials “JPM.”" During the show cause hearing (Transcript), Counsel apologised and pledged to reimburse the other side's costs, as well as his client's. |
||||||||
Vargas v. Salazar | S.D. Texas (USA) | 1 November 2024 | Pro Se Litigant | Implied | Fake citations | Plaintiff ordered to refile submissions without fake citations | — | |
Churchill Funding v. 732 Indiana | SC Cal (USA) | 31 October 2024 | Lawyer | Implied | Two fabricated citations | Order to show cause | — | |
Source: Volokh | ||||||||
Mortazavi v. Booz Allen Hamilton, Inc. | C.D. Cal. (USA) | 30 October 2024 | Lawyer | Unidentified | 1 fake case + fabricated factual allegations | $2,500 Monetary Sanction + Mandatory Disclosure to California State Bar | 2500 USD | |
AI Use: Plaintiff's counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.
Hallucination Details: Cited a fabricated case (the specific case name is not listed in the ruling). Included fabricated quotations from the complaint, suggesting nonexistent factual allegations.
Ruling/Sanction: The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations.
Key Judicial Reasoning: Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions. |
||||||||
Matter of Weber | NY County Court (USA) | 10 October 2024 | Expert | MS Copilot | Unverifiable AI Calculation Process | AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established. | — | |
AI Use: In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report.
Hallucination Details: The issue wasn't fabricated citations, but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed using AI tools was generally accepted in his field but offered no proof.
Ruling/Sanction: The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible.
Key Judicial Reasoning: The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable. |
||||||||
Jones v. Simploy | Missouri CA (USA) | 24 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court. We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54. We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity." |
||||||||
Martin v. Hawaii | D. Hawaii (USA) | 20 September 2024 | Pro Se Litigant | Unidentified | Many fictitious citations | Warning, and Order to file further submissions with Declaration | — | |
Transamerica Life v. Williams | D. Arizona (USA) | 6 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
Rule v. Braiman | N.D. New York (USA) | 4 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
USA v. Michel | D.C. (USA) | 30 August 2024 | Lawyer | EyeLevel | Misattributed Song | Misattribution was irrelevant | — | |
As acknowledged by Counsel, he also used AI to generate parts of his pleadings. |
||||||||
Rasmussen v. Rasmussen | California (USA) | 23 August 2024 | Lawyer | Implied | 3 miscited cases and 4 non-existent ones | Lawyer ordered to show cause why she should not be referred to the bar | — | |
While the Court initially organised show cause proceedings leading to potential sanctions, the case was eventually settled. Nevertheless, the Court stated that it "intends to report Ms. Rasmussen’s use of mis-cited and nonexistent cases in the demurrer to the State Bar", unless she objected to "this tentative ruling". |
||||||||
N.E.W. Credit Union v. Mehlhorn | Wisconsin C.A. (USA) | 13 August 2024 | Pro Se Litigant | Implied | At least four fictitious cases | Warning | — | |
The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided. We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing" |
||||||||
Dukuray v. Experian Information Solutions, Inc. | S.D.N.Y. (USA) | 26 July 2024 | Pro Se Litigant | Unidentified | 3 fake case citations and fabricated case law descriptions | No sanction; Formal Warning Issued | — | |
AI Use: Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation.
Hallucination Details: Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues.
Ruling/Sanction: The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks.
Key Judicial Reasoning: Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties' and courts' resources. Plaintiff was formally warned, not excused. |
||||||||
Joe W. Byrd v. Woodland Springs HA | Texas CA (USA) | 25 July 2024 | Pro Se Litigant | Unidentified | Several garbled or misattributed case citations and vague legal references | No formal sanction | — | |
AI Use: The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: "We cannot tell from Byrd's brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations."
Ruling/Sanction: The court affirmed the trial court's judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies. |
||||||||
Iovino v. Michael Stapleton Associates, Ltd. | W.D. Virginia (USA) | 24 July 2024 | Lawyer | Claude + Westlaw / LexisNexis | 2 fake cases + fabricated quotations attributed to real cases | Show Cause Order re Potential Sanctions + Possible Bar Referral | — | |
AI Use: The court inferred the use of AI from the pattern of errors (fake cases and fabricated quotes) and opposing counsel's explicit accusation ("ChatGPT run amok"). Plaintiff's counsel did not deny it or clarify origins, leaving the inference unchallenged.
Hallucination Details: Two nonexistent cases cited, and fabricated quotations attributed to real cases:
Misreporting of the Menocal case citation to imply relevance.
Ruling/Sanction: The court issued a show cause order demanding explanation why sanctions and/or bar disciplinary referrals should not be imposed. Silent failure to contest the fabrication allegations worsened the finding. Following show cause proceedings, the court declined to sanction counsel.
Key Judicial Reasoning: The judge emphasized that AI use does not lessen the lawyer's duty to ensure accurate filings. Fabricated cases and misquotes are serious Rule 11 violations. Attorneys are responsible for vetting everything submitted to the court, regardless of source. Silence when fabrication is exposed constitutes further misconduct. |
||||||||
Anonymous v. NYC Department of Education | S.D.N.Y. (USA) | 18 July 2024 | Pro Se Litigant | Unidentified | Several nonexistent case citations and fabricated quotations | No sanction; Formal Warning Issued | — | |
AI Use: The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI.
Hallucination Details: Several fake citations identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious.
Ruling/Sanction: No sanctions imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency.
Key Judicial Reasoning: The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. It cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct. |
||||||||
Dowlah v. Professional Staff Congress | NY SC (USA) | 30 May 2024 | Pro Se Litigant | Unidentified | Several non-existent cases | Caution to plaintiff | — | |
Plumbers & Gasfitters Union v. Morris Plumbing | E.D. Wisconsin (USA) | 18 April 2024 | Lawyer | Implied | 1 fake citation | Warning | — | |
Grant v. City of Long Beach | 9th Cir. CA (USA) | 22 March 2024 | Lawyer | Unidentified | 2 Fake Cases, plus flawed summaries | Striking of Brief + Dismissal of Appeal | — | |
AI Use: The appellants' lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin but cited the systematic errors as consistent with known AI-generated hallucination patterns.
Hallucination Details: Two cited cases were completely nonexistent. Additionally, a dozen cited decisions were badly misrepresented, e.g., Hydrick v. Hunter and Wall v. County of Orange were cited for parent-child removal claims when they had nothing to do with such issues.
Ruling/Sanction: The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28-1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Rule 28(a)(8)(A) requirements for arguments with coherent citation support. |
||||||||
Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Pro Se Litigant | Google Bard | 3 fake cases | No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied | — | |
AI Use: Michael Cohen, former lawyer to Donald Trump who had since been disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.
Hallucination Details: Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.
Ruling/Sanction: Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.
Key Judicial Reasoning: The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification. |
||||||||
Martin v. Taylor County | N.D. Texas (USA) | 6 March 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | |
In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time." |
||||||||
Kruse v. Karlen | Missouri CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements | Dismissal of Appeal + Damages Awarded for Frivolous Appeal | 10000 USD | |
AI Use: Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission.
Hallucination Details: Out of twenty-four total case citations in Karlen's appellate brief:
Ruling/Sanction: The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.
Key Judicial Reasoning: The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers. |
||||||||
Smith v. Farwell | Massachusetts (USA) | 12 February 2024 | Lawyer | Unidentified | 3 fake cases | Monetary Fine (Supervising Lawyer) | 2000 USD | |
AI Use: In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used.
Hallucination Details: Judge Brian A. Davis noticed citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.
Ruling/Sanction: After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.
Key Judicial Reasoning: The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys. |
||||||||
Park v. Kim | 2nd Cir. CA (USA) | 30 January 2024 | Lawyer | ChatGPT | One fake case citation in appellate briefing | Referral to Grievance Panel + Order to Disclose Misconduct to Client | — | |
AI Use: Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case (Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014)) in the reply brief, never verifying its existence.
Hallucination Details: Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014). When asked to produce the case, Counsel admitted it did not exist, blaming reliance on ChatGPT.
Ruling/Sanction: The Court referred Counsel to the Second Circuit's Grievance Panel for further investigation and possible discipline. Counsel was also ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.
Key Judicial Reasoning: The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system. |
||||||||
Matter of Samuel | NY County Court (USA) | 11 January 2024 | Lawyer | Unidentified | Five flawed citations | Striking of Filing + Sanctions Hearing Scheduled | — | |
AI Use: Osborne's attorney, under time pressure, submitted reply papers heavily relying on a website or tool that used generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error.
Hallucination Details: Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco.
Ruling/Sanction: The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York's Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.
Key Judicial Reasoning: The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession's reputation. |
||||||||
Zachariah Crabill Disciplinary Case | Colorado SC (USA) | 21 November 2023 | Lawyer | ChatGPT | Fake/Incorrect Cases; Lied to Court | 90-day Actual Suspension (+ stayed term, probation) | — | |
AI Use: Attorney Zachariah C. Crabill, relatively new to civil practice, used ChatGPT to research case law for a motion to set aside judgment, a task he was unfamiliar with and felt pressured to complete quickly.
Hallucination Details: Crabill included incorrect or fictitious case citations provided by ChatGPT in the motion without reading or verifying them. He realized the errors ("garbage" cases, per his texts) before the hearing but did not alert the court or withdraw the motion.
Ruling/Sanction: When questioned by the judge about inaccuracies at the hearing, Crabill falsely blamed a legal intern. He later filed an affidavit admitting his use of ChatGPT and his dishonesty, stating he "panicked" and sought to avoid embarrassment. He stipulated to violating professional duties of competence, diligence, and candor/truthfulness to the court. He received a 366-day suspension, with all but 90 days stayed upon successful completion of a two-year probationary period. This was noted as the first Colorado disciplinary action involving AI misuse.
Key Judicial Reasoning: The disciplinary ruling focused on the combination of negligence (failure to verify, violating competence and diligence) and intentional misconduct (lying to the court, violating candor). While mitigating factors (personal challenges, lack of prior discipline) were noted in the stipulated agreement, the dishonesty significantly aggravated the offense. |
||||||||
Mescall v. Renaissance at Antiquity | W.D. N. Carolina (USA) | 13 November 2023 | Pro Se Litigant | Unidentified | Unspecified concerns about AI-generated inaccuracies | No sanction; Warning and Leave to Amend Granted | — | |
AI Use: Defendants alleged that portions of Plaintiff's response to a motion to dismiss were AI-generated.
Hallucination Details: No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags.
Ruling/Sanction: Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal.
Key Judicial Reasoning: The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings. |
||||||||
Morgan v. Community Against Violence | New Mexico (USA) | 23 October 2023 | Pro Se Litigant | Unidentified | Fake Case Citations | Partial Dismissal + Judicial Warning | — | |
AI Use: Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns.
Hallucination Details: Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume or reporter details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review.
Ruling/Sanction: The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and noted that this was only the second federal case addressing AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice.
Key Judicial Reasoning: The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance. |
||||||||
Source: Volokh | ||||||||
Thomas v. Pangburn | S.D. Ga. (USA) | 6 October 2023 | Pro Se Litigant | Unidentified | At least ten fabricated case citations | Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke | — | |
AI Use: Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability," without clarifying the sources, suggesting reliance on AI-generated content.
Hallucination Details: Ten fake case citations were systematically inserted across filings. The fabricated authorities mimicked proper citation format but were unverifiable in any recognized database. The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy.
Ruling/Sanction: The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications.
Key Judicial Reasoning: Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances. |
||||||||
Ruggierlo et al. v. Lancaster | E.D. Mich. (USA) | 11 September 2023 | Pro Se Litigant | Unidentified | At least three fabricated case citations | No sanction; Formal Judicial Warning | — | |
AI Use: Lancaster, filing objections to a magistrate judge's Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct.
Hallucination Details: Fabricated or mutant citations, including:
The Court highlighted that the majority of the cited cases in Lancaster's objections were fake.
Ruling/Sanction: No immediate sanction imposed due to pro se status and lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or any other court to which the case might be remanded.
Key Judicial Reasoning: The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants. |
||||||||
Ex Parte Lee | Texas CA (USA) | 19 July 2023 | Lawyer | Unidentified | 3 fake case citations | No sanction; Judicial Warning; Affirmance of Trial Court Decision | — | |
AI Use: The Court noted that the appellant's argument section appeared to have been drafted by AI based on telltale errors (nonexistent cases, jump-cites into wrong jurisdictions, illogical structure). A recent Texas CLE on AI usage was cited by the Court to explain the pattern.
Hallucination Details: Three fake cases cited. The brief also contained no citations to the record and was devoid of clear argumentation on the presented issues.
Ruling/Sanction: The Court declined to issue a show cause order or to refer counsel to the State Bar of Texas, despite noting similarities to Mata v. Avianca. However, it affirmed the trial court's denial of habeas relief due to inadequate briefing, and explicitly warned about the dangers of using AI-generated content in legal submissions without human verification.
Key Judicial Reasoning: The Court held that even if AI contributed to the preparation of filings, attorneys must ensure accuracy, logical structure, and compliance with citation rules. Failure to meet these standards precludes appellate review under Tex. R. App. P. 38.1(i). Courts are not obligated to "make an appellant's arguments for him," especially where the brief's defects are gross. |
||||||||
Mata v. Avianca, Inc. | S.D.N.Y. (USA) | 22 June 2023 | Lawyer | ChatGPT | 6+ Fake Cases, Quotes, Citations; Fake Opinions | Monetary Fine (Lawyers & Firm); Letters to Client/Judges | 5000 USD | |
AI Use: Counsel from Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against Avianca airlines, citing difficulty accessing relevant federal precedent through their limited research subscription.
Hallucination Details: The attorneys' submission included at least six completely non-existent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were also generated by ChatGPT and contained further bogus citations.
Ruling/Sanction: Judge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name was falsely used on the fabricated opinions about the situation.
Key Judicial Reasoning: Judge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correcting the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance. |
||||||||
Scott v. Federal National Mortgage Association | Maine County (USA) | 14 June 2023 | Pro Se Litigant | Unidentified | Several fabricated case citations and fake quotations | Dismissal of Complaint + Sanctions (Attorney's Fees and Costs) | — | |
AI Use: Mr. Scott, opposing a motion to dismiss, filed a brief containing multiple fabricated case citations with plausible formatting but nonexistent underlying cases. The Court recognized the pattern as typical of AI hallucinations. Scott did not admit AI use, but the inference was clear.
Hallucination Details: Several case names, reporter citations, and quotations provided were fake; no match could be found in legal databases. Quotations attached to these cases were invented. Citations appeared superficially valid (correct format) but were unverifiable.
Ruling/Sanction: The complaint was dismissed in full and sanctions were imposed: Scott was ordered to pay the defendant's reasonable attorney's fees, costs, and expenses associated with the motion to dismiss and the motion for sanctions. The Court required an affidavit from Fannie Mae detailing the fees, after which Scott could contest their reasonableness but not the sanction itself.
Key Judicial Reasoning: The Court emphasized that using AI tools does not relieve any litigant of their duty to verify legal authorities. Citing or quoting nonexistent cases is a violation of Maine Rule of Civil Procedure 11. Even pro se litigants cannot "blindly rely" on AI outputs and are expected to exercise reasonable diligence. The judgment was framed explicitly to deter future abuse of AI-generated filings. |