This database tracks legal decisions1 in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

1 I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed. This is a judgment call on my part.
While seeking to be exhaustive (673 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.2
Examples of media coverage include:
- M. Hiltzik, AI 'hallucinations' are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "He generates AI pleadings, and has catalogued 160 that have 'hallucinated' since 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.3 (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date ▼ | Party Using AI | AI Tool ⓘ | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Willis v. U.S. Bank National Association as Trustee, Igloo Series Trust | N.D. Texas, Dallas Division (USA) | 28 April 2025 | Pro Se Litigant | Implied | Fabricated citation(s) | Warning | — | — | |
| Source: Jesse Schaefer |
|||||||||
| Benjamin v. Costco Wholesale Corp | E.D.N.Y. (USA) | 24 April 2025 | Lawyer | ChatOn | Five fabricated case citations and quotations | Monetary sanction; public reprimand; order to serve client with decision; no disciplinary referral due to candor and remediation | 1000 USD | — | |
**AI Use:** Counsel used ChatOn to rewrite a reply brief with case law, under time pressure, without verifying the outputs. The five cases did not exist; citations were entirely fictional. Counsel later admitted this in a sworn declaration and at hearing, describing her actions as a lapse caused by workload and inexperience with AI. **Hallucination Details:** Fabricated cases included:
None of these cases matched any legal source. Counsel filed them as part of a sworn statement under penalty of perjury. **Ruling/Sanction:** The court imposed a $1,000 sanction payable to the Clerk and ordered counsel to serve the order on her client and file proof of service. The court acknowledged her sincere remorse and remedial CLE activity, but emphasized the seriousness of submitting hallucinated cases under oath. Sanctions were tailored for deterrence, not punishment. **Key Judicial Reasoning:** Quoting Park v. Kim and Mata v. Avianca, the court held that submitting legal claims based on nonexistent authorities without checking them constitutes subjective bad faith. Signing a sworn filing without knowledge of its truth is independently sanctionable. Time pressure is not a defense. Lawyers cannot outsource core duties to generative AI and disclaim responsibility for the results. |
|||||||||
| Nichols v. Walmart | S.D. Georgia (USA) | 23 April 2025 | Pro Se Litigant | Implied | Multiple fictitious legal citations | Case dismissed for lack of subject matter jurisdiction and as a Rule 11 sanction for bad-faith submission of fabricated legal authorities | — | — | |
**AI Use:** Plaintiff submitted a motion to disqualify opposing counsel that cited multiple non-existent cases. She offered no clarification about how the citations were obtained or whether she had attempted to verify them. The court noted this failure and declined to excuse the misconduct, though it stopped short of attributing it directly to AI tools. **Hallucination Details:** The court reviewed Plaintiff’s motion and found that some of the cited cases did not exist. Despite being ordered to show cause, Plaintiff responded only with general statements about her good faith and complaints about perceived procedural unfairness, without addressing the origin or verification of the fake cases. **Ruling/Sanction:** The court dismissed the case for lack of subject matter jurisdiction and independently dismissed it as a sanction for bad-faith litigation under Rule 11. It found Plaintiff’s conduct—submitting fictitious legal authorities and refusing to take responsibility for them—warranted dismissal, even if monetary sanctions were not appropriate. The court cited Mata v. Avianca, Morgan v. Community Against Violence, and O’Brien v. Flick as relevant precedents affirming the sanctionability of hallucinated case law. **Key Judicial Reasoning:** Judge Hall held that Plaintiff’s conduct went beyond excusable error. Her submission of fabricated cases, refusal to explain their origin, and attempts to shift blame to perceived procedural grievances demonstrated bad faith. The court concluded that dismissal—though duplicative of the jurisdictional ground—was warranted as a standalone sanction to deter future abuse by similarly situated litigants. |
|||||||||
| Brown v. Patel et al. | S.D. Texas (USA) | 22 April 2025 | Pro Se Litigant | Unidentified | Fabricated Case Law (1); Misrepresented Case Law (2) | Warning | — | — | |
| Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences. |
|||||||||
| Ferris v. Amazon.com Services | N.D. Mississippi (USA) | 16 April 2025 | Pro Se Litigant | ChatGPT | 7 fictitious cases | Plaintiff ordered to pay Defendant’s reasonable costs related to addressing the fabricated citations | — | — | |
**AI Use:** Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications. **Hallucination Details:** The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff’s intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon’s motion to dismiss. **Ruling/Sanction:** The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order. **Key Judicial Reasoning:** Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka (“The lie made into the rule of the world”), the court lamented the rise of “a post-truth world” and framed Ferris as an “avatar” of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11. |
|||||||||
| Sims v. Souily-Lefave | D. Nevada (USA) | 15 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Warning | — | — | |
| Crystal Truong, et al. v. Flint Hills Resources, LLC, et al. | S.D. Texas (USA) | 14 April 2025 | Lawyer | ChatGPT | Fabricated citation(s), misrepresented precedents | Show cause order; CLE commitment | — | — | |
| After explaining what happened (document), counsel opted to non-suit all remaining claims, which means that the court never ruled on the show cause proceedings. |
|||||||||
| Bevins v. Colgate-Palmolive Co. | E.D. Pa. (USA) | 10 April 2025 | Lawyer | Unidentified | Fabricated Case Law (2) | Striking of Counsel’s Appearance + Referral to Bar Authorities + Client Notification Order | — | — | |
**AI Use:** Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns, but Counsel neither admitted nor explained the citations satisfactorily. Failure to comply with a standing AI order aggravated sanctions. **Hallucination Details:** Two fake cases cited. Citation numbers and Westlaw references pointed to irrelevant or unrelated cases. No affidavit or real case documents were produced when ordered. **Ruling/Sanction:** Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar. Counsel was required to inform his client, Bevins, of the sanctions and the need for new counsel if re-filing. |
|||||||||
| Bischoff v. South Carolina Department of Education | Admin Law Court, S.C. (USA) | 10 April 2025 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
| The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10 (Supp. 2024)." |
|||||||||
| Daniel Jaiyong An v. Archblock, Inc. | Delaware Chancery (USA) | 3 April 2025 | Pro Se Litigant | Implied | False Quotes Case Law (2); Misrepresented Case Law (2) | Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions | — | — | |
**AI Use:** The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification. **Hallucination Details:** Examples included:
Court verified via Westlaw that some phrases returned only the petitioner’s motion as a result. **Ruling/Sanction:** Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI. **Key Judicial Reasoning:** The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity. |
|||||||||
| Dehghani v. Castro | D. New Mexico (USA) | 2 April 2025 | Lawyer | Unidentified | Fabricated Case Law (6); False Quotes Case Law (1) | Monetary sanction; required CLE on legal ethics and AI; mandatory self-reporting to NM and TX state bars; report of subcontractor to NY state bar; required notification to LAWCLERK | 1500 USD | — | |
**AI Use:** Counsel hired a freelance attorney through LAWCLERK to prepare a filing. He made minimal edits and admitted not verifying any of the case law before signing. The filing included multiple fabricated cases and misquoted others. The court concluded these were AI hallucinations, likely produced by ChatGPT or similar. **Hallucination Details:** Examples of non-existent cases cited include: Moncada v. Ruiz, Vega-Mendoza v. Homeland Security, Morales v. ICE Field Office Director, Meza v. United States Attorney General, Hernandez v. Sessions, and Ramirez v. DHS. All were either entirely fictitious or misquoted real decisions. **Ruling/Sanction:** The Court sanctioned Counsel by:
**Key Judicial Reasoning:** The court emphasized that counsel’s failure to verify cited cases, coupled with blind reliance on subcontracted work, constituted a violation of Rule 11(b)(2). The court analogized to other AI-sanctions cases. While the fine was modest, the court imposed significant procedural obligations to ensure deterrence. |
|||||||||
| D'Angelo v. Vaught | Illinois (USA) | 2 April 2025 | Lawyer | Archie (Smokeball) | Fabricated citation | Monetary sanction | 2000 USD | — | |
| Boggess v. Chamness | E.D. Texas (USA) | 1 April 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Argument ignored | — | — | |
| Source: Jesse Schaefer |
|||||||||
| Sanders v. United States | Fed. Claims Court (USA) | 31 March 2025 | Pro Se Litigant | Implied | Fabricated Case Law (4); Misrepresented Case Law (1), Legal Norm (1) | Warning | — | — | |
**AI Use:** The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred. **Hallucination Details:** Plaintiff cited:
**Ruling/Sanction:** The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions. **Key Judicial Reasoning:** Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law. |
|||||||||
| McKeown v. Paycom Payroll LLC | W.D. Oklahoma (USA) | 31 March 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2) | Submission stricken; warning | — | — | |
**AI Use:** Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent. **Hallucination Details:** Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction. **Ruling/Sanction:** The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal. |
|||||||||
| Kruglyak v. Home Depot U.S.A., Inc. | W.D. Virginia (USA) | 25 March 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (1); Misrepresented Case Law (1) | No monetary sanctions; warning | — | — | |
**AI Use:** Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources. **Hallucination Details:** The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones. **Ruling/Sanction:** Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur. **Key Judicial Reasoning:** The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action. |
|||||||||
| Francois v. Medina | Supreme Court, NY (USA) | 24 March 2025 | Lawyer | Unidentified | Fabricated citations | Warning | — | — | |
| Buckner v. Hilton Global | W.D. Kentucky (USA) | 21 March 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1), Exhibits or Submissions (1) | Warning | — | — | |
| In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal." |
|||||||||
| Loyer v. Wayne County Michigan | E.D. Michigan (USA) | 21 March 2025 | Lawyer | Unidentified | Fabricated Case Law (1); Misrepresented Exhibits or Submissions (1) | Plaintiff's counsel ordered to attend an ethics seminar | — | — | |
| Source: Jesse Schaefer |
|||||||||
| Stevens v. BJC Health System | Missouri CA (USA) | 18 March 2025 | Pro Se Litigant | Implied | 6 fabricated citations | Warning | — | — | |
| Alkuda v. McDonald Hopkins Co., L.P.A. | N.D. Ohio (USA) | 18 March 2025 | Pro Se Litigant | Implied | Fake Citations | Warning | — | — | |
| Mark Lillard v. Offit Kurman, P.A. | SC Delaware (USA) | 12 March 2025 | Pro Se Litigant | Unidentified | False Quotes Case Law (2); Misrepresented Case Law (2) | AI-use certification required for future filings | — | — | |
| Arnaoudoff v. Tivity Health Incorporated | D. Arizona (USA) | 11 March 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (3); Misrepresented Case Law (1) | Court ignored fake citations and granted motion to correct the record | — | — | |
| Sheets v. Presseller | M.D. Florida (USA) | 11 March 2025 | Pro Se Litigant | Implied | Allegations by the other party that brief was AI-generated | Warning | — | — | |
| 210S LLC v. Di Wu | Hawaii (USA) | 11 March 2025 | Pro Se Litigant | Implied | Fictitious citation and misrepresentation | Warning | — | — | |
| Nguyen v. Wheeler | E.D. Arkansas (USA) | 3 March 2025 | Lawyer | Implied | Fabricated Case Law (1) | Monetary sanction | 1000 USD | — | |
**AI Use:** Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation. **Hallucination Details:** Fictitious citations included:
None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated. **Outcome/Sanction:** The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions. |
|||||||||
| Bunce v. Visual Technology Innovations, Inc. | E.D. Pa. (USA) | 27 February 2025 | Lawyer | ChatGPT | Fabricated Case Law (2); Misrepresented Case Law (1); Outdated Advice: Overturned Case Law (2) | Monetary Sanction + Mandatory CLE on AI and Legal Ethics | 2500 USD | — | |
**AI Use:** Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal), without verifying the cases or researching the AI tool’s reliability. **Hallucination Details:** 2 fake cases:
Misused cases:
**Ruling/Sanction:** The Court sanctioned Counsel $2,500 payable to the court and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession. **Key Judicial Reasoning:** Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty. Novelty of AI tools is not a defense. |
|||||||||
| Merz v. Kalama | W.D. Washington (USA) | 25 February 2025 | Pro Se Litigant | Unidentified | Misrepresented Legal Norm (2) | — | — | | |
| Wadsworth v. Walmart (Morgan & Morgan) | Wyoming (USA) | 24 February 2025 | Lawyer | Internal tool (ChatGPT) | Fabricated Case Law (8) | $3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. | 5000 USD | — | |
**AI Use:** Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose. **Hallucination Details:** Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar". **Ruling/Sanction:** After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: $3,000 fine on the drafter and revocation of his pro hac vice admission; $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing. **Key Judicial Reasoning:** The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations". |
|||||||||
| Saxena v. Martinez-Hernandez et al. | D. Nev. (USA) | 18 February 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2); False Quotes Case Law (1) | Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor | — | — | |
**AI Use:** The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them. **Hallucination Details:**
The court found no plausible explanation for these citations other than AI generation or outright fabrication. **Ruling/Sanction:** The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith. **Key Judicial Reasoning:** Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief". |
|||||||||
| Gonzalez v. Texas Taxpayers and Research Association | W.D. Texas (USA) | 29 January 2025 | Lawyer | LexisNexis AI | Fabricated Case Law (4); Misrepresented Case Law (1) | Plaintiff's response was stricken and monetary sanctions were imposed | 3961 USD | — | |
| In Gonzalez v. Texas Taxpayers and Research Association, the court found that Plaintiff's counsel, John L. Pittman III, included fabricated citations, miscited cases, and misrepresented legal propositions in his response to a motion to dismiss. Pittman initially denied using AI but later admitted to using Lexis Nexis's AI citation generator. The court granted the defendant's motion to strike the plaintiff's response and imposed monetary sanctions on Pittman, requiring him to pay $3,852.50 in attorney's fees and $108.54 in costs to the defendant. The court deemed this an appropriate exercise of its inherent power due to the abundance of technical and substantive errors in the brief, which inhibited the defendant's ability to efficiently respond. |
|||||||||
| Fora Financial Asset Securitization v. Teona Ostrov Public Relations | NY SC (USA) | 24 January 2025 | Lawyer | Implied | Fabricated Case Law (1); False Quotes Case Law (1); Misrepresented Case Law (1) | No sanction imposed; court struck the offending citations and warned that repeated occurrences may result in sanctions | — | — | |
**AI Use:** The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions. **Ruling/Sanction:** No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow. |
|||||||||
| Strike 3 Holdings LLC v. Doe | C.D. California (USA) | 22 January 2025 | Lawyer | Unidentified | Fabricated Case Law (3) | — | — | | |
**Key Judicial Reasoning:** Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law. |
|||||||||
| Arajuo v. Wedelstadt et al | E.D. Wisconsin (USA) | 22 January 2025 | Lawyer | Unidentified | Fabricated Case Law (1) | Warning | — | — | |
**AI Use:** Counsel admitted using a “new legal research medium”, which appears to be a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities. **Hallucination Details:** The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place. **Ruling/Sanction:** Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated. **Key Judicial Reasoning:** The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable. |
|||||||||
| United States v. Hayes | E.D. Cal. (USA) | 17 January 2025 | Federal Defender | Unidentified | One fake case citation with fabricated quotation | Formal Sanction Imposed + Written Reprimand | — | — | |
**AI Use:** Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible. **Hallucination Details:** Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere. **Ruling/Sanction:** The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted. **Key Judicial Reasoning:** The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record. |
|||||||||
| Source: Volokh |
|||||||||
| Strong v. Rushmore Loan Management Services | D. Nebraska (USA) | 15 January 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions | — | — | |
| Kohls v. Ellison | Minnesota (USA) | 10 January 2025 | Expert | GPT-4o | Fake Academic Citations | Expert Declaration Excluded | — | — | |
**AI Use:** Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections. **Hallucination Details:** The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations. **Ruling/Sanction:** Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable. **Key Judicial Reasoning:** The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted. |
|||||||||
|
Source: Volokh
|
|||||||||
| O’Brien v. Flick and Chamberlain | S.D. Florida (USA) | 10 January 2025 | Pro Se Litigant | Implied |
Fabricated
Case Law
(2)
|
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations | — | — | |
AI Use: Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.” Ruling/Sanction: The court dismissed the case with prejudice on dual grounds:
Key Judicial Reasoning: Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers. |
|||||||||
| Al-Hamim v. Star Hearthstone | Colorado (USA) | 26 December 2024 | Pro Se Litigant | Unidentified |
Fabricated
Case Law
(8)
|
No sanction (due to pro se status, contrition, etc.); warning of future sanctions | — | — | |
AI Use: Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court. Hallucination Details: The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause. Ruling/Sanction: Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions. Key Judicial Reasoning: Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status. |
|||||||||
| Letts v. Avidien Technologies | E.D. N. Carolina (USA) | 16 December 2024 | Pro Se Litigant | Implied |
Fabricated
Case Law
(1)
Misrepresented
Case Law
(2)
|
Warning | — | — | |
| Mojtabavi v. Blinken | C.D. California (USA) | 12 December 2024 | Pro Se Litigant | Unidentified | Multiple fake cases | Case dismissed with prejudice | — | — | |
| Carlos E. Gutierrez v. In Re Noemi D. Gutierrez | Fl. 3rd District CA (USA) | 4 December 2024 | Pro Se Litigant | Unidentified |
Fabricated
Case Law
(1)
False Quotes
Case Law
(1)
|
Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature | — | — | |
AI Use: The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.” Hallucination Details: The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated. Ruling/Sanction: Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney. Clerk directed to reject noncompliant future filings. Key Judicial Reasoning: The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations. |
|||||||||
| Rubio v. District of Columbia DHS | D.C. DC (USA) | 3 December 2024 | Pro Se Litigant | Unidentified |
Fabricated
Case Law
(4)
Misrepresented
Case Law
(1)
|
Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties | — | — | |
AI Use: Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).” Hallucination Details: The court could not locate the following cited cases:
These were used to allege a pattern of constitutional violations by the District but were found to be fabricated. Ruling/Sanction: The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse. Key Judicial Reasoning: The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice. |
|||||||||
| Gauthier v. Goodyear Tire & Rubber Co. | E.D. Tex. (USA) | 25 November 2024 | Lawyer | Claude |
Fabricated
Case Law
(2)
False Quotes
Case Law
(7)
|
Monetary fine + Mandatory AI-related CLE Course + Disclosure to Client | 2000 USD | — | |
AI Use: Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order. Hallucination Details: Cited two completely nonexistent cases. Also fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, and Burton v. Freescale, among others. Several "quotes" did not appear anywhere in the cited opinions. Ruling/Sanction: The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct. Key Judicial Reasoning: The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct. |
|||||||||
| Leslie v. IQ Data International | N.D. Georgia (USA) | 24 November 2024 | Pro Se Litigant | Implied | Citation to nonexistent authorities | Background action dismissed with prejudice, but no monetary sanction | — | — | |
| Berry v. Stewart | D. Kansas (USA) | 14 November 2024 | Lawyer | Unidentified |
Fabricated
Case Law
(1),
Exhibits or Submissions
(1)
|
At the hearing, counsel pledged to reimburse the other side and his client | — | — | |
|
In the November 2024 Show Cause Order, Judge Robinson noted that: "First, the briefing does not cite the forum-selection clause from the contract between the parties; instead, it cites and quotes a forum-selection clause that appears nowhere in the papers submitted by the parties. Second, Defendant’s reply brief includes a citation, Hogan v. Allstate Insurance Co., No. 19-CV-00262-JPM, 2020 WL 1882334 (D. Kan. Apr. 15, 2020), in which the court purportedly “transferred a case to the Southern District of Texas because the majority of the witnesses were located in Texas. The court found that the burden on the witnesses outweighed the convenience of litigating the case in Kansas.” As far as the Court can tell, this case does not exist. The Westlaw database number pulls up no case; the Court has found no case in CM/ECF between the parties “Hogan” and “Allstate Insurance Co.” Moreover, docket numbers in this district have at least four digits—not three—after the case-type designation, and there is no judge in this district with the initials “JPM.”" During the show cause hearing (Transcript), Counsel apologised and pledged to reimburse the other side's costs, as well as his client's. |
|||||||||
| Vargas v. Salazar | S.D. Texas (USA) | 1 November 2024 | Pro Se Litigant | Implied | Fake citations | Plaintiff ordered to refile submissions without fake citations | — | — | |
| Churchill Funding v. 732 Indiana | SC Cal (USA) | 31 October 2024 | Lawyer | Implied |
Fabricated
Case Law
(1)
Misrepresented
Case Law
(1),
Legal Norm
(1)
|
Order to show cause | — | — | |
|
Source: Volokh
|
|||||||||
| Mortazavi v. Booz Allen Hamilton, Inc. | C.D. Cal. (USA) | 30 October 2024 | Lawyer | Unidentified |
Fabricated
Case Law
(1)
False Quotes
Exhibits or Submissions
(1)
|
$2,500 Monetary Sanction + Mandatory Disclosure to California State Bar | — | — | |
AI Use: Plaintiff’s counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations. Hallucination Details: Cited a fabricated case (the specific case name is not listed in the ruling). Included fabricated quotations from the complaint, suggesting nonexistent factual allegations. Ruling/Sanction: The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations. Key Judicial Reasoning: Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions. |
|||||||||
| Thomas v. Commissioner of Internal Revenue | United States Tax Court (USA) | 23 October 2024 | Lawyer, Paralegal | Implied |
Misrepresented
Case Law
(3)
|
Pretrial Memorandum stricken | — | — | |
|
The lawyer for the petitioner admitted to not reviewing the memorandum, which had been prepared by a paralegal. The court struck the Pretrial Memorandum but did not impose a monetary penalty, considering the petitioner's economic situation and the lawyer's service to a client who might otherwise be unrepresented. It was also relevant that the law as stated was accurate (even if the citations were wrong). |
|||||||||