This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations. It does not track the (necessarily wider) universe of all fake citations in court filings.
While seeking to be exhaustive (39 cases identified so far), it is a work in progress and will expand as new examples emerge.
If you know of a case that should be included, feel free to contact me.
Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Key Judicial Principle / Warning | Details |
---|---|---|---|---|---|---|---|---|
Coomer v. My Pillow, Inc. | D. Colorado (USA) | 23 April 2025 | Lawyer | Unidentified | Nearly thirty defective citations | Order to Show Cause re Sanctions + Potential Referral for Professional Discipline | Use of AI does not excuse failure to verify; attorneys must personally check all citations under Rule 11 and professional conduct rules | |
AI Use: Counsel, Christopher Kachouroff, admitted at a hearing that he used a generative AI tool to draft Defendants' opposition to a motion in limine without performing a manual cite-check. This came to light only after repeated questioning by the court.
Hallucination Details: The opposition included nearly thirty major defects in its citations.
Ruling/Sanction: The court issued a show cause order requiring defendants' counsel to explain why sanctions and disciplinary referrals should not be imposed. The court is considering monetary sanctions, personal discipline for the counsel involved, and mandated disclosure of the misconduct to defendant Michael Lindell.
Key Judicial Reasoning: The judge stressed that generative AI is no substitute for professional competence. Rule 11 requires a reasonable inquiry, and lawyers remain personally responsible for the contents of their filings. The court treated Kachouroff's admissions and excuses with marked skepticism, indicating that the misconduct was not merely negligent but bordered on reckless or deliberate disregard for ethical duties.
Bevins v. Colgate-Palmolive Co. | E.D. Pa. (USA) | 10 April 2025 | Lawyer | Unidentified | 2 fake case citations and misstatements | Striking of Counsel’s Appearance + Referral to Bar Authorities + Client Notification Order | Attorneys must personally verify legal citations; use of AI without verification violates Rule 11 and local AI rules. | |
AI Use: Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns, but counsel neither admitted nor satisfactorily explained the citations. Failure to comply with the court's standing order on AI aggravated the sanctions.
Hallucination Details: Two fake cases were cited. The citation numbers and Westlaw references pointed to irrelevant or unrelated cases, and no affidavit or real case documents were produced when ordered.
Ruling/Sanction: Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar, and Counsel was required to inform his client, Bevins, of the sanctions and of the need for new counsel if re-filing.
Key Judicial Reasoning: The judge emphasized that citing nonexistent cases, even inadvertently, violates Rule 11(b)(2) and constitutes at least negligence. Compliance with the Court's AI Standing Order was mandatory, and self-certification obligations under the Federal and Local Rules remain fully in force despite technological assistance.
A v. B | Florence (Italy) | 13 March 2025 | Lawyer | ChatGPT | Fabricated case law citations | No financial sanction; Formal Judicial Reprimand; Findings of procedural misuse | Reliance on ChatGPT-generated fake legal precedents, even if inadvertent, constitutes a serious procedural abuse and may support adverse inferences in judicial decision-making | |
AI Use: The respondent retailer's defense cited Italian Supreme Court judgments that did not exist, claiming support for its arguments regarding a lack of subjective bad faith. During subsequent hearings, it was admitted that these fake citations had been generated by ChatGPT during internal research by an assistant and that the lead lawyer had failed to verify them independently.
Hallucination Details: The defense cited fabricated Cassation rulings allegedly supporting subjective good faith defenses. No such rulings could be found in official databases, and the court confirmed their nonexistence. The hallucinated decisions related to defenses against liability for the sale of counterfeit goods.
Ruling/Sanction: The court declined to impose a financial sanction under Article 96 of the Italian Code of Civil Procedure but issued a formal rebuke. It refused the defending party's requests for costs and treated the fabricated citations as weakening the credibility of the defense, emphasizing that using unverifiable AI outputs to support legal arguments is a procedural violation that undermines the adversarial system.
Key Judicial Reasoning: The Tribunal held that reliance on hallucinated case law undermines the integrity of the judicial process and cannot be excused by ignorance or by delegation to assistants. While no malice was found, gross negligence in verifying legal claims was established; judicial reliance on trustworthy authorities is non-negotiable. The court noted that AI hallucinations are an increasingly recognized threat, drawing an implicit parallel to international cases such as Mata v. Avianca.
Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club | Ireland | 13 March 2025 | Pro Se Litigant | Unidentified | Pseudolegal and irrelevant submissions | Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI | Use of AI to generate incoherent or spurious legal grounds is an abuse of process; applicants must personally ensure accuracy and relevance | |
AI Use: Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, as they exhibited typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit to using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.
Hallucination Details: Inappropriate invocation of "subornation to perjury," a term foreign to Irish law; constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context; assertions framed in hyperbolic, sensationalist terms without factual or legal basis; and general incoherence of the pleadings, consistent with AI-generated pseudo-legal text.
Ruling/Sanction: The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process.
Key Judicial Reasoning: The Court held that AI tools do not excuse litigants from ensuring precision, coherence, and a factual basis in pleadings. It emphasized that judicial review demands rigorous pleading standards, and that the insertion of AI-fabricated concepts or incoherent arguments violates procedural rules. The ruling underlined the broader systemic risks posed by AI misuse in legal filings.
Bunce v. Visual Technology Innovations, Inc. | E.D. Pa. (USA) | 27 February 2025 | Lawyer | ChatGPT | 2 fake case citations + citation of vacated and inapposite cases. | $2,500 Monetary Sanction + Mandatory CLE on AI and Legal Ethics | Use of AI does not excuse failure to verify; lawyer remains solely responsible under Rule 11. | |
AI Use: Counsel admitted using ChatGPT to draft two motions (a Motion to Withdraw and a Motion for Leave to Appeal) without verifying the cases or researching the AI tool's reliability.
Hallucination Details: Two fabricated cases were cited, along with vacated and inapposite cases that were misused as authority.
Ruling/Sanction: The Court sanctioned Counsel $2,500, payable to the court, and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession.
Key Judicial Reasoning: Rule 11(b)(2) mandates a reasonable inquiry into all legal contentions. No AI tool displaces the attorney's personal duty, and ignorance of AI's unreliability is not a defense. The Court cited Mata v. Avianca and Gauthier v. Goodyear to emphasize that sanctions for AI hallucinations are now a well-established judicial response.
Wadsworth v. Walmart (Morgan & Morgan) | Wyoming (USA) | 24 February 2025 | Lawyer | Internal tool (ChatGPT) | 8 of 9 Fake/Flawed Cases | $3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. | Duty of reasonable inquiry applies to AI output; Signing attorneys responsible for review; Remedial steps mitigate but don't negate violation. | |
AI Use: Attorney Rudwin Ayala of Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly built on ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose.
Hallucination Details: Eight of the nine case citations in the filed motions were nonexistent or led to differently named cases. Another cited case number was real but belonged to a different case before a different judge. The description of the legal standard was also deemed "peculiar".
Ruling/Sanction: After defense counsel raised the issue, Judge Kelly H. Rankin issued an order to show cause. The plaintiffs' attorneys (Ayala, T. Michael Morgan, and Taly Goody) admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations and imposed the following sanctions: a $3,000 fine on Ayala (the drafter) and revocation of his pro hac vice admission, and a $1,000 fine each on T. Michael Morgan and Taly Goody (the signing attorneys) for failing in their duty of reasonable inquiry before signing.
Key Judicial Reasoning: The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations".
Anonymous v. Sharia Court of Appeals | Israel | 23 February 2025 | Lawyer | Unidentified | 36+ Flawed/Fake Citations | Petition Dismissed Outright; Warning re: Costs/Discipline. | Submission of falsified claims warrants dismissal; AI expert opinions generally unreliable; Potential for personal costs/discipline. | |
AI Use: The petitioner's counsel used an AI-based platform to draft the legal petition.
Hallucination Details: The petition cited 36 fabricated or misquoted Israeli Supreme Court rulings: five references were entirely fictional, 14 had mismatched case details, and 24 included invented quotes. Upon judicial inquiry, counsel admitted relying on an unnamed website recommended by colleagues, without verifying the information's authenticity. The Court concluded that the errors were likely the product of generative AI.
Ruling/Sanction: The High Court of Justice dismissed the petition on the merits, finding no grounds for intervention in the Sharia courts' decisions. Despite the misconduct, no personal sanctions or fines were imposed on counsel; the Court cited the fact that this was the first such incident to reach the High Court and adopted a lenient stance "far beyond the letter of the law." However, the judgment was explicitly referred to the Court Administrator for system-wide attention.
Key Judicial Reasoning: The Court issued a stern warning about the ethical duties of lawyers using AI tools, underscoring that professional obligations of diligence, verification, and truthfulness remain intact regardless of technological convenience. The Court suggested that in future cases personal sanctions on attorneys might be appropriate to protect judicial integrity.
Mid Cent. Operating Eng'rs Health v. Hoosiervac | S.D. Ind. (USA) | 21 February 2025 | Lawyer | Unidentified | 3 fake case citations | $15,000 Sanction + Referral to Chief Judge for Further Discipline + Client Notification Order. | AI use does not excuse basic verification duties; good faith is no defense against Rule 11 violations. | |
AI Use: Counsel admitted at a show cause hearing that he had used generative AI tools to draft multiple briefs and had not verified the citations provided by the AI, mistakenly trusting their apparent credibility.
Hallucination Details: Three distinct fake cases appeared across the filings, each cited in a separate brief, with no attempt at Shepardizing or KeyCiting.
Ruling/Sanction: The Court recommended a $15,000 sanction ($5,000 per violation) and referred the matter to the Chief Judge for potential additional professional discipline. Counsel was also ordered to notify Hoosiervac LLC's CEO of the misconduct and to file a certification of compliance.
Key Judicial Reasoning: The judge stressed that reliance on AI outputs without verification violates Rule 11, and that good-faith ignorance of AI's capacity to hallucinate is irrelevant. The decision emphasized that generative AI can assist research but cannot replace professional obligations. The judge invoked multiple authorities on sanctions for failure to verify case law and analogized improper AI use to wielding dangerous tools without caution.
Aleto Beheer BV v. Venlo Municipality | Dutch Council of State (Netherlands) | 29 January 2025 | Lawyer | ChatGPT | Use of ChatGPT to produce generalized market claims without underlying source verification. | Arguments rejected; No formal sanction but judicial disqualification of the AI-sourced material. | AI outputs without disclosed input prompts, verifiable sources, or professional validation cannot support claims in judicial proceedings | |
AI Use: Shortly before the hearing, Aleto submitted a supplementary document claiming that differences in environmental zoning categories significantly affect real estate values in North Limburg. The document's information was obtained via ChatGPT; the prompt put to ChatGPT was not submitted, nor were sources or independent verification provided.
Hallucination Details: ChatGPT-generated generalizations about the impact of environmental zoning categories (milieucategorieën) on property value, with no formal references or empirical data supporting the output. ChatGPT itself warned that proper valuation requires consulting a human real estate expert.
Ruling/Sanction: The Court refused to consider the ChatGPT-based information as valid evidence, emphasizing that real estate valuation disputes involve complex expertise that cannot be substituted by AI outputs. Aleto's appeal was dismissed, and the Council expressly reaffirmed that, without a proper independent expert report, ChatGPT statements are legally worthless.
Key Judicial Reasoning: Judicial decision-making requires rigorously tested, verifiable inputs. AI outputs that do not disclose the input question or underlying data, and that themselves disclaim reliability, cannot satisfy this standard. Especially in technical fields such as property tax and environmental valuation, human expert reports, not AI summaries, are mandatory.
United States v. Hayes | E.D. Cal. (USA) | 17 January 2025 | Federal Defender | Unidentified | One fake case citation with fabricated quotation | Formal Sanction Imposed + Written Reprimand | Submitting hallucinated cases and quotes is serious misconduct; defense counsel's denials deemed non-credible | |
AI Use: Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.
Hallucination Details: Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. When confronted, Francisco tried to shift the source to United States v. Broussard, but that case did not contain the quoted text either. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.
Ruling/Sanction: The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was placed on the record and would be taken into account in future disciplinary proceedings if warranted.
Key Judicial Reasoning: The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. The persistent refusal to candidly admit the errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco's conduct especially egregious because of repeated bad-faith evasions after he was given opportunities to correct the record.
Kohls v. Ellison | Minnesota (USA) | 10 January 2025 | Misinformation Expert | GPT-4o | Fake Academic Citations | Expert Declaration Excluded | AI errors shatter expert credibility; Duty of diligence extends to expert filings; Rule 11 may require asking experts re: AI use/verification. | |
AI Use: Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.
Hallucination Details: The declaration contained citations to three nonexistent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.
Ruling/Sanction: Professor Hancock admitted the errors resulted from unchecked AI use, explaining that it deviated from his usual practice of verifying citations for academic papers, and affirmed that the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled that the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.
Key Judicial Reasoning: The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The court also noted the irony of an AI misinformation expert falling victim to AI hallucinations in a case about the dangers of AI.
Al-Hamim v. Star Hearthstone | Colorado (USA) | 26 December 2024 | Pro Se Litigant | Unidentified | 8 Fake Cases | No Sanction (due to pro se, contrition, etc.); Warning of future sanctions. | Duty of diligence applies to pro se litigants. | |
AI Use: Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.
Hallucination Details: The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.
Ruling/Sanction: Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated that he had failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding that his submission violated the Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.
Key Judicial Reasoning: Factors weighing against sanctions included Al-Hamim's pro se status, his contrition, the lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions", emphasizing the need for diligence regardless of representation status.
Gauthier v. Goodyear Tire & Rubber Co. | E.D. Tex. (USA) | 25 November 2024 | Lawyer | Claude | Two nonexistent cases + multiple fabricated quotations | $2,000 fine + Mandatory AI-related CLE Course + Disclosure to Client | Failure to verify AI outputs violates Rule 11 and local duty of candor; even post-warning inaction aggravates sanctions | |
AI Use: Attorney Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.
Hallucination Details: The filing cited two completely nonexistent cases and fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, and Burton v. Freescale; several "quotes" did not appear anywhere in the cited opinions.
Ruling/Sanction: The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.
Key Judicial Reasoning: The court emphasized that attorneys remain personally responsible for verifying all filings under Rule 11, regardless of the technology used, and that use of AI does not dilute the duty of candor. Continued silence and failure to rectify the errors after opposing counsel flagged them exacerbated the misconduct.
Plaintiff v. Minister of Asylum | The Hague District Court (Netherlands) | 6 November 2024 | Lawyer | ChatGPT | Use of ChatGPT output as authority for factual assertions about surveillance practices. | Argument discounted; No formal sanction but judicial criticism recorded. | AI outputs, without disclosure of prompts or source validation, are legally worthless as evidence. | |
AI Use: During the hearing, the plaintiff's representative cited an answer generated by ChatGPT to argue that the Moroccan authorities systematically monitor political dissidents abroad, implying a risk of persecution on return. However, the representative provided neither the actual question, nor the ChatGPT output, nor any independent corroboration.
Hallucination Details: ChatGPT's unverified assertion was used to claim that the Moroccan authorities tracked Hirak activists abroad, without any independent evidence. The Court noted that ChatGPT provided no source references and that the question-and-answer pair was never filed into the case record.
Ruling/Sanction: The Court held the ChatGPT output legally irrelevant and gave it no probative value. While it did not impose sanctions on the plaintiff's counsel, it criticized the reliance on unverifiable AI content in judicial proceedings. The plaintiff's asylum appeal was ultimately dismissed.
Key Judicial Reasoning: The Court emphasized that judicial decisions must rest on verifiable evidence. AI-generated content without transparent sourcing or record authentication fails even minimal evidentiary standards, and citing such outputs does not meet the burden of proof for substantiating claims of future persecution.
Mortazavi v. Booz Allen Hamilton, Inc. | C.D. Cal. (USA) | 30 October 2024 | Lawyer | Unidentified | 1 fake case + fabricated factual allegations. | $2,500 Monetary Sanction + Mandatory Disclosure to California State Bar | Lawyers are strictly responsible for verifying AI outputs; negligence, not just bad faith, triggers Rule 11 liability. | |
AI Use: Plaintiff's counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.
Hallucination Details: The motion cited a fabricated case (the ruling does not identify the specific case name) and included fabricated quotations from the complaint, suggesting nonexistent factual allegations.
Ruling/Sanction: The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and to file proof of notification and payment. The Court recognized mitigating factors (health issues, post hoc corrective measures) but stressed the seriousness of the violations.
Key Judicial Reasoning: Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law, and use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions.
Matter of Weber | NY County Court (USA) | 10 October 2024 | Expert | MS Copilot | Unverifiable AI Calculation Process | AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established. | AI-generated evidence requires disclosure & reliability check; Expert must explain AI process; "Garbage in, garbage out." | |
AI Use: In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check the damages calculations presented in a supplemental report.
Hallucination Details: The issue was not fabricated citations but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed that using AI tools was generally accepted in his field but offered no proof.
Ruling/Sanction: The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtained different outputs, and elicited warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, given AI's rapid evolution and reliability issues, and that AI-generated evidence would be subject to a Frye hearing (New York's standard for the admissibility of scientific evidence). The expert's AI-assisted calculations were deemed inadmissible.
Key Judicial Reasoning: The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. The mere fact that AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable.
Anonymous Spanish Lawyer | Tribunal Constitucional (Spain) | 9 September 2024 | Lawyer | Unidentified | 19 fabricated Constitutional Court decisions | Formal Reprimand (Apercibimiento) + Referral to Barcelona Bar for Disciplinary Action | Regardless of method (AI, database error, negligence), counsel bears full responsibility for verifying the reality of all cited legal authorities before submission to court | |
AI Use: The Court noted that the false citations could stem from AI use, disorganized database use, or outright invention. Counsel claimed a database error but provided no evidence. The Court found the origin irrelevant: the duty of verification lies with the submitting lawyer.
Hallucination Details: Nineteen separate fabricated citations to fictional Constitutional Court judgments, with fake quotations falsely attributed to those nonexistent decisions, cited to falsely bolster claims of constitutional relevance in an amparo petition.
Ruling/Sanction: The Constitutional Court unanimously found that the inclusion of nineteen fabricated citations constituted a breach of the respect owed to the Court and its judges under Article 553.1 of the Spanish Organic Law of the Judiciary. It issued a formal warning (apercibimiento) rather than a fine, given the absence of prior offenses, and referred the matter to the Barcelona Bar for possible disciplinary proceedings.
Key Judicial Reasoning: The Court stressed that, even absent express insults, fabricating authority gravely disrespects the judiciary's function. Irrespective of whether AI was used or a database error occurred, the professional duty of diligent verification was breached. The Court noted that fake citations disrupt its work both procedurally and institutionally.
Dukuray v. Experian Information Solutions, Inc. | S.D.N.Y. (USA) | 26 July 2024 | Pro Se Litigant | Unidentified | 3 fake case citations and fabricated case law descriptions | No sanction; Formal Warning Issued | Fake case citations = serious misconduct; further infractions will trigger sanctions even for pro se parties | |
AI Use: Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool, and Plaintiff did not deny the accusation.
Hallucination Details: Three nonexistent cases were cited. Each cited case name and number was fictitious, and none of the real cases matching those citations involved remotely related issues.
Ruling/Sanction: The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction was imposed for this first occurrence, in acknowledgment of Plaintiff's pro se status and likely ignorance of AI risks.
Key Judicial Reasoning: Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing that hallucinated case citations undermine judicial integrity and waste the resources of opposing parties and the courts. Plaintiff was formally warned, not excused.
Iovino v. Michael Stapleton Associates, Ltd. | W.D. Va. (USA) | 24 July 2024 | Lawyer | Claude + Westlaw / LexisNexis | 2 fake cases + fabricated quotations attributed to real cases | Show Cause Order re Potential Sanctions + Possible Bar Referral (ultimately no sanction imposed) | Reliance on AI without verification is abusive filing under Rule 11; silence after fabrication exposed aggravates misconduct | |
AI Use: The court inferred the use of AI from the pattern of errors (fake cases and fabricated quotes) and opposing counsel's explicit accusation ("ChatGPT run amok"). Plaintiff's counsel neither denied it nor clarified the origins, leaving the inference unchallenged.
Hallucination Details: Two nonexistent cases were cited, fabricated quotations were attributed to real cases, and the citation of the Menocal case was misreported to imply relevance.
Ruling/Sanction: The court issued a show cause order demanding an explanation of why sanctions and/or bar disciplinary referrals should not be imposed, noting that the silent failure to contest the fabrication allegations worsened the finding. Following the show cause proceedings, the court declined to sanction counsel.
Key Judicial Reasoning: The judge emphasized that AI use does not lessen the lawyer's duty to ensure accurate filings. Fabricated cases and misquotes are serious Rule 11 violations, and attorneys are responsible for vetting everything submitted to the court, regardless of source. Silence when fabrication is exposed constitutes further misconduct.
Anonymous v. NYC Department of Education | S.D.N.Y. (USA) | 18 July 2024 | Pro Se Litigant | Unidentified | Several nonexistent case citations and fabricated quotations | No sanction; Formal Warning Issued | Use of AI does not excuse false citations; next infraction will not be tolerated | |
AI Use: The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit to using AI.
Hallucination Details: Several fake citations were identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious.
Ruling/Sanction: No sanctions were imposed at this stage, in light of the special solicitude afforded to pro se litigants. However, the court issued a formal warning that further false citations would lead to sanctions without additional leniency.
Key Judicial Reasoning: The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. It cited Mata v. Avianca and Park v. Kim as established examples in which AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct.
X BV in Z v. Tax Inspector | The Hague Court of Appeal (Netherlands) | 26 June 2024 | Lawyer | ChatGPT | Use of ChatGPT-generated list of alleged "economically comparable vehicles" to support tax valuation claims; Court found method invalid and legally worthless. | Arguments rejected; No formal sanction but severe judicial criticism. | AI outputs cannot substitute for rigorous comparability tests required by EU and Dutch tax law; "average consumer" perception, not AI classification, governs. | |
AI Use: The appellant relied on ChatGPT to generate a list of ten "economically comparable" vehicles for the purpose of arguing a lower trade-in value and thereby reducing bpm (car registration tax). The Court noted this explicitly and criticized the mechanical reliance on AI outputs without human verification or contextual adjustment.
Hallucination Details: ChatGPT produced a list of luxury and exotic cars supposedly comparable to a Ferrari 812 Superfast. The Court found that a mere AI-generated association of vehicles based on "economic context and competition position" is insufficient under EU law principles, which require real-world comparability from the perspective of an average consumer.
Ruling/Sanction: The Court rejected the appellant's valuation arguments wholesale. It stressed that serious, human-verified reference vehicle comparisons were mandatory and that ChatGPT lists could not establish the legally required comparability standard under Dutch and EU law (Art. 110 TFEU). No monetary sanction was imposed, but the appellant's entire case collapsed on evidentiary grounds.
Key Judicial Reasoning: The Court reasoned that a list generated by an AI program such as ChatGPT, without rigorous control or verification, is inadmissible for evidentiary purposes. AI outputs lack the nuanced judgment necessary to assess "similar vehicles" under Art. 110 TFEU and the Dutch bpm rules, and the governing test is based on the perception of a human average consumer, not algorithmic proximity.
Grant v. City of Long Beach | 9th Cir. CA (USA) | 22 March 2024 | Lawyer | Unidentified | 2 Fake Cases, plus flawed summaries | Striking of Brief + Dismissal of Appeal | Fabricated citations as material failure to comply with appellate rules; outright dismissal appropriate | |
AI Use: The appellants' lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin but cited the systematic errors as consistent with known AI hallucination patterns.
Hallucination Details: Two cited cases were completely nonexistent. Additionally, a dozen cited decisions were badly misrepresented; for example, Hydrick v. Hunter and Wall v. County of Orange were cited for parent-child removal claims when they had nothing to do with such issues.
Ruling/Sanction: The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28-1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Rule 28(a)(8)(A)'s requirement that arguments be supported by coherent citations.
Key Judicial Reasoning: Fabricated and misrepresented authorities defeat the appellate function, and counsel failed to provide even minimally reliable legal arguments. Attempts to explain at oral argument were evasive and inadequate, reinforcing the sanction. Dismissal was portrayed as necessary to preserve the integrity of appellate review and judicial economy.
Michael Cohen Matter | SDNY (USA) | 20 March 2024 | Non-Lawyer | Google Bard | 3 fake cases | No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied | Lawyers must verify client-provided research; Importance of AI literacy. | |
AI Use: Michael Cohen, former lawyer to Donald Trump but by then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.
Hallucination Details: Cohen provided three nonexistent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys about who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.
Ruling/Sanction: Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. He ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.
Key Judicial Reasoning: The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification.
X BV in Z v. Tax Inspector | The Hague Court of Appeal (Netherlands) | 5 March 2024 | Lawyer | ChatGPT | Use of ChatGPT outputs as evidence without clarity about prompts or verification; no fake cases cited, but reliance on unverifiable AI outputs for valuation arguments | Arguments discounted; No formal sanction but strong judicial criticism | Statements from ChatGPT, without disclosure of input prompts and absent evidentiary reliability, are legally worthless in judicial proceedings | |
AI Use: The appellant's authorized representative submitted arguments based on ChatGPT outputs in an attempt to challenge the tax valuation of real property. The representative failed to specify what queries had been put to ChatGPT, rendering the outputs unverifiable and untrustworthy.
Hallucination Details: No explicitly fabricated case law was cited. Instead, the appellant relied on generalized, unverifiable statements produced by ChatGPT to contest the capitalization factor and the COVID-19 valuation discounts applied by the tax authorities.
Ruling/Sanction: The Court refused to attribute any evidentiary value to the ChatGPT-based arguments, finding that without disclosure of the input prompts and verification of the AI outputs, the content was legally inadmissible as probative material. However, no sanctions were imposed, likely due to the novelty of the misuse and the lack of bad faith.
Key Judicial Reasoning: The Court emphasized that judicial proceedings demand verifiable, fact-based arguments. AI outputs that lack transparency (particularly about the underlying prompt and methodology) cannot serve as a substitute for evidence. The judgment explicitly notes that reliance on unverifiable ChatGPT statements "does not affect" the Court's reasoning or the tax authority's burden of proof.
Zhang v. Chen (Chong Ke Matter) | Supreme Court of British Columbia (Canada) | 23 February 2024 | Lawyer | ChatGPT | 2 Fake Cases | Reprimand; No Costs Order; Law Society Investigation Pending. | First Canadian case; AI no substitute for expertise; Competence in using tech tools critical. | |
AI Use: Vancouver lawyer Chong Ke used ChatGPT to assist in preparing a Notice of Application in a family law case concerning parental travel with children.
Hallucination Details: The application included references to two fictitious cases generated by ChatGPT; opposing counsel identified the nonexistent cases.
Ruling/Sanction: Ms. Ke informed the court that she had been unaware ChatGPT could be unreliable, had not verified the cases, and apologized. Justice D.M. Masuhara reprimanded the lawyer but rejected the opposing side's request for a special costs order against her. The Law Society of British Columbia confirmed that it was investigating Ms. Ke's conduct.
Key Judicial Reasoning: Justice Masuhara stated clearly that "generative AI is still no substitute for the professional expertise that the justice system requires of lawyers" and emphasized that competence in selecting and using technology tools, including AI, is critical to maintaining the integrity of the justice system. As Canada's first high-profile example of the issue, the case prompted warnings about the need for diligence.
J.G. v. NYC Department of Education | S.D.N.Y. (USA) | 22 February 2024 | Lawyer | GPT-4 | Relied on ChatGPT-4 to argue fee rates | No formal sanction; Judicial Rebuke and Rate Discount in Fees Award | Reliance on AI like ChatGPT to justify professional fee claims is improper and irrelevant; professional judgment cannot be replaced by AI outputs | |
AI Use: The Cuddy Law Firm used ChatGPT-4 to purportedly validate and support its request for elevated attorney billing rates in its motion for attorneys' fees under the IDEA, invoking ChatGPT as a "cross-check" on the reasonableness of the requested rates ($550–$600 per hour for senior lawyers, $375–$425 for associates).
Hallucination Details: No fake cases or authorities were cited. However, the Court found the reliance on ChatGPT-4 wholly inappropriate, calling it "utterly and unusually unpersuasive" and emphasizing that ChatGPT's conclusions lacked transparency, reliability, and any grounding in actual legal practice or precedent. The court compared this misuse to the notorious ChatGPT hallucination cases (Mata v. Avianca and Park v. Kim).
Ruling/Sanction: The Court reduced the Cuddy Law Firm's requested fee rates significantly (e.g., from $550 to $400 per hour for senior lawyers, and proportionately for others) and explicitly warned against using ChatGPT or similar tools as evidence in fee petitions. No financial sanction was imposed, but the Court expressed clear disdain and advised against repeating the practice.
Key Judicial Reasoning: The Court reaffirmed that billing rates must be based on prevailing legal market conditions and judicial precedent, not on unverifiable or speculative AI outputs. The opinion underscores that AI tools, absent verifiable support, cannot serve as evidence in legal argumentation for judicial decision-making.
Kruse v. Karlen | Mo. CA (USA) | 13 February 2024 | Pro Se Litigant | Unidentified | At least twenty-two fabricated case citations and multiple statutory misstatements. | Dismissal of Appeal + $10,000 Damages Awarded for Frivolous Appeal. | Self-representation does not excuse filing AI-generated hallucinations; parties must verify and certify filings under oath. | |
AI Use: Appellant Karlen admitted in his Reply Brief that he had hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. The consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied any intent to mislead but acknowledged ultimate responsibility for the submission.
Hallucination Details: Of the twenty-four total case citations in Karlen's appellate brief, at least twenty-two were fabricated; the brief also contained multiple statutory misstatements.
Ruling/Sanction: The Court dismissed the appeal for pervasive violations of the appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submitting fabricated legal authority is an abuse of the judicial system, regardless of pro se status.
Key Judicial Reasoning: The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure, and warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.
Smith v. Farwell | Massachusetts (USA) | 12 February 2024 | Lawyer | Unidentified | 3 fake cases | $2k Fine (Supervising Lawyer) | Failure of basic verification = sanctionable; Ignorance of AI risk less credible defense over time; Supervisory duty. | |
AI Use: In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar but admittedly did not check the accuracy of the citations and was unaware AI had been used.
Hallucination Details: Judge Brian A. Davis noticed citations that "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.
Ruling/Sanction: After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.
Key Judicial Reasoning: The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" in light of the attorney's candor and unfamiliarity with AI (distinguishing the case from Mata's bad-faith finding), the court warned that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys.
Park v. Kim | 2nd Cir. CA (USA) | 30 January 2024 | Lawyer | ChatGPT | One fake case citation in appellate briefing | Referral to Grievance Panel + Order to Disclose Misconduct to Client. | Citation of a fake case violates Rule 11; professional judgment cannot be outsourced to AI. | |
AI Use: Attorney Lee admitted using ChatGPT to find supporting case law after failing to locate precedent manually, and cited a fictitious case in the reply brief without ever verifying its existence.
Hallucination Details: Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014). When asked to produce the case, counsel admitted it did not exist, blaming her reliance on ChatGPT.
Ruling/Sanction: The Court referred counsel to the Second Circuit's Grievance Panel for further investigation and possible discipline. Lee was ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.
Key Judicial Reasoning: The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system.
Matter of Samuel | NY County Court (USA) | 11 January 2024 | Lawyer | Unidentified | Five flawed citations. | Striking of Filing + Sanctions Hearing Scheduled | Certifying AI-generated hallucinations is frivolous conduct; obligation to verify not displaced by technology | |
AI Use: Osborne's attorney, under time pressure, submitted reply papers that relied heavily on a website or tool using generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error.
Hallucination Details: Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly; the court drew explicit comparisons to the Mata v. Avianca fiasco.
Ruling/Sanction: The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York's Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.
Key Judicial Reasoning: The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers fictional citations pose to the judicial system: wasted time, misled parties, degraded trust in the courts, and harm to the profession's reputation.
Harber v. HMRC | UK | 4 December 2023 | Pro Se Litigant | Unidentified | 9 Fake Tribunal Decisions | No Sanction on Litigant; Warning implied for lawyers. | First UK reported case; Emphasized waste of resources; Cited Mata harms; Accepted litigant's unawareness. | |
AI Use: Catherine Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-tier Tribunal decisions supporting her position on "reasonable excuse". She stated the cases had been provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI; ChatGPT was mentioned as a likely source.
Hallucination Details: The nine cited FTT decisions (names, dates, and summaries were provided) were found to be nonexistent after checks by the Tribunal and HMRC. While plausible, the fake summaries contained anomalies such as American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant.
Ruling/Sanction: The Tribunal factually determined that the cited cases were AI-generated hallucinations. It accepted that Mrs. Harber had been unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue, and no sanctions were imposed on the litigant.
Key Judicial Reasoning: The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for both the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision to find AI-generated fake cases cited by a litigant.
Zachariah Crabill Disciplinary Case | Colorado S.Ct. (USA) | 21 November 2023 | Lawyer | ChatGPT | Fake/Incorrect Cases; Lied to Court | 90-day Actual Suspension (+ stayed term, probation) | Violates duties of competence, diligence, candor; Lying aggravates significantly. | |
AI Use: Attorney Zachariah C. Crabill, relatively new to civil practice, used ChatGPT to research case law for a motion to set aside judgment, a task he was unfamiliar with and felt pressured to complete quickly.
Hallucination Details: Crabill included incorrect or fictitious case citations provided by ChatGPT in the motion without reading or verifying them. He realized the errors ("garbage" cases, per his texts) before the hearing but did not alert the court or withdraw the motion.
Ruling/Sanction: When questioned by the judge about the inaccuracies at the hearing, Crabill falsely blamed a legal intern. He later filed an affidavit admitting his use of ChatGPT and his dishonesty, stating that he had "panicked" and sought to avoid embarrassment. He stipulated to violating the professional duties of competence, diligence, and candor/truthfulness to the court, and received a 366-day suspension, with all but 90 days stayed upon successful completion of a two-year probationary period. This was noted as the first Colorado disciplinary action involving AI misuse.
Key Judicial Reasoning: The disciplinary ruling focused on the combination of negligence (failure to verify, violating competence and diligence) and intentional misconduct (lying to the court, violating candor). While mitigating factors (personal challenges, lack of prior discipline) were noted in the stipulated agreement, the dishonesty significantly aggravated the offense.
Mescall v. Renaissance at Antiquity | W.D.N.C. (USA) | 13 November 2023 | Pro Se Litigant | Unidentified | Unspecified concerns about AI-generated inaccuracies | No sanction; Warning and Leave to Amend Granted | AI use in pleadings can trigger ethical issues; litigants remain responsible for accuracy and coherence | |
AI Use: Defendants alleged that portions of Plaintiff's response to a motion to dismiss were AI-generated.
Hallucination Details: No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags.
Ruling/Sanction: Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint or face dismissal.
Key Judicial Reasoning: The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure their pleadings are coherent, concise, and legally grounded, regardless of the technological tools used; courts cannot act as de facto advocates or reconstruct fragmented pleadings.
Morgan v. Community Against Violence | New Mexico (USA) | 23 October 2023 | Pro Se Litigant | Unidentified | Fake Case Citations | Partial Dismissal + Judicial Warning | Fake authority is Rule 11 violation; pro se status not license for AI misuse | |
AI Use: Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of the errors mirror known AI output patterns.
Hallucination Details: The cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume or reporter details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review.
Ruling/Sanction: The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and noted that this was only the second federal case to address AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice.
Key Judicial Reasoning: The opinion stressed that access to the courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must meet this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance.
Thomas v. Pangburn | S.D. Ga. (USA) | 6 October 2023 | Pro Se Litigant | Unidentified | At least ten fabricated case citations | Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke | Filing papers with fake citations amounts to bad faith and justifies dismissal; AI is not an excuse for misconduct | |
AI UseJerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability," without clarifying the sources, suggesting reliance on AI-generated content. Hallucination DetailsTen fake case citations were systematically inserted across filings. The fabricated authorities mimicked proper citation format but were unverifiable in any recognized database. The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy. Ruling/SanctionThe Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca regarding the broader dangers of AI hallucinations in litigation and found that Thomas acted in bad faith by failing to properly explain the origin of the fabrications. Key Judicial ReasoningCiting fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances. |
||||||||
Ruggierlo et al. v. Lancaster | E.D. Mich. (USA) | 11 September 2023 | Pro Se Litigant | Unidentified | At least three fabricated case citations | No sanction; Formal Judicial Warning | Citing hallucinated cases wastes judicial resources and may result in sanctions; pro se status does not excuse fictitious legal filings | |
AI UseLancaster, filing objections to a magistrate judge’s Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct. Hallucination DetailsThe objections included fabricated or mutant citations; the Court highlighted that the majority of the cited cases in Lancaster’s objections were fake. Ruling/SanctionNo immediate sanction was imposed, given Lancaster’s pro se status and the lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or in any other court to which the case might be remanded. Key Judicial ReasoningThe Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants. |
||||||||
Ex Parte Lee | Texas CA (USA) | 19 July 2023 | Lawyer | Unidentified | 3 fake case citations | No sanction; Judicial Warning; Affirmance of Trial Court Decision | AI use does not excuse failure to verify citations; Rule 38.1(i) requires valid authorities and proper record citations | |
AI UseThe Court noted that the appellant's argument section appeared to have been drafted by AI, based on telltale errors (nonexistent cases, jump-cites into the wrong jurisdictions, illogical structure). The Court cited a recent Texas CLE on AI usage to explain the pattern. Hallucination DetailsThree fake cases were cited. The brief also contained no citations to the record and lacked clear argumentation on the presented issues. Ruling/SanctionThe Court declined to issue a show cause order or to refer counsel to the State Bar of Texas, despite noting similarities to Mata v. Avianca. However, it affirmed the trial court’s denial of habeas relief due to inadequate briefing, and explicitly warned about the dangers of using AI-generated content in legal submissions without human verification. Key Judicial ReasoningThe Court held that even if AI contributed to the preparation of filings, attorneys must ensure accuracy, logical structure, and compliance with citation rules. Failure to meet these standards precludes appellate review under Tex. R. App. P. 38.1(i). Courts are not obligated to "make an appellant’s arguments for him," especially where a brief's defects are gross. |
||||||||
Mata v. Avianca, Inc. | S.D.N.Y. (USA) | 22 June 2023 | Lawyers | ChatGPT | 6+ Fake Cases, Quotes, Citations; Fake Opinions | $5k Fine (Lawyers & Firm); Letters to Client/Judges | Bad faith for doubling down; Duty of candor; Gatekeeping role; Harms to system; Need verification. | 
AI UseAttorneys Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against the airline Avianca, citing difficulty accessing relevant federal precedent through their limited research subscription. Hallucination DetailsThe attorneys' submission included at least six completely non-existent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were also generated by ChatGPT and contained further bogus citations. Ruling/SanctionJudge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name had been falsely used on the fabricated opinions about the situation. Key Judicial ReasoningJudge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correct the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance. |
||||||||
Scott v. Federal National Mortgage Association | Maine Superior Court (USA) | 14 June 2023 | Pro Se Litigant | Unidentified | Several fabricated case citations and fake quotations | Dismissal of Complaint + Sanctions (Attorney's Fees and Costs) | Blind reliance on AI does not excuse misrepresentation of law; pro se litigants held to same verification standards as attorneys | 
AI UseMr. Scott, opposing a motion to dismiss, filed a brief containing multiple fabricated case citations with plausible formatting but nonexistent underlying cases. The court recognized the pattern as typical of AI hallucinations; Scott did not admit AI use, but the inference was clear. Hallucination DetailsSeveral of the case names, reporter citations, and quotations provided were fake; no match could be found in legal databases. Quotations attributed to these cases were invented, and the citations appeared superficially valid (correct format) but were unverifiable. Ruling/SanctionThe complaint was dismissed in full. Sanctions were imposed: Scott was ordered to pay the defendant's reasonable attorney's fees, costs, and expenses associated with the motion to dismiss and the motion for sanctions. The court required an affidavit from Fannie Mae detailing its fees, after which Scott could contest their reasonableness but not the sanction itself. Key Judicial ReasoningThe Court emphasized that using AI tools does not relieve any litigant of the duty to verify legal authorities. Citing or quoting nonexistent cases is a violation of Maine Rule of Civil Procedure 11. Even pro se litigants cannot "blindly rely" on AI outputs and are expected to exercise reasonable diligence. The judgment was framed explicitly to deter future abuse of AI-generated filings. |