AI Hallucination Cases

This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

More precisely, the database covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, it does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, it also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (1345 cases identified so far), the database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material. Examples of media coverage include:
- M. Hiltzik, AI 'hallucinations' are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!
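By way of illustration only – this is not PelAIkan's actual method, and every name below is hypothetical – the core idea of an automated reference checker can be sketched as a two-step filter: extract citation-like strings from a filing, then flag any that cannot be matched against a database of known, real decisions.

```python
import re

# Minimal sketch of the idea behind an automated reference checker.
# Hypothetical code: KNOWN_CASES stands in for a real citation database,
# and the regex is a deliberately crude approximation of case-name syntax.
KNOWN_CASES = {
    "Mata v. Avianca",
    "Park v. Kim",
}

# Matches strings shaped like "Name v. Name" (with or without the period).
CITATION_RE = re.compile(r"\b[A-Z][\w.']* v\.? [A-Z][\w.']*")

def flag_unverified(text: str) -> list[str]:
    """Return citation-like strings that are absent from the known-case set."""
    found = CITATION_RE.findall(text)
    return [c for c in found if c.replace(" v ", " v. ") not in KNOWN_CASES]

if __name__ == "__main__":
    brief = "As held in Mata v. Avianca and Murray v. Luton, the motion fails."
    print(flag_unverified(brief))  # the fabricated Murray v. Luton is flagged
```

A real system obviously needs much more than a regex – reporter citations, normalization, fuzzy matching against full databases – but "flag whatever you cannot verify" is the essential logic.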

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Last updated: 23 April 2026
Case Court / Jurisdiction Date Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Saxena v. Martinez-Hernandez et al. D. Nev. (USA) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
False Quotes Case Law (1)
Complaint dismissed with prejudice; no formal AI-related sanction imposed, but the dismissal explicitly acknowledged the fictitious citations as a contributing factor

AI Use

The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them.

Hallucination Details

  • Spokane v. Douglass turned out to conflate unrelated decisions and misuse citations from other cases
  • Hummel v. State could not be found in any Nevada or national database; citation matched an unrelated Oregon case

The court found no plausible explanation for these citations other than AI generation or outright fabrication.

Ruling/Sanction

The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.

Key Judicial Reasoning

Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief".

Unnamed Brazilian litigant (Brazil) 18 February 2025 Lawyer ChatGPT Multiple fabricated case citations and doctrinal references Appeal partially granted (reintegration suspended, rent imposed), but litigant sanctioned for bad-faith litigation; 10% fine on the updated value of the case; copy of filing sent to OAB-SC for disciplinary review

AI Use

The appellant’s counsel admitted to having used ChatGPT, claiming the submission of false case law was the result of “unintentional use.” The fabricated citations were used in an appeal against a reintegration of possession order, in favor of the appellant’s stepmother and father’s heirs.

Hallucination Details

The brief contained numerous non-existent judicial precedents and references to legal doctrine that were either incorrect or entirely fictional. The court described them as “fabricated” and considered them serious enough to potentially mislead the court.

Ruling/Sanction

While the 6th Civil Chamber temporarily suspended the reintegration order, it further imposed a 10% fine on the value of the claim for bad-faith litigation and ordered that a copy of the appeal be forwarded to the Santa Catarina section of the Brazilian Bar Association (OAB/SC) for further investigation.

Key Judicial Reasoning

The court emphasized that the legal profession is a public calling entailing duties and responsibilities. It cautioned that AI must be used “with caution and restraint”. The chamber unanimously supported the sanction.

Re Nicholson Ontario SCJ (Canada) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
Geismayr v. The Owners, Strata Plan KAS 1970 Civil Resolution Tribunal (Canada) 14 February 2025 Pro Se Litigant Copilot
Fabricated Case Law (9)
Misrepresented Case Law (1)
Citations ignored
Goodchild v State of Queensland Queensland IRC (Australia) 13 February 2025 Pro Se Litigant "Internet searches"
Fabricated Case Law (5)
Relevant submissions ignored

"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents."

Luck v Commonwealth of Australia Federal Court (Australia) 11 February 2025
Fabricated Case Law (2)
The court dismissed the applicant's interlocutory application for disqualification and referral to a Full Court.
State v. Saher Ohio CC (USA) 11 February 2025 Lawyer ChatGPT
Fabricated Case Law (1)
Attorney asked to withdraw guilty plea

Initial filing also included part of the ChatGPT prompt.

QWYN and Commissioner of Taxation Administrative Review Tribunal of Australia (Australia) 5 February 2025 Lawyer Copilot
False Quotes Doctrinal Work (1)
The Tribunal affirmed the decision under review, rejecting the applicant's submissions based on the AI-generated content.

"The Applicant engaged the Copilot [Microsoft’s Artificial Intelligence product] in a range of probing questions pertaining to superannuation and taxation matters, upon which in part, it returned the following responses:

The Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992, which introduced the new regime taxing superannuation benefits, states in paragraph 2.20 that “the Bill will provide a tax rebate of 15 per cent for disability superannuation pensions. This will apply to all disability pensions, irrespective of whether they are paid from a taxed or an untaxed source. The rebate recognises that disability pensions are paid as compensation for the loss of earning capacity and are not merely a form of retirement income.

  1. I have examined the Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992. I was unable to locate any paragraph in that document in the same or similar terms to the paragraph generated by Copilot. It did not contain a paragraph 2.20.
  2. It has been noted by others that AI bots are prone to hallucinations.[35] That appears to be what has happened here. It is my assessment that submitting unverified material generated by AI, is not consistent with a party’s duty to use their best endeavours to assist the Tribunal to achieve its statutory objectives. To expect the Tribunal to read and consider material which a party does not know is authentic impedes the Tribunal’s attempts to provide a mechanism of review that ensures that applications are resolved as quickly and with as little expense as a proper consideration of the issues permits.
  3. Nothing in the remainder of the applicant’s submissions altered my view that the untaxed element of the benefit should be taxed under Subdivision 301-B."
Valu v. Minister for Immigration and Multicultural Affairs (Australia) 31 January 2025 Lawyer ChatGPT
Fabricated Case Law (17)
False Quotes Exhibits or Submissions (8)
Referral to Legal Services Commissioner

AI Use

Counsel used ChatGPT to generate a summary of cases for a submission, which included fictitious Federal Court decisions and invented quotes from a Tribunal ruling. He inserted this output into the brief without verifying the sources. Counsel later admitted this under affidavit, citing time pressure, health issues, and unfamiliarity with AI's risks. He noted that guidance from the NSW Supreme Court was only published after the filing.

Hallucination Details

The 25 October 2024 submission cited at least 16 completely fabricated decisions (e.g. Murray v Luton [2001] FCA 1245, Bavinton v MIMA [2017] FCA 712) and included supposed excerpts from the AAT’s ruling that did not appear in the actual decision. The Court and Minister’s counsel were unable to verify any of the cited cases or quotes.

Ruling/Sanction

Judge Skaros ordered referral to the OLSC under the Legal Profession Uniform Law (NSW) 2014, noting breaches of rules 19.1 and 22.5 of the Australian Solicitors’ Conduct Rules. The Court accepted Counsel’s apology and health-related mitigation but found that the conduct fell short of professional standards and posed systemic risks given increasing AI use in legal practice.

Key Judicial Reasoning

While acknowledging that Counsel corrected the record and showed contrition, the Court found that the damage—including wasted judicial resources and delay to proceedings—had already occurred. The ex parte email submitting corrected materials, without notifying opposing counsel, further compounded the breach. Given the public interest in safeguarding the integrity of litigation amidst growing AI integration, referral to the OLSC was deemed necessary, even without naming Counsel in the judgment.

Gonzalez v. Texas Taxpayers and Research Association W.D. Texas (USA) 29 January 2025 Lawyer LexisNexis's AI
Fabricated Case Law (4)
Misrepresented Case Law (1)
Plaintiff's response was stricken and monetary sanctions were imposed. 3961 USD

In the case of Gonzalez v. Texas Taxpayers and Research Association, the court found that Plaintiff's counsel, John L. Pittman III, included fabricated citations, miscited cases, and misrepresented legal propositions in his response to a motion to dismiss. Pittman initially denied using AI but later admitted to using LexisNexis's AI citation generator. The court granted the defendant's motion to strike the plaintiff's response and imposed monetary sanctions on Pittman, requiring him to pay $3,852.50 in attorney's fees and $108.54 in costs to the defendant. The court deemed this an appropriate exercise of its inherent power due to the abundance of technical and substantive errors in the brief, which inhibited the defendant's ability to efficiently respond.

Hanna v Flinders University South Australia (Australia) 29 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Olsen v Finansiel Stabilitet High Court (UK) 25 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (2), Legal Norm (2)
No contempt finding, but may bear on costs
Fora Financial Asset Securitization v. Teona Ostrov Public Relations NY SC (USA) 24 January 2025 Lawyer Implied
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Case Law (1)
Warning
Body by Michael Pty Ltd and Industry Innovation and Science Australia Administrative Review Tribunal (Australia) 24 January 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
False Quotes Doctrinal Work (1)
Misrepresented Legal Norm (4)
Fake references withdrawn before the hearing

"Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal.

The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence."

Strike 3 Holdings LLC v. Doe C.D. California (USA) 22 January 2025 Lawyer Unidentified
Fabricated Case Law (3)

Key Judicial Reasoning

Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law.

Arajuo v. Wedelstadt et al E.D. Wisconsin (USA) 22 January 2025 Lawyer Unidentified
Fabricated Case Law (1)
Warning

AI Use

Counsel admitted using a “new legal research medium”, which appears to be a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities.

Hallucination Details

The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place.

Ruling/Sanction

Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated.

Key Judicial Reasoning

The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable.

Candice Dias v Angle Auto Finance Fair Work Commission (Australia) 20 January 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
United States v. Hayes E.D. Cal. (USA) 17 January 2025 Federal Defender Unidentified One fake case citation with fabricated quotation Formal Sanction Imposed + Written Reprimand

AI Use

Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.

Hallucination Details

Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.

Ruling/Sanction

The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted.

Key Judicial Reasoning

The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record.

Source: Volokh
Strong v. Rushmore Loan Management Services D. Nebraska (USA) 15 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions
Kohls v. Ellison Minnesota (USA) 10 January 2025 Expert GPT-4o Fake Academic Citations Expert Declaration Excluded

AI Use

Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.

Hallucination Details

The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.

Ruling/Sanction

Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.

Key Judicial Reasoning

The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted.

Source: Volokh
O’Brien v. Flick and Chamberlain S.D. Florida (USA) 10 January 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations

AI Use

Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”

Ruling/Sanction

The court dismissed the case with prejudice on dual grounds:

  • The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
  • O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation. While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both substantive and disciplinary remedy.

Key Judicial Reasoning

Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.

Mavundla v. MEC High Court (South Africa) 8 January 2025 Lawyer Implied
Fabricated Case Law (9)
Misrepresented Case Law (4), Legal Norm (2)
Leave to appeal dismissed with costs; referral to Legal Practice Council

AI Use

The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspects ChatGPT or a similar tool was the source. The judge even ran prompts through ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers.

Hallucination Details

Fabricated or misattributed cases included:

  • Pieterse v. The Public Protector (no such case exists at cited location)
  • Burgers v. The Executive Committee..., Dube v. Schleich, City of Cape Town v. Aon SA, Makro Properties v. Raal, Standard Bank v. Lethole — none found in SAFLII or major reporters
  • Citations were often invented or misattributed to irrelevant decisions (e.g., a Competition Tribunal merger approval cited as support for service rules)

The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points.

Ruling/Sanction

  • Application for leave to appeal dismissed in full
  • Legal representatives ordered to pay costs of the 22 and 25 September 2024 appearances de bonis propriis
  • Judgment referred to the Legal Practice Council
  • Judge emphasized that the conduct went beyond the leniency shown in Parker v. Forsyth, as it involved unverified submissions in a signed court filing and then doubling down during oral argument.

Key Judicial Reasoning

Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.

Buckeye Trust v. PCIT (India) 30 December 2024 Judge Implied
Misrepresented Case Law (2), Legal Norm (2)
Outdated Advice Repealed Law (1)
Judgment was retracted and case re-heard

The judge seemingly reproduced hallucinated authorities invoked by one counsel. The judgment was later reportedly withdrawn.

Al-Hamim v. Star Hearthstone Colorado (USA) 26 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (8)
No Sanction (due to pro se, contrition, etc.); Warning of future sanctions.

AI Use

Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.

Hallucination Details

The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.

Ruling/Sanction

Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.

Key Judicial Reasoning

Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.

Duarte v. City of Richmond British Columbia Human Rights Tribunal (Canada) 18 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning

Nathan Duarte, a pro se litigant, filed a complaint against the City of Richmond alleging discrimination based on political beliefs. During the proceedings, Duarte cited three cases to support his claim that union affiliation is a protected characteristic. However, neither the City nor the Tribunal could locate these cases, leading to the suspicion that they were fabricated, possibly by a generative AI tool. The court held:

"While it is not necessary for me to determine if Mr. Duarte intended to mislead the Tribunal, I cannot rely on these “authorities” he cites in his submission. At the very least, Mr. Duarte has not followed the Tribunal’s Practice Direction for Legal Authorities, which requires parties, if possible, to provide a neutral citation so other participants can access a copy of the authority without cost. Still, I am compelled to issue a caution to parties who engage the assistance of generative AI technology while preparing submissions to the Tribunal, in case that is what occurred here. AI tools may have benefits. However, such applications have been known to create information, including case law, which is not derived from real or legitimate sources. It is therefore incumbent on those using AI tools to critically assess the information that it produces, including verifying the case citations for accuracy using legitimate sources. Failure to do so can have serious consequences. For lawyers, such errors have led to disciplinary action by the Law Society: see for example, Zhang v Chen, 2024 BCSC 285. Deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, could also form the basis for an award of costs under s. 37(4) of the Code. The integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility."

Letts v. Avidien Technologies E.D. N. Carolina (USA) 16 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (2)
Warning
Hamdan v. the National Insurance Institute Magistrate Court (Israel) 12 December 2024 Lawyer Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Petition dismissed; ₪1,000 costs imposed for procedural misconduct and reliance on fictitious case law 1000 ILS

AI Use

Counsel admitted the fictitious citations originated from an “online legal database commonly used by lawyers.” Though the platform is unnamed, the court ruled out the standard legal database Nevo and concluded the “source of the hallucination is unclear.” Counsel apologized and claimed no intent to mislead.

Hallucination Details

The motion cited ten fabricated decisions—each with full party names, court locations, file numbers, and dates—purportedly showing that indirect child support debts owed to the National Insurance Institute could be discharged in bankruptcy. The court could not find a single one in any judicial database and ordered counsel to produce them. When he failed, he admitted they were inauthentic. The only real cited case (Skok) did not support the petitioner’s position.

Ruling/Sanction

The court dismissed the petition after finding that: (i) the cited decisions were fabricated; (ii) the only valid case did not support the argument; and (iii) under Israel’s Bankruptcy Ordinance, child support debts are not dischargeable by default. Despite the state’s failure to respond, the judge ruled sua sponte and imposed ₪1,000 in costs for procedural abuse.

Key Judicial Reasoning

Judge Saharai held that even if the hallucinated cases were cited inadvertently, their submission constituted a grave failure to meet professional obligations. He emphasized that a court cannot function when presented with legal fictions dressed up as precedent. The decision cited the attorney’s duty under section 54 of the Bar Law (1961) and ethics rules 2 and 34.

Mojtabavi v. Blinken C.D. California (USA) 12 December 2024 Pro Se Litigant Unidentified Multiple fake cases Case dismissed with prejudice
John Coulsto et al. v Elliott The High Court (Ireland) 10 December 2024 Pro Se Litigant Implied
Outdated Advice Repealed Law (1)
Court rejected the submission as fallacious

Defendants' written submissions (not argued at trial) advanced that s.19 of the Conveyancing Act 1881 had been repealed by the 2009 Act, undermining the power to appoint a receiver. The court found the argument fallacious, noted s.19 was reinstated by the 2013 Act, and observed the submissions were likely produced by a generative AI or an unqualified adviser.

Crypto Open Patent Alliance v. Wright (1) High Court (UK) 6 December 2024 Pro Se Litigant Unknown
Fabricated Case Law (1), Exhibits or Submissions (1)
False Quotes Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
No formal sanction; fabricated citations disregarded

AI Use

Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language—or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT."

Later on, the Court of Appeal declined permission to appeal (finding that "Dr Wright’s grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations"). This led the Court to order him to pay costs of 100,000 GBP.

Carlos E. Gutierrez v. In Re Noemi D. Gutierrez Fl. 3rd District CA (USA) 4 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature

AI Use

The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”

Hallucination Details

The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.

Ruling/Sanction

Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney. Clerk directed to reject noncompliant future filings.

Key Judicial Reasoning

The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.

Rubio v. District of Columbia DHS D.C. DC (USA) 3 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties

AI Use

Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”

Hallucination Details

The court could not locate the following cited cases:

  • Ford v. District of Columbia, 70 F.3d 231 (D.C. Cir. 1995)
  • Davis v. District of Columbia, 817 A.2d 1234 (D.C. 2003)
  • Ward v. District of Columbia, 818 A.2d 27 (D.C. 2003)
  • Reese v. District of Columbia, 37 A.3d 232 (D.C. 2012)

These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.

Ruling/Sanction

The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.

Key Judicial Reasoning

The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.

Gauthier v. Goodyear Tire & Rubber Co. E.D. Tex. (USA) 25 November 2024 Lawyer Claude
Fabricated Case Law (2)
False Quotes Case Law (7)
Monetary fine + Mandatory AI-related CLE Course + Disclosure to Client 2000 USD

AI Use

Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.

Hallucination Details

Cited two completely nonexistent cases. Also fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, Burton v. Freescale, among others. Several "quotes" did not appear anywhere in the cited opinions.

Ruling/Sanction

The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.

Key Judicial Reasoning

The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct.

Leslie v. IQ Data International N.D. Georgia (USA) 24 November 2024 Pro Se Litigant Implied Citation to nonexistent authorities Background action dismissed with prejudice, but no monetary sanction
Wikeley v Kea Investments Ltd (New Zealand) 21 November 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Referred to guidance about AI
Monster Energy Company v. Pacific Smoke International Inc. Canadian Intellectual Property Office (Canada) 20 November 2024 Lawyer
Fabricated Case Law (1)
The fabricated citation was disregarded by the court.

In a trademark opposition case between Monster Energy Company and Pacific Smoke International Inc., the Applicant, Pacific Smoke, cited a non-existent case, 'Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7', in support of its argument. This was identified as an AI hallucination by the court. The court disregarded this citation and reminded the Applicant of the seriousness of relying on false citations, whether accidental or AI-generated.

Berry v. Stewart D. Kansas (USA) 14 November 2024 Lawyer Unidentified
Fabricated Case Law (1), Exhibits or Submissions (1)
At hearing, Counsel pledged to reimburse other side and his client

In the November 2024 Show Cause Order, Judge Robinson noted that: "First, the briefing does not cite the forum-selection clause from the contract between the parties; instead, it cites and quotes a forum-selection clause that appears nowhere in the papers submitted by the parties. Second, Defendant’s reply brief includes a citation, Hogan v. Allstate Insurance Co., No. 19-CV-00262-JPM, 2020 WL 1882334 (D. Kan. Apr. 15, 2020), in which the court purportedly “transferred a case to the Southern District of Texas because the majority of the witnesses were located in Texas. The court found that the burden on the witnesses outweighed the convenience of litigating the case in Kansas.” As far as the Court can tell, this case does not exist. The Westlaw database number pulls up no case; the Court has found no case in CM/ECF between the parties “Hogan” and “Allstate Insurance Co.” Moreover, docket numbers in this district have at least four digits—not three—after the case-type designation, and there is no judge in this district with the initials “JPM.”"

During the show cause hearing (Transcript), Counsel apologised and pledged to reimburse the other side's costs, as well as his client's.

Kaur v RMIT SC Victoria (CA) (Australia) 11 November 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
CARB 188903M-2024 (Calgary Assessment Review Board) Calgary ARB (Canada) 7 November 2024 Lawyer Implied
Fabricated Case Law (3)
Source: Courtready
Vargas v. Salazar S.D. Texas (USA) 1 November 2024 Pro Se Litigant Implied Fake citations Plaintiff ordered to refile submissions without fake citations
Rajabi c. Lassalle TAL Montréal (Canada) 1 November 2024 Pro Se Litigant Implied
Fabricated Case Law (2)
Source: Courtready
Churchill Funding v. 732 Indiana SC Cal (USA) 31 October 2024 Lawyer Implied
Fabricated Case Law (1)
Misrepresented Case Law (1), Legal Norm (1)
Order to show cause
Source: Volokh
Mortazavi v. Booz Allen Hamilton, Inc. C.D. Cal. (USA) 30 October 2024 Lawyer Unidentified
Fabricated Case Law (1)
False Quotes Exhibits or Submissions (1)
$2,500 Monetary Sanction + Mandatory Disclosure to California State Bar

AI Use

Plaintiff’s counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.

Hallucination Details

Cited a fabricated case (the ruling does not name the specific case). Included fabricated quotations from the complaint, suggesting nonexistent factual allegations.

Ruling/Sanction

The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations.

Key Judicial Reasoning

Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions.

Thomas v. Commissioner of Internal Revenue United States Tax Court (USA) 23 October 2024 Lawyer, Paralegal Implied
Misrepresented Case Law (3)
Pretrial Memorandum stricken

The lawyer for the petitioner admitted to not reviewing the memorandum, which had been prepared by a paralegal. The court deemed the Pretrial Memorandum stricken but did not impose a monetary penalty, considering the economic situation of the petitioner and the lawyer's service to a client who might otherwise be unrepresented. It was also pertinent that the law stated was accurate, even if the citations were wrong.

Matter of Weber NY County Court (USA) 10 October 2024 Expert MS Copilot Unverifiable AI Calculation Process AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established.

AI Use

In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report.

Hallucination Details

The issue was not fabricated citations, but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed that the use of AI tools was generally accepted in his field but offered no evidence of this.

Ruling/Sanction

The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible.

Key Judicial Reasoning

The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable.

Iovino v. Michael Stapleton Associates, Ltd. Western Virginia (USA) 10 October 2024 Lawyer Claude, Westlaw, LexisNexis
Fabricated Case Law (2)
False Quotes Case Law (2)
Misrepresented Case Law (1)
No sanction, but hearing transcript sent to bar authorities

Show cause order is here. Counsel responded to Show Cause order in this document. Show cause hearing transcript is here.

Jones v. Simploy Missouri CA (USA) 24 September 2024 Pro Se Litigant Implied Fake citations Warning

The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court.

We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54.

We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity."

Martin v. Hawaii D. Hawaii (USA) 20 September 2024 Pro Se Litigant Unidentified
Fabricated Case Law (2)
False Quotes Case Law (2)
Misrepresented Legal Norm (2)
Warning, and Order to file further submissions with Declaration
Anonymous Spanish Lawyer Tribunal Constitucional (Spain) 9 September 2024 Lawyer Unidentified 19 fabricated Constitutional Court decisions Formal Reprimand (Apercibimiento) + Referral to Barcelona Bar for Disciplinary Action

AI Use

The Court noted that the false citations could stem from AI, disorganized database use, or invention. Counsel claimed a database error but provided no evidence. The Court found the origin irrelevant: verification duty lies with the submitting lawyer.

Hallucination Details

Nineteen separate fabricated citations to fictional Constitutional Court judgments. Fake quotations falsely attributed to those nonexistent decisions. Cited to falsely bolster claims of constitutional relevance in an amparo.

Ruling/Sanction

The Constitutional Court unanimously found that the inclusion of nineteen fabricated citations constituted a breach of the respect owed to the Court and its judges under Article 553.1 of the Spanish Organic Law of the Judiciary. Issued a formal warning (apercibimiento) rather than a fine due to absence of prior offenses. Referred the matter to the Barcelona Bar for possible disciplinary proceedings.

Key Judicial Reasoning

The Court stressed that even absent express insults, fabricating authority gravely disrespects the judiciary’s function. Irrespective of whether AI was used or a database error occurred, the professional duty of diligent verification was breached. The Court noted that fake citations disrupt the court’s work both procedurally and institutionally.

Transamerica Life v. Williams D. Arizona (USA) 6 September 2024 Pro Se Litigant Implied
Fabricated Case Law (4)
Misrepresented Legal Norm (1)
Warning