AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (831 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]

[2] Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you have any questions about the database, a FAQ is available here.
And if you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Download CSV
Last updated: 5 May 2026

Case Court / Jurisdiction Date ▼ Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Nichols v. Walmart S.D. Georgia (USA) 23 April 2025 Pro Se Litigant Implied Multiple fictitious legal citations Case dismissed for lack of subject matter jurisdiction and as a Rule 11 sanction for bad-faith submission of fabricated legal authorities

Sanction upheld on appeal (see here).

Brown v. Patel et al. S.D. Texas (USA) 22 April 2025 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Misrepresented Case Law (2)
Warning

Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences.

Goshen Multiservice Limited v Accuro Environmental Limited Employment Tribunals (London, South) (UK) 22 April 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Rowe v National Australia Bank Ltd South Australia (Australia) 17 April 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Ferris v. Amazon.com Services N.D. Mississippi (USA) 16 April 2025 Pro Se Litigant ChatGPT 7 fictitious cases Plaintiff ordered to pay Defendant’s reasonable costs related to addressing the fabricated citations

AI Use

Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications.

Hallucination Details

The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff’s intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon’s motion to dismiss.

Ruling/Sanction

The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order.

Key Judicial Reasoning

Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka (“The lie made into the rule of the world”), the court lamented the rise of “a post-truth world” and framed Ferris as an “avatar” of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11.

Sims v. Souily-Lefave D. Nevada (USA) 15 April 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning
Graciela Dela Torre v. Davies Life & Health, Inc., et al. N.D. Illinois (USA) 11 April 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Misrepresented Case Law (5), Legal Norm (2)
Bischoff v. South Carolina Department of Education Admin Law Court, S.C. (USA) 10 April 2025 Pro Se Litigant Implied Fake citations Warning

The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10(Supp. 2024)."

Daniel Jaiyong An v. Archblock, Inc. Delaware Chancery (USA) 3 April 2025 Pro Se Litigant Implied
False Quotes Case Law (2)
Misrepresented Case Law (2)
Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions

AI Use

The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification.

Hallucination Details

Examples included:

  • Terramar Retail Centers, LLC v. Marion #2-Seaport Trust – cited for discovery principles it did not contain
  • Deutsch v. ZST Digital Networks, Inc. – quoted for a sentence not found in the opinion
  • Production Resources Group, LLC v. NCT Group, Inc. – attributed with a quote that appears nowhere in the case or legal databases

Court verified via Westlaw that some phrases returned only the petitioner’s motion as a result.

Ruling/Sanction

Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI.

Key Judicial Reasoning

The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity.

Zzaman v. HMRC (UK) 3 April 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Misrepresented Case Law (7), Legal Norm (2)
Warning

Plaintiff had disclosed the use of AI in preparing his statement of case. The court noted:

"29. However, our conclusion was that Mr Zzaman’s statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman’s statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate.

Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal).

There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court’s time and that of opposing parties."

Bangholme Investments Pty Ltd v Greater Dandenong CC Victorian CAT (Australia) 3 April 2025 Pro Se Litigant Unidentified
Misrepresented Legal Norm (1)

Alan Hood relied on an AI search that inferred the Council was required to notify objectors. The Tribunal found that inference 'plainly incorrect', noting Hood had received the requisite notice and should have read the documents; the Tribunal nevertheless exercised discretion to join him.

Boggess v. Chamness E.D. Texas (USA) 1 April 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Argument ignored
Source: Jesse Schaefer
Sanders v. United States Court of Federal Claims (USA) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (4)
Misrepresented Case Law (1), Legal Norm (1)
Warning

AI Use

The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred.

Hallucination Details

Plaintiff cited:

  • Tucker v. United States, 24 Cl. Ct. 536 (1991) – does not exist
  • Fargo v. United States, 184 F.3d 1096 (Fed. Cir. 1999) – fabricated citation pointing to an unrelated Ninth Circuit case
  • Bristol Bay Native Corporation v. United States, 87 Fed. Cl. 122 (2009) – fictional
  • Quantum Construction, Inc. v. United States, 54 Fed. Cl. 432 (2002) – nonexistent
  • Hunt Building Co., LLC v. United States, 61 Fed. Cl. 243 (2004) – real case misused; contains no mention of unjust enrichment

Ruling/Sanction

The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions.

Key Judicial Reasoning

Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law.

McKeown v. Paycom Payroll LLC W.D. Oklahoma (USA) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Submission stricken out, and warning

AI Use

Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent.

Hallucination Details

Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction.

Ruling/Sanction

The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal.

SQBox Solutions Ltd. v. Oak BC Civil Resolution Tribunal (Canada) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
False Quotes Legal Norm (2)
Misrepresented Case Law (4)
Litigant lost on merits

"By relying on inaccurate and false AI submissions, Mr. Oak hurts his own case. I understand that Mr. Oak himself might not be aware that the submissions are misleading, but they are his submissions and he is responsible for them."

Source: Steve Finlay
AQ v. BT CRT (Canada) 28 March 2025 Pro Se Litigant Implied
Fabricated Case Law (2), Legal Norm (1)
Misrepresented Case Law (1), Legal Norm (1)
Arguments ignored
LYJ v. Occupational Therapy Board of Australia Queensland (Australia) 26 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
No sanction; Fabrication noted; Warning issued regarding AI use

AI Use

The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material.

Ruling/Sanction

The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process.

Key Judicial Reasoning

Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources."

Kruglyak v. Home Depot U.S.A., Inc. W.D. Virginia (USA) 25 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
Misrepresented Case Law (1)
No monetary sanctions; Warning

AI Use

Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources.

Hallucination Details

The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones.

Ruling/Sanction

Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur.

Key Judicial Reasoning

The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action.

Buckner v. Hilton Global W.D. Kentucky (USA) 21 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
Warning

In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal."

Williams v. Capital One Bank D. DC (USA) 18 March 2025 Pro Se Litigant CoCounsel
Fabricated Case Law (1)
Misrepresented Case Law (1)
Case dismissed with prejudice for failure to state a claim. No monetary sanction imposed, but the court issued a formal warning

AI Use

While not formally admitted, Plaintiff’s opposition brief referred to “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission.

Hallucination Details

At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output—but the burden rested with the party filing them.

Ruling/Sanction

The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had full opportunity to amend. Though it did not impose monetary sanctions, it issued a strong warning and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations.

Stevens v. BJC Health System Missouri CA (USA) 18 March 2025 Pro Se Litigant Implied 6 fabricated citations Warning
Alkuda v. McDonald Hopkins Co., L.P.A. N.D. Ohio (USA) 18 March 2025 Pro Se Litigant Implied Fake Citations Warning
LMN v. STC (No. 2) (New Zealand) 17 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning
Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club (Ireland) 13 March 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1), other (1)
Misrepresented Legal Norm (4), other (1)
Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI

AI Use

Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.

Hallucination Details

Inappropriate invocation of "subornation to perjury," a term foreign to Irish law. Constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to judicial review context. Assertions framed in hyperbolic, sensationalist terms without factual or legal basis. General incoherence of pleadings, consistent with AI-generated pseudo-legal text

Ruling/Sanction

The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process.

Mark Lillard v. Offit Kurman, P.A. SC Delaware (USA) 12 March 2025 Pro Se Litigant Unidentified
False Quotes Case Law (2)
Misrepresented Case Law (2)
AI-use certification required for future filings
Arnaoudoff v. Tivity Health Incorporated D. Arizona (USA) 11 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (3)
Misrepresented Case Law (1)
Court ignored fake citations and granted motion to correct the record
Sheets v. Presseller M.D. Florida (USA) 11 March 2025 Pro Se Litigant Implied Allegations by the other party that brief was AI-generated Warning
210S LLC v. Di Wu Hawaii (USA) 11 March 2025 Pro Se Litigant Implied Fictitious citation and misrepresentation Warning
Yu Hon Tong Thomas v Centaline Property Agency High Court (Hong Kong) 26 February 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1)
Misrepresented Case Law (1)
Merz v. Kalama W.D. Washington (USA) 25 February 2025 Pro Se Litigant Unidentified
Misrepresented Legal Norm (2)
Saxena v. Martinez-Hernandez et al. D. Nev. (USA) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
False Quotes Case Law (1)
Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor

AI Use

The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them.

Hallucination Details

  • Spokane v. Douglass turned out to conflate unrelated decisions and misused citations from other cases
  • Hummel v. State could not be found in any Nevada or national database; citation matched an unrelated Oregon case

The court found no plausible explanation for these citations other than AI generation or outright fabrication.

Ruling/Sanction

The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.

Key Judicial Reasoning

Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief".

Re Nicholson Ontario SCJ (Canada) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
Geismayr v. The Owners, Strata Plan KAS 1970 Civil Resolution Tribunal (Canada) 14 February 2025 Pro Se Litigant Copilot
Fabricated Case Law (9)
Misrepresented Case Law (1)
Citations ignored
Goodchild v State of Queensland Queensland IRC (Australia) 13 February 2025 Pro Se Litigant "Internet searches"
Fabricated Case Law (5)
Relevant submissions ignored

"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents."

Hanna v Flinders University South Australia (Australia) 29 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Olsen v Finansiel Stabilitet High Court (UK) 25 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (2), Legal Norm (2)
No contempt, but might bear out on costs
Body by Michael Pty Ltd and Industry Innovation and Science Australia Administrative Review Tribunal (Australia) 24 January 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
False Quotes Doctrinal Work (1)
Misrepresented Legal Norm (4)
Fake references withdrawn before the hearing

"Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal.

The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence."

Candice Dias v Angle Auto Finance Fair Work Commission (Australia) 20 January 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
Strong v. Rushmore Loan Management Services D. Nebraska (USA) 15 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions
O’Brien v. Flick and Chamberlain S.D. Florida (USA) 10 January 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations

AI Use

Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”

Ruling/Sanction

The court dismissed the case with prejudice on dual grounds:

  • The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
  • O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation. While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both substantive and disciplinary remedy.

Key Judicial Reasoning

Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.

Al-Hamim v. Star Hearthstone Colorado (USA) 26 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (8)
No Sanction (due to pro se, contrition, etc.); Warning of future sanctions.

AI Use

Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.

Hallucination Details

The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.

Ruling/Sanction

Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.

Key Judicial Reasoning

Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.

Duarte v. City of Richmond British Columbia Human Rights Tribunal (Canada) 18 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning

Nathan Duarte, a pro se litigant, filed a complaint against the City of Richmond alleging discrimination based on political beliefs. During the proceedings, Duarte cited three cases to support his claim that union affiliation is a protected characteristic. However, neither the City nor the Tribunal could locate these cases, leading to the suspicion that they were fabricated, possibly by a generative AI tool. The court held:

"While it is not necessary for me to determine if Mr. Duarte intended to mislead the Tribunal, I cannot rely on these “authorities” he cites in his submission. At the very least, Mr. Duarte has not followed the Tribunal’s Practice Direction for Legal Authorities, which requires parties, if possible, to provide a neutral citation so other participants can access a copy of the authority without cost. Still, I am compelled to issue a caution to parties who engage the assistance of generative AI technology while preparing submissions to the Tribunal, in case that is what occurred here. AI tools may have benefits. However, such applications have been known to create information, including case law, which is not derived from real or legitimate sources. It is therefore incumbent on those using AI tools to critically assess the information that it produces, including verifying the case citations for accuracy using legitimate sources. Failure to do so can have serious consequences. For lawyers, such errors have led to disciplinary action by the Law Society: see for example, Zhang v Chen, 2024 BCSC 285. Deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, could also form the basis for an award of costs under s. 37(4) of the Code. The integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility."

Letts v. Avidien Technologies E.D. N. Carolina (USA) 16 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (2)
Warning
Mojtabavi v. Blinken C.D. California (USA) 12 December 2024 Pro Se Litigant Unidentified Multiple fake cases Case dismissed with prejudice
John Coulsto et al. v Elliott High Court (Ireland) 10 December 2024 Pro Se Litigant Implied
Outdated Advice Repealed Law (1)
Court rejected the submission as fallacious

The defendants' written submissions (not argued at trial) contended that s.19 of the Conveyancing Act 1881 had been repealed by the 2009 Act, undermining the power to appoint a receiver. The court found the argument fallacious, noting that s.19 had been reinstated by the 2013 Act, and observed that the submissions were likely produced by generative AI or by an unqualified adviser.

Crypto Open Patent Alliance v. Wright (1) High Court (UK) 6 December 2024 Pro Se Litigant Unknown
Fabricated Case Law (1), Exhibits or Submissions (1)
False Quotes Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
No formal sanction; fabricated citations disregarded

AI Use

Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the cited authorities either did not contain the quoted language or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT."

Later, the Court of Appeal refused permission to appeal, finding that "Dr Wright’s grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations". It also ordered him to pay £100,000 in costs.

Carlos E. Gutierrez v. In Re Noemi D. Gutierrez Fl. 3rd District CA (USA) 4 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature

AI Use

The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”

Hallucination Details

The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.

Ruling/Sanction

Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature by a Florida-barred attorney. Clerk directed to reject noncompliant future filings.

Key Judicial Reasoning

The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.

Rubio v. District of Columbia DHS D.C. DC (USA) 3 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties

AI Use

Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”

Hallucination Details

The court could not locate the following cited cases:

  • Ford v. District of Columbia, 70 F.3d 231 (D.C. Cir. 1995)
  • Davis v. District of Columbia, 817 A.2d 1234 (D.C. 2003)
  • Ward v. District of Columbia, 818 A.2d 27 (D.C. 2003)
  • Reese v. District of Columbia, 37 A.3d 232 (D.C. 2012)

These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.

Ruling/Sanction

The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.

Key Judicial Reasoning

The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.

Leslie v. IQ Data International N.D. Georgia (USA) 24 November 2024 Pro Se Litigant Implied Citation to nonexistent authorities Background action dismissed with prejudice, but no monetary sanction
Wikeley v Kea Investments Ltd (New Zealand) 21 November 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Referred to guidance about AI