AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (594 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material. Examples of media coverage include:
- M. Hiltzik, AI 'hallucinations' are a growing problem for the legal profession (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ("He generates pleadings with AI, and catalogs 160 that have 'hallucinated' since 2023") (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you know of a case that should be included, feel free to contact me. (Readers may also be interested in this project regarding AI use in academic papers.)

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.


Case Court / Jurisdiction Date ▼ Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Hanna v Flinders University South Australia (Australia) 29 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Olsen v Finansiel Stabilitet High Court (UK) 25 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (2), Legal Norm (2)
No contempt finding, but the issue may bear on costs
Body by Michael Pty Ltd and Industry Innovation and Science Australia Administrative Review Tribunal (Australia) 24 January 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
False Quotes Doctrinal Work (1)
Misrepresented Legal Norm (4)
Fake references withdrawn before the hearing

"Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal.

The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence."

Candice Dias v Angle Auto Finance Fair Work Commission (Australia) 20 January 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
Strong v. Rushmore Loan Management Services D. Nebraska (USA) 15 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions
O’Brien v. Flick and Chamberlain S.D. Florida (USA) 10 January 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations

AI Use

Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”

Ruling/Sanction

The court dismissed the case with prejudice on dual grounds:

  • The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
  • O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation. While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both substantive and disciplinary remedy.

Key Judicial Reasoning

Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.

Al-Hamim v. Star Hearthstone Colorado (USA) 26 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (8)
No Sanction (due to pro se, contrition, etc.); Warning of future sanctions.

AI Use

Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.

Hallucination Details

The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.

Ruling/Sanction

Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.

Key Judicial Reasoning

Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.

Duarte v. City of Richmond British Columbia Human Rights Tribunal (Canada) 18 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning

Nathan Duarte, a pro se litigant, filed a complaint against the City of Richmond alleging discrimination based on political beliefs. During the proceedings, Duarte cited three cases to support his claim that union affiliation is a protected characteristic. However, neither the City nor the Tribunal could locate these cases, leading to the suspicion that they were fabricated, possibly by a generative AI tool. The court held:

"While it is not necessary for me to determine if Mr. Duarte intended to mislead the Tribunal, I cannot rely on these “authorities” he cites in his submission. At the very least, Mr. Duarte has not followed the Tribunal’s Practice Direction for Legal Authorities, which requires parties, if possible, to provide a neutral citation so other participants can access a copy of the authority without cost. Still, I am compelled to issue a caution to parties who engage the assistance of generative AI technology while preparing submissions to the Tribunal, in case that is what occurred here. AI tools may have benefits. However, such applications have been known to create information, including case law, which is not derived from real or legitimate sources. It is therefore incumbent on those using AI tools to critically assess the information that it produces, including verifying the case citations for accuracy using legitimate sources. Failure to do so can have serious consequences. For lawyers, such errors have led to disciplinary action by the Law Society: see for example, Zhang v Chen, 2024 BCSC 285. Deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, could also form the basis for an award of costs under s. 37(4) of the Code. The integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility."

Letts v. Avidien Technologies E.D. N. Carolina (USA) 16 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (2)
Warning
Mojtabavi v. Blinken C.D. California (USA) 12 December 2024 Pro Se Litigant Unidentified Multiple fake cases Case dismissed with prejudice
John Coulsto et al. v Elliott The High Court (Ireland) 10 December 2024 Pro Se Litigant Implied
Outdated Advice Repealed Law (1)
Court rejected the submission as fallacious

Defendants' written submissions (not argued at trial) advanced that s.19 of the Conveyancing Act 1881 had been repealed by the 2009 Act, undermining the power to appoint a receiver. The court found the argument fallacious, noted s.19 was reinstated by the 2013 Act, and observed the submissions were likely produced by a generative AI or an unqualified adviser.

Crypto Open Patent Alliance v. Wright (1) High Court (UK) 6 December 2024 Pro Se Litigant Unknown
Fabricated Case Law (1), Exhibits or Submissions (1)
False Quotes Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
No formal sanction; fabricated citations disregarded

AI Use

Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language—or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT."

Later on, the Court of Appeal declined permission to appeal (finding that "Dr Wright’s grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations"). This led the Court to order him to pay costs of 100,000 GBP.

Carlos E. Gutierrez v. In Re Noemi D. Gutierrez Fl. 3rd District CA (USA) 4 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature

AI Use

The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”

Hallucination Details

The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.

Ruling/Sanction

Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney. Clerk directed to reject noncompliant future filings.

Key Judicial Reasoning

The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.

Rubio v. District of Columbia DHS D.D.C. (USA) 3 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties

AI Use

Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”

Hallucination Details

The court could not locate the following cited cases:

  • Ford v. District of Columbia, 70 F.3d 231 (D.C. Cir. 1995)
  • Davis v. District of Columbia, 817 A.2d 1234 (D.C. 2003)
  • Ward v. District of Columbia, 818 A.2d 27 (D.C. 2003)
  • Reese v. District of Columbia, 37 A.3d 232 (D.C. 2012)

These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.

Ruling/Sanction

The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.

Key Judicial Reasoning

The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.

Leslie v. IQ Data International N.D. Georgia (USA) 24 November 2024 Pro Se Litigant Implied Citation to nonexistent authorities Background action dismissed with prejudice, but no monetary sanction
Wikeley v Kea Investments Ltd (New Zealand) 21 November 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Referred to guidance about AI
Kaur v RMIT SC Victoria (CA) (Australia) 11 November 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Vargas v. Salazar S.D. Texas (USA) 1 November 2024 Pro Se Litigant Implied Fake citations Plaintiff ordered to refile submissions without fake citations
Jones v. Simploy Missouri CA (USA) 24 September 2024 Pro Se Litigant Implied Fake citations Warning

The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court.

We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54.

We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity."

Martin v. Hawaii D. Hawaii (USA) 20 September 2024 Pro Se Litigant Unidentified
Fabricated Case Law (2)
False Quotes Case Law (2)
Misrepresented Legal Norm (2)
Warning, and Order to file further submissions with Declaration
Transamerica Life v. Williams D. Arizona (USA) 6 September 2024 Pro Se Litigant Implied
Fabricated Case Law (4)
Misrepresented Legal Norm (1)
Warning
Rule v. Braiman N.D. New York (USA) 4 September 2024 Pro Se Litigant Implied Fake citations Warning
N.E.W. Credit Union v. Mehlhorn Wisconsin C.A. (USA) 13 August 2024 Pro Se Litigant Implied At least four fictitious cases Warning

The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided.

We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing"

Mr D Rollo v. Marstons Trading Ltd Employment Tribunal (UK) 1 August 2024 Pro Se Litigant ChatGPT
Misrepresented Legal Norm (1)
Claim dismissed; AI material excluded from evidence under prior judicial order; no sanction but explicit judicial criticism

AI Use

The claimant sought to rely on a conversation with ChatGPT to show that the respondent’s claims about the difficulty of retrieving archived data were false.

Ruling/Sanction

No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence.

Key Judicial Reasoning

The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable".

Dukuray v. Experian Information Solutions S.D.N.Y. (USA) 26 July 2024 Pro Se Litigant Unidentified
Fabricated Case Law (3), Legal Norm (2)
No sanction; Formal Warning Issued

AI Use

Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation.

Hallucination Details

Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues.

Ruling/Sanction

The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks.

Key Judicial Reasoning

Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties’ and courts' resources. Plaintiff was formally warned, not excused.

Joe W. Byrd v. Woodland Springs HA Texas CA (USA) 25 July 2024 Pro Se Litigant Unidentified Several garbled or misattributed case citations and vague legal references No formal sanction

AI Use

The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: “We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.”

Ruling/Sanction

The court affirmed the trial court’s judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies.

Anonymous v. NYC Department of Education S.D.N.Y. (USA) 18 July 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
No sanction; Formal Warning Issued

AI Use

The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI.

Hallucination Details

Several fake citations identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious.

Ruling/Sanction

No sanctions imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency.

Key Judicial Reasoning

The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. Cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct.

Lakaev v McConkey Supreme Court of Tasmania (Australia) 12 July 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Appeal dismissed for want of prosecution

The appellant's submissions included a misleading reference to a High Court case, De L v Director-General, NSW Department of Community Services, misrepresenting it as concerning false testimony, which was not the case's subject matter, and a fabricated reference to Hewitt v Omari [2015] NSWCA 175, which does not exist. The appeal was dismissed in light of the lack of progress and the potential prejudice to the respondent.

Zeng v. Chell S.D. New York (USA) 9 July 2024 Pro Se Litigant Implied Fabricated citations Warning
Dowlah v. Professional Staff Congress NY SC (USA) 30 May 2024 Pro Se Litigant Unidentified Several non-existent cases Caution to plaintiff
Robert Lafayette v. Blueprint Basketball et al Vermont SC (USA) 26 April 2024 Pro Se Litigant Implied
Fabricated Case Law (2)
Order to Show Cause
Michael Cohen Matter SDNY (USA) 20 March 2024 Pro Se Litigant Google Bard 3 fake cases No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied

AI Use

Michael Cohen, former lawyer to Donald Trump and by then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.

Hallucination Details

Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.

Ruling/Sanction

Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.

Key Judicial Reasoning

The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification.

Martin v. Taylor County N.D. Texas (USA) 6 March 2024 Pro Se Litigant Implied
False Quotes Case Law (1)
Misrepresented Legal Norm (9)
Warning

In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time."

Finch v The Heat Group Family Court (Australia) 27 February 2024 Pro Se Litigant Implied
Fabricated Case Law (2)
Misrepresented Case Law (1)

Applicant (unrepresented) provided a list of 24 authorities claimed to show instances where MinterEllison had been restrained. Court's associate and judge found the list contained fabricated or misdescribed citations; judge characterised the provision of those authorities as an egregious instance of misleading the court but did not impose professional sanctions. Restraint application dismissed on merits.

Kruse v. Karlen Missouri CA (USA) 13 February 2024 Pro Se Litigant Unidentified At least twenty-two fabricated case citations and multiple statutory misstatements. Dismissal of Appeal + Damages Awarded for Frivolous Appeal. 10000 USD

AI Use

Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission.

Hallucination Details

Out of twenty-four total case citations in Karlen’s appellate brief:

  • Only two were genuine (and misused).
  • Twenty-two were completely fictitious.
  • Multiple Missouri statutes and procedural rules were cited incorrectly or completely misrepresented.

Ruling/Sanction

The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.

Key Judicial Reasoning

The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.

Harber v. HMRC First-tier Tribunal (UK) 4 December 2023 Pro Se Litigant Unidentified 9 Fake Tribunal Decisions No Sanction on Litigant; Warning implied for lawyers.

AI Use

Catherine Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-Tier Tribunal decisions supporting her position regarding "reasonable excuse". She stated the cases were provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI. ChatGPT was mentioned as a likely source.

Hallucination Details

The nine cited FTT decisions (names, dates, summaries provided) were found to be non-existent after checks by the Tribunal and HMRC. Though superficially plausible, the fake summaries contained anomalies such as American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant.

Ruling/Sanction

The Tribunal factually determined the cited cases were AI-generated hallucinations. It accepted Mrs. Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue. No sanctions were imposed on the litigant.

Key Judicial Reasoning

The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision finding AI-generated fake cases cited by a litigant.

Whaley v. Experian Information Solutions S.D. Ohio (USA) 16 November 2023 Pro Se Litigant Unidentified Pleadings full of irrelevant info Warning
Mescall v. Renaissance at Antiquity W.D. N. Carolina (USA) 13 November 2023 Pro Se Litigant Unidentified Unspecified concerns about AI-generated inaccuracies No sanction; Warning and Leave to Amend Granted

AI Use

Defendants alleged that portions of Plaintiff’s response to a motion to dismiss were AI-generated.

Hallucination Details

No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags.

Ruling/Sanction

Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal.

Key Judicial Reasoning

The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings.

Morgan v. Community Against Violence New Mexico (USA) 23 October 2023 Pro Se Litigant Unidentified Fake Case Citations Partial Dismissal + Judicial Warning

AI Use

Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns.

Hallucination Details

Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume/reporting details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review.

Ruling/Sanction

The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and noted that this was only the second federal case addressing AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice.

Key Judicial Reasoning

The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance.

Source: Volokh
Thomas v. Pangburn S.D. Ga. (USA) 6 October 2023 Pro Se Litigant Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Legal Norm (1)
Dismissal of Case as Sanction for Bad Faith + Judicial Rebuke

AI Use

Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability" without clarifying his sources, suggesting reliance on AI-generated content.

Hallucination Details

  • Ten fake case citations systematically inserted across filings
  • Fabricated authorities mimicked proper citation format but were unverifiable in any recognized database
  • The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy

Ruling/Sanction

The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications.

Key Judicial Reasoning

Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances.

Ruggierlo et al. v. Lancaster E.D. Mich. (USA) 11 September 2023 Pro Se Litigant Unidentified
Fabricated Case Law (3)
No sanction; Formal Judicial Warning

AI Use

Lancaster, filing objections to a magistrate judge’s Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct.

Hallucination Details

Fabricated or mutant citations, including:

  • Bazzi v. Sentinel Ins. Co., 961 F.3d 734 (6th Cir. 2020) — mutant citation blending two unrelated real cases
  • Maldonado v. Ford Motor Co., 720 F.3d 760 (5th Cir. 2013) — nonexistent
  • Malliaras & Poulos, P.C. v. City of Center Line, 788 F.3d 876 (6th Cir. 2015) — nonexistent

The Court highlighted that the majority of the cases cited in Lancaster's objections were fake.

Ruling/Sanction

No immediate sanction imposed due to pro se status and lack of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or any other court to which the case might be remanded.

Key Judicial Reasoning

The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants.

Scott v. Federal National Mortgage Association Maine County (USA) 14 June 2023 Pro Se Litigant Unidentified
Fabricated Case Law (2)
Misrepresented Exhibits or Submissions (1)
Dismissal of Complaint + Sanctions (Attorney's Fees and Costs)
Unknown case Manchester (UK) 29 May 2023 Pro Se Litigant Implied Fabricated citations, misrepresented precedents

It is unclear whether any formal decision was issued on the matter; the incident was reported in the Law Society Gazette in May 2023 (source).

Nash v. Director of Public Prosecutions Supreme Court of Western Australia - Court of Appeal (Australia) 8 May 2023 Pro Se Litigant Implied Fictitious authorities Appeal dismissed

"Mr Nash is unrepresented. He prepared the appellant's case himself, although it appears that he may have had some assistance with later submissions (including, perhaps, from an artificial intelligence program such as Chat GPT). Neither form of submission made coherent submissions as to why the trial judge's decision was affected by material error or otherwise gave rise to a miscarriage of justice. Nor did the material sought to be adduced by Mr Nash as additional evidence on the appeal disclose any miscarriage of justice.

[...]. There is otherwise no jurisdictional basis to transfer criminal proceedings under State law in this Court to the court of another State. The authorities cited by Mr Nash in support of such jurisdiction do not exist; they are fictitious."

The court dismissed the appeal, finding no merit in the grounds presented, and refused to admit additional evidence. No professional sanctions or monetary penalties were imposed as Nash was a pro se litigant.

Source: Jay Iyer