AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While seeking to be exhaustive (914 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]

[2] Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and has catalogued 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you know of a case that should be included, feel free to contact me.[3]

[3] Readers may also be interested in this project regarding AI use in academic papers.

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Each entry lists: Case · Court / Jurisdiction · Date · Party Using AI · AI Tool · Nature of Hallucination · Outcome / Sanction · Monetary Penalty · Details · Report(s)
Mojtabavi v. Blinken C.D. California (USA) 12 December 2024 Pro Se Litigant Unidentified Multiple fake cases Case dismissed with prejudice
John Coulsto et al. v Elliott High Court (Ireland) 10 December 2024 Pro Se Litigant Implied
Outdated Advice Repealed Law (1)
Court rejected the submission as fallacious

Defendants' written submissions (not argued at trial) advanced that s.19 of the Conveyancing Act 1881 had been repealed by the 2009 Act, undermining the power to appoint a receiver. The court found the argument fallacious, noted s.19 was reinstated by the 2013 Act, and observed the submissions were likely produced by a generative AI or an unqualified adviser.

Crypto Open Patent Alliance v. Wright (1) High Court (UK) 6 December 2024 Pro Se Litigant Unknown
Fabricated Case Law (1), Exhibits or Submissions (1)
False Quotes Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
No formal sanction; fabricated citations disregarded

AI Use

Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language—or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT."

The Court of Appeal later declined permission to appeal, finding that "Dr Wright’s grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations". On this basis, the Court ordered him to pay GBP 100,000 in costs.

Carlos E. Gutierrez v. In Re Noemi D. Gutierrez Fl. 3rd District CA (USA) 4 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature

AI Use

The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”

Hallucination Details

The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.

Ruling/Sanction

Dismissal of both consolidated appeals as a sanction. Bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney. Clerk directed to reject noncompliant future filings.

Key Judicial Reasoning

The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.

Rubio v. District of Columbia DHS D.D.C. (USA) 3 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties

AI Use

Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”

Hallucination Details

The court could not locate the following cited cases:

  • Ford v. District of Columbia, 70 F.3d 231 (D.C. Cir. 1995)
  • Davis v. District of Columbia, 817 A.2d 1234 (D.C. 2003)
  • Ward v. District of Columbia, 818 A.2d 27 (D.C. 2003)
  • Reese v. District of Columbia, 37 A.3d 232 (D.C. 2012)

These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.

Ruling/Sanction

The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.

Key Judicial Reasoning

The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.

Gauthier v. Goodyear Tire & Rubber Co. E.D. Tex. (USA) 25 November 2024 Lawyer Claude
Fabricated Case Law (2)
False Quotes Case Law (7)
Monetary fine + Mandatory AI-related CLE Course + Disclosure to Client 2000 USD

AI Use

Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.

Hallucination Details

Cited two completely nonexistent cases. Also fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, Burton v. Freescale, among others. Several "quotes" did not appear anywhere in the cited opinions.

Ruling/Sanction

The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.

Key Judicial Reasoning

The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct.

Leslie v. IQ Data International N.D. Georgia (USA) 24 November 2024 Pro Se Litigant Implied Citation to nonexistent authorities Background action dismissed with prejudice, but no monetary sanction
Wikeley v Kea Investments Ltd (New Zealand) 21 November 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
Referred to guidance about AI
Monster Energy Company v. Pacific Smoke International Inc. Canadian Intellectual Property Office (Canada) 20 November 2024 Lawyer
Fabricated Case Law (1)
The fabricated citation was disregarded by the court.

In a trademark opposition case between Monster Energy Company and Pacific Smoke International Inc., the Applicant, Pacific Smoke, cited a non-existent case, 'Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7', in support of its argument. This was identified as an AI hallucination by the court. The court disregarded this citation and reminded the Applicant of the seriousness of relying on false citations, whether accidental or AI-generated.

Berry v. Stewart D. Kansas (USA) 14 November 2024 Lawyer Unidentified
Fabricated Case Law (1), Exhibits or Submissions (1)
At hearing, Counsel pledged to reimburse other side and his client

In the November 2024 Show Cause Order, Judge Robinson noted that: "First, the briefing does not cite the forum-selection clause from the contract between the parties; instead, it cites and quotes a forum-selection clause that appears nowhere in the papers submitted by the parties. Second, Defendant’s reply brief includes a citation, Hogan v. Allstate Insurance Co., No. 19-CV-00262-JPM, 2020 WL 1882334 (D. Kan. Apr. 15, 2020), in which the court purportedly “transferred a case to the Southern District of Texas because the majority of the witnesses were located in Texas. The court found that the burden on the witnesses outweighed the convenience of litigating the case in Kansas.” As far as the Court can tell, this case does not exist. The Westlaw database number pulls up no case; the Court has found no case in CM/ECF between the parties “Hogan” and “Allstate Insurance Co.” Moreover, docket numbers in this district have at least four digits—not three—after the case-type designation, and there is no judge in this district with the initials “JPM.”"

During the show cause hearing (Transcript), Counsel apologised and pledged to reimburse the other side's costs, as well as his client's.

Kaur v RMIT SC Victoria (CA) (Australia) 11 November 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Vargas v. Salazar S.D. Texas (USA) 1 November 2024 Pro Se Litigant Implied Fake citations Plaintiff ordered to refile submissions without fake citations
Churchill Funding v. 732 Indiana SC Cal (USA) 31 October 2024 Lawyer Implied
Fabricated Case Law (1)
Misrepresented Case Law (1), Legal Norm (1)
Order to show cause
Source: Volokh
Mortazavi v. Booz Allen Hamilton, Inc. C.D. Cal. (USA) 30 October 2024 Lawyer Unidentified
Fabricated Case Law (1)
False Quotes Exhibits or Submissions (1)
$2,500 Monetary Sanction + Mandatory Disclosure to California State Bar

AI Use

Plaintiff’s counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.

Hallucination Details

Cited a fabricated case (details of the specific case name not listed in the ruling). Included fabricated quotations from the complaint, suggesting nonexistent factual allegations.

Ruling/Sanction

The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations.

Key Judicial Reasoning

Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions.

Thomas v. Commissioner of Internal Revenue United States Tax Court (USA) 23 October 2024 Lawyer, Paralegal Implied
Misrepresented Case Law (3)
Pretrial Memorandum stricken

The lawyer for the petitioner admitted to not reviewing the memorandum, which had been prepared by a paralegal. The court deemed the Pretrial Memorandum stricken but did not impose a monetary penalty, considering the petitioner's economic situation and the lawyer's service to a client who might otherwise be unrepresented. It was also pertinent that the law as stated was accurate (even if the citations were wrong).

Matter of Weber NY County Court (USA) 10 October 2024 Expert MS Copilot Unverifiable AI Calculation Process AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established.

AI Use

In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report.

Hallucination Details

The issue wasn't fabricated citations, but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed using AI tools was generally accepted in his field but offered no proof.

Ruling/Sanction

The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible.

Key Judicial Reasoning

The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable.

Iovino v. Michael Stapleton Associates, Ltd. W.D. Virginia (USA) 10 October 2024 Lawyer Claude, Westlaw, LexisNexis
Fabricated Case Law (2)
False Quotes Case Law (2)
Misrepresented Case Law (1)
No sanction, but hearing transcript sent to bar authorities

Show cause order is here. Counsel responded to Show Cause order in this document. Show cause hearing transcript is here.

Jones v. Simploy Missouri CA (USA) 24 September 2024 Pro Se Litigant Implied Fake citations Warning

The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court.

We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54.

We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity."

Martin v. Hawaii D. Hawaii (USA) 20 September 2024 Pro Se Litigant Unidentified
Fabricated Case Law (2)
False Quotes Case Law (2)
Misrepresented Legal Norm (2)
Warning, and Order to file further submissions with Declaration
Anonymous Spanish Lawyer Tribunal Constitucional (Spain) 9 September 2024 Lawyer Unidentified 19 fabricated Constitutional Court decisions Formal Reprimand (Apercibimiento) + Referral to Barcelona Bar for Disciplinary Action

AI Use

The Court noted that the false citations could stem from AI, disorganized database use, or invention. Counsel claimed a database error but provided no evidence. The Court found the origin irrelevant: verification duty lies with the submitting lawyer.

Hallucination Details

Nineteen separate fabricated citations to fictional Constitutional Court judgments. Fake quotations falsely attributed to those nonexistent decisions. Cited to falsely bolster claims of constitutional relevance in an amparo.

Ruling/Sanction

The Constitutional Court unanimously found that the inclusion of nineteen fabricated citations constituted a breach of the respect owed to the Court and its judges under Article 553.1 of the Spanish Organic Law of the Judiciary. Issued a formal warning (apercibimiento) rather than a fine due to absence of prior offenses. Referred the matter to the Barcelona Bar for possible disciplinary proceedings.

Key Judicial Reasoning

The Court stressed that even absent express insults, fabricating authority gravely disrespects the judiciary’s function. Irrespective of whether AI was used or a database error occurred, the professional duty of diligent verification was breached. The Court noted that fake citations disrupt the court’s work both procedurally and institutionally.

Transamerica Life v. Williams D. Arizona (USA) 6 September 2024 Pro Se Litigant Implied
Fabricated Case Law (4)
Misrepresented Legal Norm (1)
Warning
Rule v. Braiman N.D. New York (USA) 4 September 2024 Pro Se Litigant Implied Fake citations Warning
ATSJ NA 38/2024 TSJ Navarra (Spain) 4 September 2024 Lawyer ChatGPT 3
Fabricated Legal Norm (1)
USA v. Michel D.C. (USA) 30 August 2024 Lawyer EyeLevel
False Quotes Exhibits or Submissions (1)
Misattribution was irrelevant

As acknowledged by Counsel, he also used AI to generate parts of his pleadings.

In re Dayal (Australia) 27 August 2024 Lawyer LEAP
Fabricated Case Law (1)
Referral to the Victorian Legal Services Board and Commissioner for potential disciplinary review; no punitive order issued by the court itself; apology accepted.

Counsel admitted the list of authorities and accompanying summaries were generated by an AI research module embedded in his legal practice software. He stated he did not verify the content before submitting it. The judge found that neither Counsel nor any other legal practitioner at his firm had checked the validity of the generated output.

The court accepted Counsel’s unconditional apology, noted remedial steps, and acknowledged his cooperation and candour. However, it nonetheless referred the matter to the Office of the Victorian Legal Services Board and Commissioner under s 30 of the Legal Profession Uniform Law Application Act 2014 (Vic) for independent assessment. The referral was explicitly framed as non-punitive and in the public interest.

In September 2025, the Board sanctioned Counsel, barring him from acting as a principal lawyer or operating his own practice, and placed him under two years of supervision (see here).

Rasmussen v. Rasmussen California (USA) 23 August 2024 Lawyer Implied
Fabricated Case Law (4)
Misrepresented Case Law (4)
Lawyer ordered to show cause why she should not be referred to the bar

While the Court initially organised show cause proceedings leading to potential sanctions, the case was eventually settled. Nevertheless, the Court stated that it "intends to report Ms. Rasmussen’s use of mis-cited and nonexistent cases in the demurrer to the State Bar", unless she objected to "this tentative ruling".

N.E.W. Credit Union v. Mehlhorn Wisconsin C.A. (USA) 13 August 2024 Pro Se Litigant Implied At least four fictitious cases Warning

The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided.

We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing"

Nitzan v. Adar BaEmakim Properties Ltd. Magistrate Court (Israel) 13 August 2024 Lawyer Implied
Fabricated Case Law (4)
False Quotes Legal Norm (1)
Misrepresented Case Law (5)
Matter referred to the Legal Department of the Court Administration

In response to a motion by the defendant (Adar BaEmakim Properties Ltd.), the plaintiff's counsel submitted a response that included several purported quotations from Israeli Supreme Court decisions to support his arguments.

Judge Daniel Kirs discovered that these citations were problematic: party names did not match case numbers, decision dates were incorrect, and one cited judge was incorrect. Crucially, the quoted text did not appear in the actual decisions, even when counsel was ordered to and did produce copies of the judgments he claimed to have cited.

The judge considered the counsel's conduct to be more severe than simply misattributing a minority opinion; it was the presentation of a series of non-existent Supreme Court rulings. He explicitly noted that Adv. Faris did not claim these were fabrications by an AI tool that he failed to check (unlike the Mata v. Avianca case). Instead, Adv. Faris maintained that he himself had prepared these "summaries" after reading the cases.

Due to the severity of this conduct—presenting fabricated Supreme Court "quotations" and misrepresenting their origin—the judge ordered the matter to be referred to the Legal Department of the Court Administration for consideration of further action.

Separately, the defendant's underlying request (to send clarification questions to a court-appointed expert) was granted. The judge found that the "severe misconduct" of the plaintiff's counsel constituted a "special reason" to allow this, even though the defendant had previously waived the opportunity. The plaintiff was ordered to pay the defendant NIS 600 for legal fees related to this part of the motion.

(Summary by Gemini 2.5)

Source: AI4Law
Industria de Diseño Textil, S.A. v. Sara Ghassai Canadian Intellectual Property Office (Canada) 12 August 2024 Lawyer Implied
Fabricated Case Law (1)
Warning
Mr D Rollo v. Marstons Trading Ltd Employment Tribunal (UK) 1 August 2024 Pro Se Litigant ChatGPT
Misrepresented Legal Norm (1)
Claim dismissed; AI material excluded from evidence under prior judicial order; no sanction but explicit judicial criticism

AI Use

The claimant sought to rely on a conversation with ChatGPT to show that the respondent’s claims about the difficulty of retrieving archived data were false.

Ruling/Sanction

No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence.

Key Judicial Reasoning

The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable".

Dukuray v. Experian Information Solutions, Inc. S.D.N.Y. (USA) 26 July 2024 Pro Se Litigant Unidentified
Fabricated Case Law (3), Legal Norm (2)
No sanction; Formal Warning Issued

AI Use

Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation.

Hallucination Details

Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues.

Ruling/Sanction

The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks.

Key Judicial Reasoning

Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties’ and courts' resources. Plaintiff was formally warned, not excused.

Joe W. Byrd v. Woodland Springs HA Texas CA (USA) 25 July 2024 Pro Se Litigant Unidentified Several garbled or misattributed case citations and vague legal references No formal sanction

AI Use

The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: “We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.”

Ruling/Sanction

The court affirmed the trial court’s judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies.

Anonymous v. NYC Department of Education S.D.N.Y. (USA) 18 July 2024 Pro Se Litigant Unidentified
Fabricated Case Law (1)
No sanction; Formal Warning Issued

AI Use

The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI.

Hallucination Details

Several fake citations identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious.

Ruling/Sanction

No sanctions imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency.

Key Judicial Reasoning

The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. Cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct.

Lakaev v McConkey Supreme Court of Tasmania (Australia) 12 July 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Appeal dismissed for want of prosecution

The appellant's submissions included a misleading reference to a High Court case, De L v Director-General, NSW Department of Community Services, misrepresenting it as relevant to false testimony when that was not the case's subject matter, and a fabricated reference to Hewitt v Omari [2015] NSWCA 175, which does not exist. The appeal was dismissed, considering the lack of progress and potential prejudice to the respondent.

Zeng v. Chell S.D. New York (USA) 9 July 2024 Pro Se Litigant Implied Fabricated citations Warning
X BV in Z v. Tax Inspector The Hague CA (Netherlands) 26 June 2024 Lawyer ChatGPT
Misrepresented Case Law (1), Exhibits or Submissions (1)
Arguments rejected; No formal sanction but severe judicial criticism.

AI Use

The appellant relied on ChatGPT to generate a list of ten "economically comparable" vehicles for purposes of arguing a lower trade-in value to reduce bpm (car registration tax). The Court noted this explicitly and criticized the mechanical reliance on AI outputs without human verification or contextual adjustment.

Hallucination Details

ChatGPT produced a list of luxury and exotic cars supposedly comparable to a Ferrari 812 Superfast. The Court found that mere AI-generated association of vehicles based on "economic context and competition position" is insufficient under EU law principles requiring real-world comparability from the perspective of an average consumer.

Ruling/Sanction

The Court rejected the appellant’s valuation arguments wholesale. It stressed that serious, human-verified reference vehicle comparisons were mandatory and that ChatGPT lists could not establish the legally required comparability standard under Dutch and EU law (Art. 110 TFEU). No monetary sanction imposed, but appellant’s entire case collapsed on evidentiary grounds.

Key Judicial Reasoning

The Court reasoned that a list generated by an AI program like ChatGPT, without rigorous control or verification, is inadmissible for evidentiary purposes. AI outputs lack the nuanced judgment necessary to assess "similar vehicles" under Art. 110 TFEU and Dutch bpm tax rules. It underscored that the test is based on the perceptions of a human average consumer, not algorithmic proximity.

Dowlah v. Professional Staff Congress NY SC (USA) 30 May 2024 Pro Se Litigant Unidentified Several non-existent cases Caution to plaintiff
Robert Lafayette v. Blueprint Basketball et al Vermont SC (USA) 26 April 2024 Pro Se Litigant Implied
Fabricated Case Law (2)
Order to Show Cause
Plumbers & Gasfitters Union v. Morris Plumbing E.D. Wisconsin (USA) 18 April 2024 Lawyer Implied 1 fake citation Warning
Grant v. City of Long Beach 9th Cir. CA (USA) 22 March 2024 Lawyer Unidentified
Fabricated Case Law (2)
Misrepresented Case Law (13)
Striking of Brief + Dismissal of Appeal

AI Use

The appellants’ lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin but cited the systematic errors as consistent with known AI-generated hallucination patterns.

Hallucination Details

Two cited cases were completely nonexistent. Additionally, a dozen cited decisions were badly misrepresented, e.g., Hydrick v. Hunter and Wall v. County of Orange were cited for parent–child removal claims when they had nothing to do with such issues.

Ruling/Sanction

The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28–1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Rule 28(a)(8)(A) requirements for arguments with coherent citation support.

Michael Cohen Matter SDNY (USA) 20 March 2024 Pro Se Litigant Google Bard 3 fake cases No Sanction on Cohen (Lawyer expected to verify); Underlying motion denied

AI Use

Michael Cohen, Donald Trump's former lawyer, by then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.

Hallucination Details

Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.

Ruling/Sanction

Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.

Key Judicial Reasoning

The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification.

Martin v. Taylor County N.D. Texas (USA) 6 March 2024 Pro Se Litigant Implied
False Quotes Case Law (1)
Misrepresented Legal Norm (9)
Warning

In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time."

X BV in Z v. Tax Inspector The Hague CA (Netherlands) 5 March 2024 Lawyer ChatGPT Use of ChatGPT outputs as evidence without clarity about prompts or verification; no fake cases cited, but reliance on unverifiable AI outputs for valuation arguments Arguments discounted; No formal sanction but strong judicial criticism

AI Use

The appellant's authorized representative submitted arguments based on ChatGPT outputs attempting to challenge the tax valuation of real property. The representative failed to specify what exact queries were made to ChatGPT, rendering the outputs unverifiable and untrustworthy.

Hallucination Details

No explicit fabricated case law was cited. Instead, the appellant relied on generalized, unverifiable statements produced by ChatGPT to contest the capitalization factor and COVID-19 valuation discounts applied by the tax authorities.

Ruling/Sanction

The Court refused to attribute any evidentiary value to the ChatGPT-based arguments. It found that without disclosure of the input prompts and verification of AI outputs, the content was legally inadmissible as probative material. However, no sanctions were imposed, likely due to the novelty of the misuse and the lack of bad faith.

Key Judicial Reasoning

The Court emphasized that judicial proceedings demand verifiable, fact-based arguments. AI outputs that lack transparency (particularly about the underlying prompt and methodology) cannot serve as a substitute for evidence. The judgment explicitly notes that reliance on ChatGPT statements without verifiability "does not affect" the Court’s reasoning or the tax authority's burden of proof.

Finch v The Heat Group Family Court (Australia) 27 February 2024 Pro Se Litigant Implied
Fabricated Case Law (2)
Misrepresented Case Law (1)

Applicant (unrepresented) provided a list of 24 authorities claimed to show instances where MinterEllison had been restrained. Court's associate and judge found the list contained fabricated or misdescribed citations; judge characterised the provision of those authorities as an egregious instance of misleading the court but did not impose professional sanctions. Restraint application dismissed on merits.

Zhang v. Chen BC Supreme Court (Canada) 20 February 2024 Lawyer ChatGPT
Fabricated Case Law (2)
Claimant awarded costs

"[29] Citing fake cases in court filings and other materials handed up to the court isan abuse of process and is tantamount to making a false statement to the court.Unchecked, it can lead to a miscarriage of justice."

Kruse v. Karlen Mo. CA (USA) 13 February 2024 Pro Se Litigant Unidentified At least twenty-two fabricated case citations and multiple statutory misstatements. Dismissal of Appeal + Damages Awarded for Frivolous Appeal. 10000 USD

AI Use

Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission.

Hallucination Details

Out of twenty-four total case citations in Karlen’s appellate brief:

  • Only two were genuine (and misused).
  • Twenty-two were completely fictitious.
  • Multiple Missouri statutes and procedural rules were cited incorrectly or completely misrepresented.

Ruling/Sanction

The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.

Key Judicial Reasoning

The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.

Smith v. Farwell Massachusetts (USA) 12 February 2024 Lawyer Unidentified 3 fake cases Monetary Fine (Supervising Lawyer) 2000 USD

AI Use

In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used.

Hallucination Details

Judge Brian A. Davis noticed citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.

Ruling/Sanction

After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.

Key Judicial Reasoning

The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys.

Park v. Kim 2nd Cir. CA (USA) 30 January 2024 Lawyer ChatGPT
Fabricated Case Law (1)
Referral to Grievance Panel + Order to Disclose Misconduct to Client.

AI Use

Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case (Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)) in the reply brief, never verifying its existence.

Hallucination Details

Only one hallucinated case was cited in the reply brief: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014). When asked to produce the case, Counsel admitted it did not exist, blaming reliance on ChatGPT.

Ruling/Sanction

The Court referred Counsel to the Second Circuit’s Grievance Panel for further investigation and possible discipline. Lee was ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.

Key Judicial Reasoning

The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system.

Matter of Samuel NY County Court (USA) 11 January 2024 Lawyer Unidentified
Fabricated Case Law (1)
Misrepresented Case Law (1), Legal Norm (7)
Striking of Filing + Sanctions Hearing Scheduled

AI Use

Osborne’s attorney, under time pressure, submitted reply papers heavily relying on a website or tool that used generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error.

Hallucination Details

Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco.

Ruling/Sanction

The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York’s Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.

Key Judicial Reasoning

The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession’s reputation.

Harber v. HMRC (UK) 4 December 2023 Pro Se Litigant Unidentified 9 Fake Tribunal Decisions No Sanction on Litigant; Warning implied for lawyers.

AI Use

Catherine Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-Tier Tribunal decisions supporting her position regarding "reasonable excuse". She stated the cases were provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI. ChatGPT was mentioned as a likely source.

Hallucination Details

The nine cited FTT decisions (names, dates, summaries provided) were found to be non-existent after checks by the Tribunal and HMRC. While plausible, the fake summaries contained anomalies like American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant.

Ruling/Sanction

The Tribunal factually determined the cited cases were AI-generated hallucinations. It accepted Mrs. Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue. No sanctions were imposed on the litigant.

Key Judicial Reasoning

The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision finding AI-generated fake cases cited by a litigant.