AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or uses of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While it seeks to be exhaustive (979 cases identified so far), the database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]

[2] Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you know of a case that should be included, feel free to contact me.[3]

[3] Readers may also be interested in this project regarding AI use in academic papers.

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports icon in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.


Case Court / Jurisdiction Date ▼ Party Using AI AI Tool Nature of Hallucination Outcome / Sanction Monetary Penalty Details Report(s)
Bevins v. Colgate-Palmolive Co. E.D. Pa. (USA) 10 April 2025 Lawyer Unidentified
Fabricated Case Law (2)
Striking of Counsel’s Appearance + Referral to Bar Authorities + Client Notification Order

AI Use

Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns but Counsel neither admitted nor explained the citations satisfactorily. Failure to comply with a standing AI order aggravated sanctions.

Hallucination Details

Two fake cases cited. Citation numbers and Westlaw references pointed to irrelevant or unrelated cases. No affidavit or real case documents were produced when ordered.

Ruling/Sanction

Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar. Counsel was required to inform his client, Bevins, of the sanctions and the need for new counsel if re-filing.

Bischoff v. South Carolina Department of Education Admin Law Court, S.C. (USA) 10 April 2025 Pro Se Litigant Implied Fake citations Warning

The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10(Supp. 2024)."

Shekartz v. Assuta Ashdod Ltd Israel (Israel) 7 April 2025 Lawyer Implied 25 fake citations Monetary sanction 7000 ILS
Ayinde v. Borough of Haringey High Court (UK) 3 April 2025 Lawyer Unidentified
Fabricated Case Law (5)
Misrepresented Legal Norm (1)
Wasted costs order; Partial disallowance of Claimant’s costs; Order to send transcript to Bar Standards Board and Solicitors Regulation Authority 11000 GBP

AI Use

The judgment states that the only other explanation for the fabricated cases was the use of artificial intelligence.

Hallucination Details

The following five nonexistent cases were cited:

  • R (El Gendi) v Camden [2020] EWHC 2435 (Admin)
  • R (Ibrahim) v Waltham Forest [2019] EWHC 1873
  • R (H) v Ealing [2021] EWHC 939 (Admin)
  • R (KN) v Barnet [2020] EWHC 1066 (Admin)
  • R (Balogun) v Lambeth [2020] EWCA Civ. 1442

Ruling/Sanction

The court imposed wasted costs orders against both barrister and solicitor, reduced the claimant’s recoverable costs, and ordered the judgment to be provided to the BSB and SRA.

Daniel Jaiyong An v. Archblock, Inc. Delaware Chancery (USA) 3 April 2025 Pro Se Litigant Implied
False Quotes Case Law (2)
Misrepresented Case Law (2)
Motion denied with prejudice; no immediate sanction imposed, but petitioner formally warned and subject to future certification and sanctions

AI Use

The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification.

Hallucination Details

Examples included:

  • Terramar Retail Centers, LLC v. Marion #2-Seaport Trust – cited for discovery principles it did not contain
  • Deutsch v. ZST Digital Networks, Inc. – quoted for a sentence not found in the opinion
  • Production Resources Group, LLC v. NCT Group, Inc. – attributed with a quote that appears nowhere in the case or legal databases

The court verified via Westlaw that some of the quoted phrases returned only the petitioner's own motion as a result.

Ruling/Sanction

Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI.

Key Judicial Reasoning

The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity.

Zzaman v. HMRC (UK) 3 April 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Misrepresented Case Law (7), Legal Norm (2)
Warning

Plaintiff had disclosed the use of AI in preparing his statement of case. The court noted:

"29. However, our conclusion was that Mr Zzaman’s statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman’s statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate.

Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal).

There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court’s time and that of opposing parties."

Bangholme Investments Pty Ltd v Greater Dandenong CC Victorian CAT (Australia) 3 April 2025 Pro Se Litigant Unidentified
Misrepresented Legal Norm (1)

Alan Hood relied on an AI search that inferred the Council was required to notify objectors. The Tribunal found that inference 'plainly incorrect', noting Hood had received the requisite notice and should have read the documents; the Tribunal nevertheless exercised discretion to join him.

Dehghani v. Castro New Mexico DC (USA) 2 April 2025 Lawyer Unidentified
Fabricated Case Law (6)
False Quotes Case Law (1)
Monetary sanction; required CLE on legal ethics and AI; mandatory self-reporting to NM and TX state bars; report of subcontractor to NY state bar; required notification to LAWCLERK 1500 USD

AI Use

Counsel hired a freelance attorney through LAWCLERK to prepare a filing. He made minimal edits and admitted not verifying any of the case law before signing. The filing included multiple fabricated cases and misquoted others. The court concluded these were AI hallucinations, likely produced by ChatGPT or similar.

Hallucination Details

Examples of non-existent cases cited include:

Moncada v. Ruiz, Vega-Mendoza v. Homeland Security, Morales v. ICE Field Office Director, Meza v. United States Attorney General, Hernandez v. Sessions, and Ramirez v. DHS.

All were either entirely fictitious or misquoted real decisions.

Ruling/Sanction

The Court sanctioned Counsel by:

  • Ordering a $1,500 fine
  • Requiring a 1-hour CLE on AI/legal ethics
  • Ordering him to self-report to the New Mexico and Texas bars
  • Ordering him to report the freelance lawyer to the New York bar
  • Requiring notification of LAWCLERK
  • Requiring proof of compliance by May 15, 2025

Key Judicial Reasoning

The court emphasized that counsel’s failure to verify cited cases, coupled with blind reliance on subcontracted work, constituted a violation of Rule 11(b)(2). The court analogized to other AI-sanctions cases. While the fine was modest, the court imposed significant procedural obligations to ensure deterrence.

Mazurek et al. v. Thomazoni Parana State (Brazil) 2 April 2025 Lawyer ChatGPT
Fabricated Case Law (3)
Referral to the bar; Monetary fine (1% of case value)
D'Angelo v. Vaught Illinois (USA) 2 April 2025 Lawyer Archie (Smokeball) Fabricated citation Monetary sanction 2000 USD
Boggess v. Chamness E.D. Texas (USA) 1 April 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Argument ignored
Source: Jesse Schaefer
Sanders v. United States Fed. Claims Court (USA) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (4)
Misrepresented Case Law (1), Legal Norm (1)
Warning

AI Use

The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred.

Hallucination Details

Plaintiff cited:

  • Tucker v. United States, 24 Cl. Ct. 536 (1991) – does not exist
  • Fargo v. United States, 184 F.3d 1096 (Fed. Cir. 1999) – fabricated citation pointing to an unrelated Ninth Circuit case
  • Bristol Bay Native Corporation v. United States, 87 Fed. Cl. 122 (2009) – fictional
  • Quantum Construction, Inc. v. United States, 54 Fed. Cl. 432 (2002) – nonexistent
  • Hunt Building Co., LLC v. United States, 61 Fed. Cl. 243 (2004) – real case misused; contains no mention of unjust enrichment

Ruling/Sanction

The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions.

Key Judicial Reasoning

Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law.

McKeown v. Paycom Payroll LLC W.D. Oklahoma (USA) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Submission stricken out, and warning

AI Use

Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent.

Hallucination Details

Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction.

Ruling/Sanction

The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal.

SQBox Solutions Ltd. v. Oak BC Civil Resolution Tribunal (Canada) 31 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
False Quotes Legal Norm (2)
Misrepresented Case Law (4)
Litigant lost on merits

"By relying on inaccurate and false AI submissions, Mr. Oak hurts his own case. I understand that Mr. Oak himself might not be aware that the submissions are misleading, but they are his submissions and he is responsible for them."

Source: Steve Finlay
AQ v. BT CRT (Canada) 28 March 2025 Pro Se Litigant Implied
Fabricated Case Law (2), Legal Norm (1)
Misrepresented Case Law (1), Legal Norm (1)
Arguments ignored
LYJ v. Occupational Therapy Board of Australia Queensland (Australia) 26 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
No sanction; Fabrication noted; Warning issued regarding AI use

AI Use

The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material.

Ruling/Sanction

The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process.

Key Judicial Reasoning

Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources."

Kruglyak v. Home Depot U.S.A., Inc. W.D. Virginia (USA) 25 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
Misrepresented Case Law (1)
No monetary sanctions; Warning

AI Use

Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources.

Hallucination Details

The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones.

Ruling/Sanction

Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur.

Key Judicial Reasoning

The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action.

Anonymous v. Anonymous Israel (Israel) 24 March 2025 Fabricated citations Application dismissed 4000 ILS
Francois v. Medina Supreme Court, NY (USA) 24 March 2025 Lawyer Unidentified Fabricated citations Warning
Buckner v. Hilton Global W.D. Kentucky (USA) 21 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
Warning

In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal."

Loyer v. Wayne County Michigan E.D. Michigan (USA) 21 March 2025 Lawyer Unidentified
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (1)
Plaintiff's counsel ordered to attend an ethics seminar
Source: Jesse Schaefer
Williams v. Capital One Bank D. DC (USA) 18 March 2025 Pro Se Litigant CoCounsel
Fabricated Case Law (1)
Misrepresented Case Law (1)
Case dismissed with prejudice for failure to state a claim. No monetary sanction imposed, but the court issued a formal warning

AI Use

While not formally admitted, Plaintiff’s opposition brief referred to “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission.

Hallucination Details

At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output—but the burden rested with the party filing them.

Ruling/Sanction

The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had full opportunity to amend. Though it did not impose monetary sanctions, it issued a strong warning and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations.

Stevens v. BJC Health System Missouri CA (USA) 18 March 2025 Pro Se Litigant Implied 6 fabricated citations Warning
Alkuda v. McDonald Hopkins Co., L.P.A. N.D. Ohio (USA) 18 March 2025 Pro Se Litigant Implied Fake Citations Warning
Condominium v. Lati Initiation and Construction Ltd Israel (Israel) 17 March 2025 Implied Three fake citations Case dismissed 1000 ILS
LMN v. STC (No. 2) (New Zealand) 17 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning
Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club (Ireland) 13 March 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1), other (1)
Misrepresented Legal Norm (4), other (1)
Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI

AI Use

Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.

Hallucination Details

Inappropriate invocation of "subornation to perjury," a term foreign to Irish law. Constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context. Assertions framed in hyperbolic, sensationalist terms without factual or legal basis. General incoherence of the pleadings, consistent with AI-generated pseudo-legal text.

Ruling/Sanction

The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process.

A v. B Florence (Italy) 13 March 2025 Lawyer ChatGPT
Fabricated Case Law (1)
False Quotes Case Law (1)
No financial sanction; Formal Judicial Reprimand; Findings of procedural misuse

AI Use

The respondent retailer's defense cited Italian Supreme Court judgments that did not exist, claiming support for their arguments regarding lack of subjective bad faith. During subsequent hearings, it was admitted that these fake citations were generated by ChatGPT during internal research by an assistant, and the lead lawyer had failed to independently verify them.

Hallucination Details

The defense cited fabricated cassation rulings allegedly supporting subjective good faith defenses. No such rulings could be found in official databases, and the court confirmed their nonexistence. The hallucinated decisions related to defenses concerning the sale of counterfeit goods.

Ruling/Sanction

The court declined to impose a financial sanction under Article 96 of the Italian Code of Civil Procedure.

Mark Lillard v. Offit Kurman, P.A. SC Delaware (USA) 12 March 2025 Pro Se Litigant Unidentified
False Quotes Case Law (2)
Misrepresented Case Law (2)
AI-use certification required for future filings
Arnaoudoff v. Tivity Health Incorporated D. Arizona (USA) 11 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (3)
Misrepresented Case Law (1)
Court ignored fake citations and granted motion to correct the record
Sheets v. Presseller M.D. Florida (USA) 11 March 2025 Pro Se Litigant Implied Allegations by the other party that brief was AI-generated Warning
210S LLC v. Di Wu Hawaii (USA) 11 March 2025 Pro Se Litigant Implied Fictitious citation and misrepresentation Warning
Nguyen v. Wheeler E.D. Arkansas (USA) 3 March 2025 Lawyer Implied
Fabricated Case Law (1)
Monetary sanction 1000 USD

AI Use

Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation.

Hallucination Details

Fictitious citations included:

  • Kraft v. Brown & Williamson Tobacco Corp., 668 F. Supp. 2d 806 (E.D. Ark. 2009)
  • Young v. Johnson & Johnson, 983 F. Supp. 2d 747 (E.D. Ark. 2013)
  • Carpenter v. Auto-West Inc., 553 S.W.3d 480 (Ark. 2018)
  • Miller v. Hall, 360 S.W.2d 704 (Ark. 1962)

None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated.

Outcome / Sanction

The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions.

Ahmad Harsha v. Reuven Bornovski (Israel) 2 March 2025 Lawyer Implied Fabricated citations The defendant was given the opportunity to submit amended summaries in response 4000 ILS
Dog Rights v. Ministry of Agriculture High Court (Israel) 28 February 2025 Lawyer Implied
Fabricated Case Law (1)
False Quotes Case Law (1)
Petition dismissed on threshold grounds for lack of clean hands and inadequate legal foundation. Petitioner ordered to pay costs 7000 ILS

AI Use

The judgment refers repeatedly to use of “AI-based websites” and “artificial intelligence hallucinations,” and quotes prior decisions warning against reliance on AI without verification. Although no specific tool was named, the Court inferred use based on the stylistic pattern and total absence of real citations. Petitioner provided no clarification and ultimately sought to withdraw the petition once challenged.

Hallucination Details

The legal authorities cited in the petition included:

  • Case names and citations that do not exist in Israeli legal databases or official court archives
  • Quotations and doctrinal references attributed to rulings that were entirely fictitious
  • Systematic internal inconsistencies and citation errors typical of AI-generated legal writing

The Court made efforts to locate the decisions independently but failed, and the petitioner never supplied the sources after being ordered to do so.

Ruling/Sanction

The Court dismissed the petition in limine (on threshold grounds), citing “lack of clean hands” and “deficient legal infrastructure.” It imposed a ₪7,000 costs order against the petitioner and referred to the growing body of jurisprudence on AI hallucinations. The Court explicitly warned that future petitions tainted by similar conduct would face harsher responses, including possible professional discipline.

Key Judicial Reasoning

Justice Noam Sohlberg, writing for the panel, observed that citing fictitious legal authorities, whether AI-generated or not, is as egregious as factual misrepresentation: "there is no justification for distinguishing, factually, between one form of deception and another. Deception that would justify the dismissal of a petition due to lack of clean hands—such deception, whether of this kind or that—is invalid in its essence; both forms demand proper judicial response. Their legal identity is the same."

Bunce v. Visual Technology Innovations, Inc. E.D. Pa. (USA) 27 February 2025 Lawyer ChatGPT
Fabricated Case Law (2)
Misrepresented Case Law (1)
Outdated Advice Overturned Case Law (2)
Monetary Sanction + Mandatory CLE on AI and Legal Ethics 2500 USD

AI Use

Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal), without verifying the cases or researching the AI tool’s reliability.

Hallucination Details

Two fake cases:

  • McNally v. Eyeglass World, LLC, 897 F. Supp. 2d 1067 (D. Nev. 2012) — nonexistent
  • Behm v. Lockheed Martin Corp., 460 F.3d 860 (7th Cir. 2006) — nonexistent

Misused cases:

  • Degen v. United States, cited for irrelevant proposition
  • Dow Chemical Canada Inc. v. HRD Corp., cited despite later vacatur
  • Eavenson, Auchmuty & Greenwald v. Holtzman, cited despite being overruled by Third Circuit precedent

Ruling/Sanction

The Court sanctioned Counsel $2,500 payable to the court and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession.

Key Judicial Reasoning

Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty. Novelty of AI tools is not a defense.

Yu Hon Tong Thomas v Centaline Property Agency High Court (Hong Kong) 26 February 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1)
Misrepresented Case Law (1)
Merz v. Kalama W.D. Washington (USA) 25 February 2025 Pro Se Litigant Unidentified
Misrepresented Legal Norm (2)
Wadsworth v. Walmart (Morgan & Morgan) Wyoming (USA) 24 February 2025 Lawyer Internal tool (ChatGPT)
Fabricated Case Law (8)
$3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. 5000 USD

AI Use

Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose.

Hallucination Details

Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar".

Ruling/Sanction

After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: $3,000 fine on the drafter and revocation of his pro hac vice admission; $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing.

Key Judicial Reasoning

The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations".

Plonit v. Sharia Court of Appeals High Court (Israel) 23 February 2025 Lawyer Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Case Law (2), Legal Norm (1)
Petition Dismissed Outright; Warning re: Costs/Discipline.

AI Use

The petitioner’s counsel used an AI-based platform to draft the legal petition.

Hallucination Details

The petition cited 36 fabricated or misquoted Israeli Supreme Court rulings. Five references were entirely fictional, 14 had mismatched case details, and 24 included invented quotes. Upon judicial inquiry, counsel admitted reliance on an unnamed website recommended by colleagues, without verifying the information's authenticity. The Court concluded that the errors were likely the product of generative AI.

Ruling/Sanction

The High Court of Justice dismissed the petition on the merits, finding no grounds for intervention in the Sharia courts’ decisions. Despite the misconduct, no personal sanctions or fines were imposed on counsel, citing it as the first such incident to reach the High Court and adopting a lenient stance “far beyond the letter of the law.” However, the judgment was explicitly referred to the Court Administrator for system-wide attention.

Key Judicial Reasoning

The Court issued a stern warning about the ethical duties of lawyers using AI tools, underscoring that professional obligations of diligence, verification, and truthfulness remain intact regardless of technological convenience. The Court suggested that in future cases, personal sanctions on attorneys might be appropriate to protect judicial integrity.

Saxena v. Martinez-Hernandez et al. D. Nev. (USA) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
False Quotes Case Law (1)
Complaint dismissed with prejudice; no formal AI-related sanction imposed, but the dismissal explicitly acknowledged the fictitious citations as a contributing factor

AI Use

The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them.

Hallucination Details

  • Spokane v. Douglass conflated unrelated decisions and misused citations drawn from other cases
  • Hummel v. State could not be found in any Nevada or national database; citation matched an unrelated Oregon case

The court found no plausible explanation for these citations other than AI generation or outright fabrication.

Ruling/Sanction

The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.

Key Judicial Reasoning

Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief".

Unnamed Brazilian litigant (Brazil) 18 February 2025 Lawyer ChatGPT
Multiple fabricated case citations and doctrinal references
Appeal partially granted (reintegration suspended, rent imposed), but litigant sanctioned for bad-faith litigation; 10% fine on the updated value of the case; copy of filing sent to OAB-SC for disciplinary review

AI Use

The appellant’s counsel admitted to having used ChatGPT, claiming that the submission of false case law was the result of “unintentional use.” The fabricated citations appeared in an appeal against a reintegration of possession order issued in favor of the appellant’s stepmother and the father’s heirs.

Hallucination Details

The brief contained numerous non-existent judicial precedents and references to legal doctrine that were either incorrect or entirely fictional. The court described them as “fabricated” and considered them serious enough to potentially mislead the court.

Ruling/Sanction

While the 6th Civil Chamber temporarily suspended the reintegration order, it also imposed a 10% fine on the value of the claim for bad-faith litigation and ordered that a copy of the appeal be forwarded to the Santa Catarina section of the Brazilian Bar Association (OAB/SC) for further investigation.

Key Judicial Reasoning

The court emphasized that the legal profession is a public calling entailing duties and responsibilities. It cautioned that AI must be used “with caution and restraint”. The chamber unanimously supported the sanction.

Geismayr v. The Owners, Strata Plan KAS 1970 Civil Resolution Tribunal (Canada) 14 February 2025 Pro Se Litigant Copilot
Fabricated Case Law (9)
Misrepresented Case Law (1)
Citations ignored
Goodchild v State of Queensland Queensland IRC (Australia) 13 February 2025 Pro Se Litigant "Internet searches"
Fabricated Case Law (5)
Relevant submissions ignored

"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents."

Luck v Commonwealth of Australia Federal Court (Australia) 11 February 2025
Fabricated Case Law (2)
The court dismissed the applicant's interlocutory application for disqualification and referral to a Full Court.
QWYN and Commissioner of Taxation Administrative Review Tribunal of Australia (Australia) 5 February 2025 Lawyer Copilot
False Quotes Doctrinal Work (1)
The Tribunal affirmed the decision under review, rejecting the applicant's submissions based on the AI-generated content.

"The Applicant engaged the Copilot [Microsoft’s Artificial Intelligence product] in a range of probing questions pertaining to superannuation and taxation matters, upon which in part, it returned the following responses:

The Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992, which introduced the new regime taxing superannuation benefits, states in paragraph 2.20 that “the Bill will provide a tax rebate of 15 per cent for disability superannuation pensions. This will apply to all disability pensions, irrespective of whether they are paid from a taxed or an untaxed source. The rebate recognises that disability pensions are paid as compensation for the loss of earning capacity and are not merely a form of retirement income.”

  1. I have examined the Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992. I was unable to locate any paragraph in that document in the same or similar terms to the paragraph generated by Copilot. It did not contain a paragraph 2.20.
  2. It has been noted by others that AI bots are prone to hallucinations.[35] That appears to be what has happened here. It is my assessment that submitting unverified material generated by AI, is not consistent with a party’s duty to use their best endeavours to assist the Tribunal to achieve its statutory objectives. To expect the Tribunal to read and consider material which a party does not know is authentic impedes the Tribunal’s attempts to provide a mechanism of review that ensures that applications are resolved as quickly and with as little expense as a proper consideration of the issues permits.
  3. Nothing in the remainder of the applicant’s submissions altered my view that the untaxed element of the benefit should be taxed under Subdivision 301-B."
Valu v. Minister for Immigration and Multicultural Affairs (Australia) 31 January 2025 Lawyer ChatGPT
Fabricated Case Law (17)
False Quotes Exhibits or Submissions (8)
Referral to Legal Services Commissioner

AI Use

Counsel used ChatGPT to generate a summary of cases for a submission, which included fictitious Federal Court decisions and invented quotes from a Tribunal ruling. He inserted this output into the brief without verifying the sources. Counsel later admitted this under affidavit, citing time pressure, health issues, and unfamiliarity with AI's risks. He noted that guidance from the NSW Supreme Court was only published after the filing.

Hallucination Details

The 25 October 2024 submission cited at least 16 completely fabricated decisions (e.g. Murray v Luton [2001] FCA 1245, Bavinton v MIMA [2017] FCA 712) and included supposed excerpts from the AAT’s ruling that did not appear in the actual decision. The Court and Minister’s counsel were unable to verify any of the cited cases or quotes.

Ruling/Sanction

Judge Skaros ordered referral to the OLSC under the Legal Profession Uniform Law (NSW) 2014, noting breaches of rules 19.1 and 22.5 of the Australian Solicitors’ Conduct Rules. The Court accepted Counsel’s apology and health-related mitigation but found that the conduct fell short of professional standards and posed systemic risks given increasing AI use in legal practice.

Key Judicial Reasoning

While acknowledging that Counsel corrected the record and showed contrition, the Court found that the damage—including wasted judicial resources and delay to proceedings—had already occurred. The ex parte email submitting corrected materials, without notifying opposing counsel, further compounded the breach. Given the public interest in safeguarding the integrity of litigation amidst growing AI integration, referral to the OLSC was deemed necessary, even without naming Counsel in the judgment.

Gonzalez v. Texas Taxpayers and Research Association W.D. Texas (USA) 29 January 2025 Lawyer LexisNexis's AI
Fabricated Case Law (4)
Misrepresented Case Law (1)
Plaintiff's response stricken; monetary sanctions imposed (USD 3,961)

In Gonzalez v. Texas Taxpayers and Research Association, the court found that Plaintiff's counsel, John L. Pittman III, included fabricated citations, miscited cases, and misrepresented legal propositions in his response to a motion to dismiss. Pittman initially denied using AI but later admitted to using LexisNexis's AI citation generator. The court granted the defendant's motion to strike the plaintiff's response and imposed monetary sanctions on Pittman, requiring him to pay $3,852.50 in attorney's fees and $108.54 in costs to the defendant. The court deemed this an appropriate exercise of its inherent power due to the abundance of technical and substantive errors in the brief, which hampered the defendant's ability to respond efficiently.

Hanna v Flinders University South Australia (Australia) 29 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Olsen v Finansiel Stabilitet High Court (UK) 25 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (2), Legal Norm (2)
No contempt finding, but the conduct may bear on costs