AI Hallucination Cases

This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.

While it seeks to be exhaustive (914 cases identified so far), this database is a work in progress and will expand as new examples emerge. It has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]

[2] Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "'AI Hallucination Cases,' from Courts All Over the World" (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates AI-drafted pleadings, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)

If you know of a case that should be included, feel free to contact me.[3]

[3] Readers may also be interested in this project regarding AI use in academic papers.

Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Report(s) column in the database for examples, and reach out to me for a demo!

For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.

Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s)
AQ v. BT CRT (Canada) 28 March 2025 Pro Se Litigant Implied
Fabricated Case Law (2), Legal Norm (1)
Misrepresented Case Law (1), Legal Norm (1)
Arguments ignored
LYJ v. Occupational Therapy Board of Australia Queensland (Australia) 26 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
No sanction; Fabrication noted; Warning issued regarding AI use

AI Use

The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material.

Ruling/Sanction

The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process.

Key Judicial Reasoning

Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources."

Kruglyak v. Home Depot U.S.A., Inc. W.D. Virginia (USA) 25 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
Misrepresented Case Law (1)
No monetary sanctions; Warning

AI Use

Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources.

Hallucination Details

The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones.

Ruling/Sanction

Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur.

Key Judicial Reasoning

The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action.

Anonymous v. Anonymous Israel (Israel) 24 March 2025 Fabricated citations Application dismissed 4000 ILS
Francois v. Medina Supreme Court, NY (USA) 24 March 2025 Lawyer Unidentified Fabricated citations Warning
Buckner v. Hilton Global W.D. Kentucky (USA) 21 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1), Exhibits or Submissions (1)
Warning

In a subsequent Order, the court pointed out that "This Court's opinion pointing out Buckner's citation to nonexistent case law, along with its implications, is an issue for appeal and not a valid basis for recusal."

Loyer v. Wayne County Michigan E.D. Michigan (USA) 21 March 2025 Lawyer Unidentified
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (1)
Plaintiff's counsel ordered to attend an ethics seminar
Source: Jesse Schaefer
Williams v. Capital One Bank D. DC (USA) 18 March 2025 Pro Se Litigant CoCounsel
Fabricated Case Law (1)
Misrepresented Case Law (1)
Case dismissed with prejudice for failure to state a claim. No monetary sanction imposed, but the court issued a formal warning

AI Use

While AI use was not formally admitted, Plaintiff’s opposition brief referred to the “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission.

Hallucination Details

At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output—but the burden rested with the party filing them.

Ruling/Sanction

The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had full opportunity to amend. Though it did not impose monetary sanctions, it issued a strong warning and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations.

Stevens v. BJC Health System Missouri CA (USA) 18 March 2025 Pro Se Litigant Implied 6 fabricated citations Warning
Alkuda v. McDonald Hopkins Co., L.P.A. N.D. Ohio (USA) 18 March 2025 Pro Se Litigant Implied Fake Citations Warning
Condominium v. Lati Initiation and Construction Ltd Israel (Israel) 17 March 2025 Implied Three fake citations Case dismissed 1000 ILS
LMN v. STC (No. 2) (New Zealand) 17 March 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning
Reddan & An Bord Pleanála v. Trustees of Nenagh Golf Club (Ireland) 13 March 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1), other (1)
Misrepresented Legal Norm (4), other (1)
Application for Judicial Review Denied; Express Judicial Rebuke for Misuse of AI

AI Use

Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.

Hallucination Details

  • Inappropriate invocation of "subornation to perjury," a term foreign to Irish law
  • Constitutional and criminal law citations (Article 40, Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context
  • Assertions framed in hyperbolic, sensationalist terms without factual or legal basis
  • General incoherence of pleadings, consistent with AI-generated pseudo-legal text

Ruling/Sanction

The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process.

A v. B Florence (Italy) 13 March 2025 Lawyer ChatGPT
Fabricated Case Law (1)
False Quotes Case Law (1)
No financial sanction; Formal Judicial Reprimand; Findings of procedural misuse

AI Use

The respondent retailer's defense cited Italian Supreme Court judgments that did not exist, claiming support for their arguments regarding lack of subjective bad faith. During subsequent hearings, it was admitted that these fake citations were generated by ChatGPT during internal research by an assistant, and the lead lawyer had failed to independently verify them.

Hallucination Details

The defense cited fabricated cassation rulings allegedly supporting subjective good faith defenses. No such rulings could be found in official databases, and the court confirmed their nonexistence. The hallucinated decisions related to defenses in counterfeit goods sales cases.

Ruling/Sanction

The court declined to impose a financial sanction under Article 96 of the Italian Code of Civil Procedure, instead issuing a formal reprimand and finding procedural misuse.

Arnaoudoff v. Tivity Health Incorporated D. Arizona (USA) 11 March 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (3)
Misrepresented Case Law (1)
Court ignored fake citations and granted motion to correct the record
Sheets v. Presseller M.D. Florida (USA) 11 March 2025 Pro Se Litigant Implied Allegations by the other party that brief was AI-generated Warning
210S LLC v. Di Wu Hawaii (USA) 11 March 2025 Pro Se Litigant Implied Fictitious citation and misrepresentation Warning
Nguyen v. Wheeler E.D. Arkansas (USA) 3 March 2025 Lawyer Implied
Fabricated Case Law (1)
Monetary sanction 1000 USD

AI Use

Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation.

Hallucination Details

Fictitious citations included:

  • Kraft v. Brown & Williamson Tobacco Corp., 668 F. Supp. 2d 806 (E.D. Ark. 2009)
  • Young v. Johnson & Johnson, 983 F. Supp. 2d 747 (E.D. Ark. 2013)
  • Carpenter v. Auto-West Inc., 553 S.W.3d 480 (Ark. 2018)
  • Miller v. Hall, 360 S.W.2d 704 (Ark. 1962)

None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated.

Outcome / Sanction

The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions.

Ahmad Harsha v. Reuven Bornovski (Israel) 2 March 2025 Lawyer Implied Fabricated citations The defendant was given the opportunity to submit amended summaries in response 4000 ILS
Dog Rights v. Ministry of Agriculture High Court (Israel) 28 February 2025 Lawyer Implied
Fabricated Case Law (1)
False Quotes Case Law (1)
Petition dismissed on threshold grounds for lack of clean hands and inadequate legal foundation. Petitioner ordered to pay costs 7000 ILS

AI Use

The judgment refers repeatedly to use of “AI-based websites” and “artificial intelligence hallucinations,” and quotes prior decisions warning against reliance on AI without verification. Although no specific tool was named, the Court inferred use based on the stylistic pattern and total absence of real citations. Petitioner provided no clarification and ultimately sought to withdraw the petition once challenged.

Hallucination Details

The legal authorities cited in the petition included:

  • Case names and citations that do not exist in Israeli legal databases or official court archives
  • Quotations and doctrinal references attributed to rulings that were entirely fictitious
  • Systematic internal inconsistencies and citation errors typical of AI-generated legal writing

The Court made efforts to locate the decisions independently but failed, and the petitioner never supplied the sources after being ordered to do so.

Ruling/Sanction

The Court dismissed the petition in limine (on threshold grounds), citing “lack of clean hands” and “deficient legal infrastructure.” It imposed a ₪7,000 costs order against the petitioner and referred to the growing body of jurisprudence on AI hallucinations. The Court explicitly warned that future petitions tainted by similar conduct would face harsher responses, including possible professional discipline.

Key Judicial Reasoning

Justice Noam Sohlberg, writing for the panel, observed that citing fictitious legal authorities—whether by AI or not—is as egregious as factual misrepresentation. "there is no justification for distinguishing, factually, between one form of deception and another. Deception that would justify the dismissal of a petition due to lack of clean hands—such deception, whether of this kind or that—is invalid in its essence; both forms demand proper judicial response. Their legal identity is the same."

Bunce v. Visual Technology Innovations, Inc. E.D. Pa. (USA) 27 February 2025 Lawyer ChatGPT
Fabricated Case Law (2)
Misrepresented Case Law (1)
Outdated Advice Overturned Case Law (2)
Monetary Sanction + Mandatory CLE on AI and Legal Ethics 2500 USD

AI Use

Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal), without verifying the cases or researching the AI tool’s reliability.

Hallucination Details

Two fake cases:

  • McNally v. Eyeglass World, LLC, 897 F. Supp. 2d 1067 (D. Nev. 2012) — nonexistent
  • Behm v. Lockheed Martin Corp., 460 F.3d 860 (7th Cir. 2006) — nonexistent

Misused cases:

  • Degen v. United States, cited for irrelevant proposition
  • Dow Chemical Canada Inc. v. HRD Corp., cited despite later vacatur
  • Eavenson, Auchmuty & Greenwald v. Holtzman, cited despite being overruled by Third Circuit precedent

Ruling/Sanction

The Court sanctioned Counsel $2,500 payable to the court and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession.

Key Judicial Reasoning

Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty. Novelty of AI tools is not a defense.

Yu Hon Tong Thomas v Centaline Property Agency High Court (Hong Kong) 26 February 2025 Pro Se Litigant Unidentified
Fabricated Exhibits or Submissions (1)
Misrepresented Case Law (1)
Merz v. Kalama W.D. Washington (USA) 25 February 2025 Pro Se Litigant Unidentified
Misrepresented Legal Norm (2)
Wadsworth v. Walmart (Morgan & Morgan) Wyoming (USA) 24 February 2025 Lawyer Internal tool (ChatGPT)
Fabricated Case Law (8)
$3k Fine + Pro Hac Vice Revoked (Drafter); $1k Fine each (Signers); Remedial actions noted. 5000 USD

AI Use

Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose.

Hallucination Details

Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar".

Ruling/Sanction

After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: $3,000 fine on the drafter and revocation of his pro hac vice admission; $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing.

Key Judicial Reasoning

The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations".

Plonit v. Sharia Court of Appeals High Court (Israel) 23 February 2025 Lawyer Unidentified
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Case Law (2), Legal Norm (1)
Petition Dismissed Outright; Warning re: Costs/Discipline.

AI Use

The petitioner’s counsel used an AI-based platform to draft the legal petition.

Hallucination Details

The petition cited 36 fabricated or misquoted Israeli Supreme Court rulings. Five references were entirely fictional, 14 had mismatched case details, and 24 included invented quotes. Upon judicial inquiry, counsel admitted reliance on an unnamed website recommended by colleagues, without verifying the information's authenticity. The Court concluded that the errors were likely the product of generative AI.

Ruling/Sanction

The High Court of Justice dismissed the petition on the merits, finding no grounds for intervention in the Sharia courts’ decisions. Despite the misconduct, no personal sanctions or fines were imposed on counsel, citing it as the first such incident to reach the High Court and adopting a lenient stance “far beyond the letter of the law.” However, the judgment was explicitly referred to the Court Administrator for system-wide attention.

Key Judicial Reasoning

The Court issued a stern warning about the ethical duties of lawyers using AI tools, underscoring that professional obligations of diligence, verification, and truthfulness remain intact regardless of technological convenience. The Court suggested that in future cases, personal sanctions on attorneys might be appropriate to protect judicial integrity.

Saxena v. Martinez-Hernandez et al. D. Nev. (USA) 18 February 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
False Quotes Case Law (1)
Complaint dismissed with prejudice; no formal AI-related sanction imposed, but dismissal explicitly acknowledged fictitious citations as contributing factor

AI Use

The plaintiff submitted citations that were entirely fabricated. When challenged, Saxena denied AI use and insisted the cases existed, offering no evidence. The court concluded either he fabricated the citations or relied on AI and failed to verify them.

Hallucination Details

  • Spokane v. Douglass turned out to conflate unrelated decisions and misused citations from other cases
  • Hummel v. State could not be found in any Nevada or national database; citation matched an unrelated Oregon case

The court found no plausible explanation for these citations other than AI generation or outright fabrication.

Ruling/Sanction

The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.

Key Judicial Reasoning

Citing Morgan v. Cmty. Against Violence, the court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief".

Unnamed Brazilian litigant (Brazil) 18 February 2025 Lawyer ChatGPT Multiple fabricated case citations and doctrinal references Appeal partially granted (reintegration suspended, rent imposed), but litigant sanctioned for bad-faith litigation; 10% fine on the updated value of the case; copy of filing sent to OAB-SC for disciplinary review

AI Use

The appellant’s counsel admitted to having used ChatGPT, claiming the submission of false case law was the result of “unintentional use.” The fabricated citations were used in an appeal against a reintegration of possession order, in favor of the appellant’s stepmother and father’s heirs.

Hallucination Details

The brief contained numerous non-existent judicial precedents and references to legal doctrine that were either incorrect or entirely fictional. The court described them as “fabricated” and considered them serious enough to potentially mislead the court.

Ruling/Sanction

While the 6th Civil Chamber temporarily suspended the reintegration order, it further imposed a 10% fine on the value of the claim for bad-faith litigation and ordered that a copy of the appeal be forwarded to the Santa Catarina section of the Brazilian Bar Association (OAB/SC) for further investigation.

Key Judicial Reasoning

The court emphasized that the legal profession is a public calling entailing duties and responsibilities. It cautioned that AI must be used “with caution and restraint”. The chamber unanimously supported the sanction.

Geismayr v. The Owners, Strata Plan KAS 1970 Civil Resolution Tribunal (Canada) 14 February 2025 Pro Se Litigant Copilot
Fabricated Case Law (9)
Misrepresented Case Law (1)
Citations ignored
Goodchild v State of Queensland Queensland IRC (Australia) 13 February 2025 Pro Se Litigant "Internet searches"
Fabricated Case Law (5)
Relevant submissions ignored

"The Commission accepts the Applicant's explanation. Given that there appears to be significant doubt over whether the authorities cited by the Applicant represent actual decisions from the Fair Work Commission, I will give the authorities cited by the Applicant no weight in determining whether she has provided an explanation for the delay. This appears to be a salutary lesson for litigants in the dangers of relying on general search engines on the internet or artificial intelligence when preparing legal documents."

Luck v Commonwealth of Australia Federal Court (Australia) 11 February 2025
Fabricated Case Law (2)
The court dismissed the applicant's interlocutory application for disqualification and referral to a Full Court.
QWYN and Commissioner of Taxation Administrative Review Tribunal of Australia (Australia) 5 February 2025 Lawyer Copilot
False Quotes Doctrinal Work (1)
The Tribunal affirmed the decision under review, rejecting the applicant's submissions based on the AI-generated content.

"The Applicant engaged the Copilot [Microsoft’s Artificial Intelligence product] in a range of probing questions pertaining to superannuation and taxation matters, upon which in part, it returned the following responses:

The Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992, which introduced the new regime taxing superannuation benefits, states in paragraph 2.20 that “the Bill will provide a tax rebate of 15 per cent for disability superannuation pensions. This will apply to all disability pensions, irrespective of whether they are paid from a taxed or an untaxed source. The rebate recognises that disability pensions are paid as compensation for the loss of earning capacity and are not merely a form of retirement income.”

  1. I have examined the Explanatory Memorandum to the Taxation Laws Amendment (Superannuation) Bill 1992. I was unable to locate any paragraph in that document in the same or similar terms to the paragraph generated by Copilot. It did not contain a paragraph 2.20.
  2. It has been noted by others that AI bots are prone to hallucinations.[35] That appears to be what has happened here. It is my assessment that submitting unverified material generated by AI, is not consistent with a party’s duty to use their best endeavours to assist the Tribunal to achieve its statutory objectives. To expect the Tribunal to read and consider material which a party does not know is authentic impedes the Tribunal’s attempts to provide a mechanism of review that ensures that applications are resolved as quickly and with as little expense as a proper consideration of the issues permits.
  3. Nothing in the remainder of the applicant’s submissions altered my view that the untaxed element of the benefit should be taxed under Subdivision 301-B."
Valu v. Minister for Immigration and Multicultural Affairs (Australia) 31 January 2025 Lawyer ChatGPT
Fabricated Case Law (17)
False Quotes Exhibits or Submissions (8)
Referral to Legal Services Commissioner

AI Use

Counsel used ChatGPT to generate a summary of cases for a submission, which included fictitious Federal Court decisions and invented quotes from a Tribunal ruling. He inserted this output into the brief without verifying the sources. Counsel later admitted this under affidavit, citing time pressure, health issues, and unfamiliarity with AI's risks. He noted that guidance from the NSW Supreme Court was only published after the filing.

Hallucination Details

The 25 October 2024 submission cited at least 16 completely fabricated decisions (e.g. Murray v Luton [2001] FCA 1245, Bavinton v MIMA [2017] FCA 712) and included supposed excerpts from the AAT’s ruling that did not appear in the actual decision. The Court and Minister’s counsel were unable to verify any of the cited cases or quotes.

Ruling/Sanction

Judge Skaros ordered referral to the OLSC under the Legal Profession Uniform Law (NSW) 2014, noting breaches of rules 19.1 and 22.5 of the Australian Solicitors’ Conduct Rules. The Court accepted Counsel’s apology and health-related mitigation but found that the conduct fell short of professional standards and posed systemic risks given increasing AI use in legal practice.

Key Judicial Reasoning

While acknowledging that Counsel corrected the record and showed contrition, the Court found that the damage—including wasted judicial resources and delay to proceedings—had already occurred. The ex parte email submitting corrected materials, without notifying opposing counsel, further compounded the breach. Given the public interest in safeguarding the integrity of litigation amidst growing AI integration, referral to the OLSC was deemed necessary, even without naming Counsel in the judgment.

Gonzalez v. Texas Taxpayers and Research Association W.D. Texas (USA) 29 January 2025 Lawyer Lexis Nexis's AI
Fabricated Case Law (4)
Misrepresented Case Law (1)
Plaintiff's response was stricken and monetary sanctions were imposed. 3961 USD

In the case of Gonzalez v. Texas Taxpayers and Research Association, the court found that Plaintiff's counsel, John L. Pittman III, included fabricated citations, miscited cases, and misrepresented legal propositions in his response to a motion to dismiss. Pittman initially denied using AI but later admitted to using Lexis Nexis's AI citation generator. The court granted the defendant's motion to strike the plaintiff's response and imposed monetary sanctions on Pittman, requiring him to pay $3,852.50 in attorney's fees and $108.54 in costs to the defendant. The court deemed this an appropriate exercise of its inherent power due to the abundance of technical and substantive errors in the brief, which inhibited the defendant's ability to efficiently respond.

Hanna v Flinders University South Australia (Australia) 29 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Olsen v Finansiel Stabilitet High Court (UK) 25 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Exhibits or Submissions (2), Legal Norm (2)
No contempt finding, but the conduct may bear on costs
Fora Financial Asset Securitization v. Teona Ostrov Public Relations NY SC (USA) 24 January 2025 Lawyer Implied
Fabricated Case Law (1)
False Quotes Case Law (1)
Misrepresented Case Law (1)
No sanction imposed; court struck the offending citations and warned that repeated occurrences may result in sanctions

AI Use

The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions.

Ruling/Sanction

No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow.

Body by Michael Pty Ltd and Industry Innovation and Science Australia Administrative Review Tribunal (Australia) 24 January 2025 Pro Se Litigant ChatGPT
Fabricated Case Law (1)
False Quotes Doctrinal Work (1)
Misrepresented Legal Norm (4)
Fake references withdrawn before the hearing

"Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal.

The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence."

Strike 3 Holdings LLC v. Doe C.D. California (USA) 22 January 2025 Lawyer Unidentified
Fabricated Case Law (3)

Key Judicial Reasoning

Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law.

Araujo v. Wedelstadt et al E.D. Wisconsin (USA) 22 January 2025 Lawyer Unidentified
Fabricated Case Law (1)
Warning

AI Use

Counsel admitted using a “new legal research medium”, which appears to have been a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities.

Hallucination Details

The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place.

Ruling/Sanction

Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated.

Key Judicial Reasoning

The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable.

Candice Dias v Angle Auto Finance Fair Work Commission (Australia) 20 January 2025 Pro Se Litigant Implied
Fabricated Case Law (3)
Misrepresented Case Law (1)
United States v. Hayes E.D. Cal. (USA) 17 January 2025 Federal Defender Unidentified One fake case citation with fabricated quotation Formal Sanction Imposed + Written Reprimand

AI Use

Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.

Hallucination Details

Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.

Ruling/Sanction

The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted.

Key Judicial Reasoning

The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record.

Source: Volokh
Strong v. Rushmore Loan Management Services D. Nebraska (USA) 15 January 2025 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (1)
Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions
Kohls v. Ellison Minnesota (USA) 10 January 2025 Expert GPT-4o Fake Academic Citations Expert Declaration Excluded

AI Use

Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.

Hallucination Details

The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.

Ruling/Sanction

Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.

Key Judicial Reasoning

The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted.

Source: Volokh
O’Brien v. Flick and Chamberlain S.D. Florida (USA) 10 January 2025 Pro Se Litigant Implied
Fabricated Case Law (2)
Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations

AI Use

Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”

Ruling/Sanction

The court dismissed the case with prejudice on dual grounds:

  • The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
  • O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation

While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both a substantive and a disciplinary remedy.

Key Judicial Reasoning

Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.

Mavundla v. MEC High Court (South Africa) 8 January 2025 Lawyer Implied
Fabricated Case Law (9)
Misrepresented Case Law (4), Legal Norm (2)
Leave to appeal dismissed with costs; referral to Legal Practice Council

AI Use

The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspected ChatGPT or a similar tool was the source. The judge even entered the relevant prompts into ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers.

Hallucination Details

Fabricated or misattributed cases included:

  • Pieterse v. The Public Protector (no such case exists at cited location)
  • Burgers v. The Executive Committee..., Dube v. Schleich, City of Cape Town v. Aon SA, Makro Properties v. Raal, Standard Bank v. Lethole — none found in SAFLII or major reporters
  • Citations were often invented or misattributed to irrelevant decisions (e.g., a Competition Tribunal merger approval cited as support for service rules)

The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points.

Ruling/Sanction

  • Application for leave to appeal dismissed in full
  • Legal representatives ordered to pay costs of the 22 and 25 September 2024 appearances de bonis propriis
  • Judgment referred to the Legal Practice Council
  • Judge emphasized that the conduct went beyond the leniency shown in Parker v. Forsyth, as it involved unverified submissions in a signed court filing and then doubling down during oral argument.

Key Judicial Reasoning

Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.

Buckeye Trust v. PCIT (India) 30 December 2024 Judge Implied
Misrepresented Case Law (2), Legal Norm (2)
Outdated Advice Repealed Law (1)
Judgment was retracted and case re-heard

Seemingly, the judge adopted hallucinated authorities invoked by one counsel. The judgment was later reportedly withdrawn.

Al-Hamim v. Star Hearthstone Colorado (USA) 26 December 2024 Pro Se Litigant Unidentified
Fabricated Case Law (8)
No Sanction (due to pro se, contrition, etc.); Warning of future sanctions.

AI Use

Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.

Hallucination Details

The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.

Ruling/Sanction

Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.

Key Judicial Reasoning

Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.

Duarte v. City of Richmond British Columbia Human Rights Tribunal (Canada) 18 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Warning

Nathan Duarte, a pro se litigant, filed a complaint against the City of Richmond alleging discrimination based on political beliefs. During the proceedings, Duarte cited three cases to support his claim that union affiliation is a protected characteristic. However, neither the City nor the Tribunal could locate these cases, leading to the suspicion that they were fabricated, possibly by a generative AI tool. The Tribunal held:

"While it is not necessary for me to determine if Mr. Duarte intended to mislead the Tribunal, I cannot rely on these “authorities” he cites in his submission. At the very least, Mr. Duarte has not followed the Tribunal’s Practice Direction for Legal Authorities, which requires parties, if possible, to provide a neutral citation so other participants can access a copy of the authority without cost. Still, I am compelled to issue a caution to parties who engage the assistance of generative AI technology while preparing submissions to the Tribunal, in case that is what occurred here. AI tools may have benefits. However, such applications have been known to create information, including case law, which is not derived from real or legitimate sources. It is therefore incumbent on those using AI tools to critically assess the information that it produces, including verifying the case citations for accuracy using legitimate sources. Failure to do so can have serious consequences. For lawyers, such errors have led to disciplinary action by the Law Society: see for example, Zhang v Chen, 2024 BCSC 285. Deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, could also form the basis for an award of costs under s. 37(4) of the Code. The integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility."

Letts v. Avidien Technologies E.D. N. Carolina (USA) 16 December 2024 Pro Se Litigant Implied
Fabricated Case Law (1)
Misrepresented Case Law (2)
Warning
Hamdan v. the National Insurance Institute Magistrate Court (Israel) 12 December 2024 Lawyer Unidentified
Fabricated Case Law (4)
Misrepresented Case Law (1)
Petition dismissed; ₪1,000 costs imposed for procedural misconduct and reliance on fictitious case law 1000 ILS

AI Use

Counsel admitted the fictitious citations originated from an “online legal database commonly used by lawyers.” Though the platform is unnamed, the court ruled out the standard legal database Nevo and concluded the “source of the hallucination is unclear.” Counsel apologized and claimed no intent to mislead.

Hallucination Details

The motion cited ten fabricated decisions—each with full party names, court locations, file numbers, and dates—purportedly showing that indirect child support debts owed to the National Insurance Institute could be discharged in bankruptcy. The court could not find a single one in any judicial database and ordered counsel to produce them. When he failed, he admitted they were inauthentic. The only real cited case (Skok) did not support the petitioner’s position.

Ruling/Sanction

The court dismissed the petition after finding that: (i) the cited decisions were fabricated; (ii) the only valid case did not support the argument; and (iii) under Israel’s Bankruptcy Ordinance, child support debts are not dischargeable by default. Despite the state’s failure to respond, the judge ruled sua sponte and imposed ₪1,000 in costs for procedural abuse.

Key Judicial Reasoning

Judge Saharai held that even if the hallucinated cases were cited inadvertently, their submission constituted a grave failure to meet professional obligations. He emphasized that a court cannot function when presented with legal fictions dressed up as precedent. The decision cited the attorney’s duty under section 54 of the Bar Law (1961) and ethics rules 2 and 34.