This database tracks legal decisions[1] in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.

[1] I.e., all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that a party relied on hallucinated content or material. As an exception, the database also covers some judicial decisions where AI use was alleged but not confirmed; this is a judgment call on my part.
While seeking to be exhaustive (979 cases identified so far), it is a work in progress and will expand as new examples emerge. This database has been featured in news media, and indeed in several decisions dealing with hallucinated material.[2]
Examples of media coverage include:
- M. Hiltzik, "AI 'hallucinations' are a growing problem for the legal profession" (LA Times, 22 May 2025)
- E. Volokh, "AI Hallucination Cases," from Courts All Over the World (Volokh Conspiracy, 18 May 2025)
- J.-M. Manach, "Il génère des plaidoiries par IA, et en recense 160 ayant « halluciné » depuis 2023" ["He generates pleadings with AI, and catalogues 160 that have 'hallucinated' since 2023"] (Next, 1 July 2025)
- J. Koebler & J. Roscoe, "18 Lawyers Caught Using AI Explain Why They Did It" (404 Media, 30 September 2025)
If you know of a case that should be included, feel free to contact me.[3] (Readers may also be interested in this project regarding AI use in academic papers.)
Based on this database, I have developed an automated reference checker that also detects hallucinations: PelAIkan. Check the Reports in the database for examples, and reach out to me for a demo!
For weekly takes on cases like these, and what they mean for legal practice, subscribe to Artificial Authority.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Hallucination | Outcome / Sanction | Monetary Penalty | Details | Report(s) |
|---|---|---|---|---|---|---|---|---|---|
| Fora Financial Asset Securitization v. Teona Ostrov Public Relations | NY SC (USA) | 24 January 2025 | Lawyer | Implied | Fabricated Case Law (1); False Quotes Case Law (1); Misrepresented Case Law (1) | No sanction imposed; court struck the offending citations and warned that repeated occurrences may result in sanctions | — | — | |

AI Use: The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions.
Ruling/Sanction: No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow.
| Body by Michael Pty Ltd and Industry Innovation and Science Australia | Administrative Review Tribunal (Australia) | 24 January 2025 | Pro Se Litigant | ChatGPT | Fabricated Case Law (1); False Quotes Doctrinal Work (1); Misrepresented Legal Norm (4) | Fake references withdrawn before the hearing | — | — | |

The Tribunal held: "Nevertheless, due to that withdrawal being requested prior to the hearing, I have not considered those paragraphs, these reasons for decision do not take account of those paragraphs and I merely make some general comments below applicable to all parties that appear before the Tribunal. The use of Chat GPT is problematic for the Tribunal. It perhaps goes without saying that it is not acceptable for a party to attempt to mislead the Tribunal by citing case law that is non-existent or citing legal conclusions that do not follow, whether that attempt is deliberate or otherwise. All parties should be aware that the Tribunal checks and considers all cases and conclusions referred to in both parties’ submissions in any event. This matter would have inevitably been discovered, and adverse inferences may have been drawn. To ensure no such adverse inferences are drawn, parties are encouraged to use publicly available databases to search for case law and not to seek to rely on artificial intelligence."
| Strike 3 Holdings LLC v. Doe | C.D. California (USA) | 22 January 2025 | Lawyer | Unidentified | Fabricated Case Law (3) | — | — | | |

Key Judicial Reasoning: Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law.
| Arajuo v. Wedelstadt et al | E.D. Wisconsin (USA) | 22 January 2025 | Lawyer | Unidentified | Fabricated Case Law (1) | Warning | — | — | |

AI Use: Counsel admitted using a “new legal research medium”, which appears to be a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI, but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities.
Hallucination Details: The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place.
Ruling/Sanction: Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated.
Key Judicial Reasoning: The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable.
| Candice Dias v Angle Auto Finance | Fair Work Commission (Australia) | 20 January 2025 | Pro Se Litigant | Implied | Fabricated Case Law (3); Misrepresented Case Law (1) | — | — | | |
| United States v. Hayes | E.D. Cal. (USA) | 17 January 2025 | Federal Defender | Unidentified | One fake case citation with fabricated quotation | Formal Sanction Imposed + Written Reprimand | — | — | |

AI Use: Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.
Hallucination Details: Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. Upon confrontation, Francisco incorrectly tried to shift the source to United States v. Broussard, but that case also did not contain the quoted text. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.
Ruling/Sanction: The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted.
Key Judicial Reasoning: The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties’ resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record.
Source: Volokh
| Strong v. Rushmore Loan Management Services | D. Nebraska (USA) | 15 January 2025 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | Motion to dismiss granted; no sanctions imposed, but court warned that repetition could result in sanctions or filing restrictions | — | — | |
| Kohls v. Ellison | Minnesota (USA) | 10 January 2025 | Expert | GPT-4o | Fake Academic Citations | Expert Declaration Excluded | — | — | |

AI Use: Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.
Hallucination Details: The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.
Ruling/Sanction: Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.
Key Judicial Reasoning: The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted.
Source: Volokh
| O’Brien v. Flick and Chamberlain | S.D. Florida (USA) | 10 January 2025 | Pro Se Litigant | Implied | Fabricated Case Law (2) | Case dismissed with prejudice, inter alia for use of fake citations and misrepresentations | — | — | |

AI Use: Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”
Ruling/Sanction: The court dismissed the case with prejudice on dual grounds.
Key Judicial Reasoning: Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.
| Mavundla v. MEC | High Court (South Africa) | 8 January 2025 | Lawyer | Implied | Fabricated Case Law (9); Misrepresented Case Law (4), Legal Norm (2) | Leave to appeal dismissed with costs; referral to Legal Practice Council | — | — | |

AI Use: The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspects ChatGPT or a similar tool was the source. The judge even ran prompts into ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers.
Hallucination Details: The filings included fabricated or misattributed cases. The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points.
Ruling/Sanction: Leave to appeal was dismissed with costs, and the matter was referred to the Legal Practice Council.
Key Judicial Reasoning: Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.
| Buckeye Trust v. PCIT | (India) | 30 December 2024 | Judge | Implied | Misrepresented Case Law (2), Legal Norm (2); Outdated Advice Repealed Law (1) | Judgment was retracted and case re-heard | — | — | |

Seemingly, the judge cited back hallucinated authorities invoked by one counsel. The judgment was later reportedly withdrawn.
| Al-Hamim v. Star Hearthstone | Colorado (USA) | 26 December 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (8) | No sanction (due to pro se status, contrition, etc.); warning of future sanctions | — | — | |

AI Use: Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.
Hallucination Details: The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.
Ruling/Sanction: Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.
Key Judicial Reasoning: Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.
| Duarte v. City of Richmond | British Columbia Human Rights Tribunal (Canada) | 18 December 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1) | Warning | — | — | |

Nathan Duarte, a pro se litigant, filed a complaint against the City of Richmond alleging discrimination based on political beliefs. During the proceedings, Duarte cited three cases to support his claim that union affiliation is a protected characteristic. However, neither the City nor the Tribunal could locate these cases, leading to the suspicion that they were fabricated, possibly by a generative AI tool. The Tribunal held: "While it is not necessary for me to determine if Mr. Duarte intended to mislead the Tribunal, I cannot rely on these “authorities” he cites in his submission. At the very least, Mr. Duarte has not followed the Tribunal’s Practice Direction for Legal Authorities, which requires parties, if possible, to provide a neutral citation so other participants can access a copy of the authority without cost. Still, I am compelled to issue a caution to parties who engage the assistance of generative AI technology while preparing submissions to the Tribunal, in case that is what occurred here. AI tools may have benefits. However, such applications have been known to create information, including case law, which is not derived from real or legitimate sources. It is therefore incumbent on those using AI tools to critically assess the information that it produces, including verifying the case citations for accuracy using legitimate sources. Failure to do so can have serious consequences. For lawyers, such errors have led to disciplinary action by the Law Society: see for example, Zhang v Chen, 2024 BCSC 285. Deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, could also form the basis for an award of costs under s. 37(4) of the Code. The integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility."
| Letts v. Avidien Technologies | E.D. N. Carolina (USA) | 16 December 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (2) | Warning | — | — | |
| Hamdan v. the National Insurance Institute | Magistrate Court (Israel) | 12 December 2024 | Lawyer | Unidentified | Fabricated Case Law (4); Misrepresented Case Law (1) | Petition dismissed; ₪1,000 costs imposed for procedural misconduct and reliance on fictitious case law | 1000 ILS | — | |

AI Use: Counsel admitted the fictitious citations originated from an “online legal database commonly used by lawyers.” Though the platform is unnamed, the court ruled out the standard legal database Nevo and concluded the “source of the hallucination is unclear.” Counsel apologized and claimed no intent to mislead.
Hallucination Details: The motion cited ten fabricated decisions—each with full party names, court locations, file numbers, and dates—purportedly showing that indirect child support debts owed to the National Insurance Institute could be discharged in bankruptcy. The court could not find a single one in any judicial database and ordered counsel to produce them. When he failed, he admitted they were inauthentic. The only real cited case (Skok) did not support the petitioner’s position.
Ruling/Sanction: The court dismissed the petition after finding that: (i) the cited decisions were fabricated; (ii) the only valid case did not support the argument; and (iii) under Israel’s Bankruptcy Ordinance, child support debts are not dischargeable by default. Despite the state’s failure to respond, the judge ruled sua sponte and imposed ₪1,000 in costs for procedural abuse.
Key Judicial Reasoning: Judge Saharai held that even if the hallucinated cases were cited inadvertently, their submission constituted a grave failure to meet professional obligations. He emphasized that a court cannot function when presented with legal fictions dressed up as precedent. The decision cited the attorney’s duty under section 54 of the Bar Law (1961) and ethics rules 2 and 34.
| Mojtabavi v. Blinken | C.D. California (USA) | 12 December 2024 | Pro Se Litigant | Unidentified | Multiple fake cases | Case dismissed with prejudice | — | — | |
| John Coulsto et al. v Elliott | High Court (Ireland) | 10 December 2024 | Pro Se Litigant | Implied | Outdated Advice Repealed Law (1) | Court rejected the submission as fallacious | — | — | |

Defendants' written submissions (not argued at trial) advanced that s.19 of the Conveyancing Act 1881 had been repealed by the 2009 Act, undermining the power to appoint a receiver. The court found the argument fallacious, noted s.19 was reinstated by the 2013 Act, and observed the submissions were likely produced by a generative AI or an unqualified adviser.
| Crypto Open Patent Alliance v. Wright (1) | High Court (UK) | 6 December 2024 | Pro Se Litigant | Unknown | Fabricated Case Law (1), Exhibits or Submissions (1); False Quotes Case Law (1); Misrepresented Case Law (1), Exhibits or Submissions (1) | No formal sanction; fabricated citations disregarded | — | — | |

AI Use: Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language, or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT." Later on, the Court of Appeal declined permission to appeal (finding that "Dr Wright’s grounds of appeal, skeleton argument and summary of skeleton argument themselves contain multiple falsehoods, including reliance upon fictitious authorities such as “Anderson v the Queen [2013] UKPC 2” which appear to be AI-generated hallucinations"). This led the Court to order him to pay costs of 100,000 GBP.
| Carlos E. Gutierrez v. In Re Noemi D. Gutierrez | Fl. 3rd District CA (USA) | 4 December 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1); False Quotes Case Law (1) | Appeals dismissed as sanction; Appellant barred from future pro se filings in related probate matters without attorney signature | — | — | |

AI Use: The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”
Hallucination Details: The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.
Ruling/Sanction: Dismissal of both consolidated appeals as a sanction; a bar on further pro se filings in the underlying probate actions without review and signature of a Florida-barred attorney; and a direction to the Clerk to reject noncompliant future filings.
Key Judicial Reasoning: The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.
| Rubio v. District of Columbia DHS | D.C. DC (USA) | 3 December 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (4); Misrepresented Case Law (1) | Complaint dismissed with prejudice; no Rule 11 sanctions imposed, but clear judicial warning on AI misuse and citation verification duties | — | — | |

AI Use: Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”
Hallucination Details: The court could not locate several of the cited cases, which were used to allege a pattern of constitutional violations by the District and were found to be fabricated.
Ruling/Sanction: The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.
Key Judicial Reasoning: The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.
| Gauthier v. Goodyear Tire & Rubber Co. | E.D. Tex. (USA) | 25 November 2024 | Lawyer | Claude | Fabricated Case Law (2); False Quotes Case Law (7) | Monetary fine + mandatory AI-related CLE course + disclosure to client | 2000 USD | — | |

AI Use: Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.
Hallucination Details: Cited two completely nonexistent cases. Also fabricated quotations attributed to real cases, including Morales v. SimuFlite, White v. FCI USA, and Burton v. Freescale, among others. Several "quotes" did not appear anywhere in the cited opinions.
Ruling/Sanction: The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.
Key Judicial Reasoning: The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct.
| Leslie v. IQ Data International | N.D. Georgia (USA) | 24 November 2024 | Pro Se Litigant | Implied | Citation to nonexistent authorities | Background action dismissed with prejudice, but no monetary sanction | — | — | |
| Wikeley v Kea Investments Ltd | (New Zealand) | 21 November 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | Referred to guidance about AI | — | — | |
| Monster Energy Company v. Pacific Smoke International Inc. | Canadian Intellectual Property Office (Canada) | 20 November 2024 | Lawyer | — | Fabricated Case Law (1) | The fabricated citation was disregarded by the court | — | — | |

In a trademark opposition case between Monster Energy Company and Pacific Smoke International Inc., the Applicant, Pacific Smoke, cited a non-existent case, 'Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7', in support of its argument. This was identified as an AI hallucination by the court. The court disregarded this citation and reminded the Applicant of the seriousness of relying on false citations, whether accidental or AI-generated.
| Berry v. Stewart | D. Kansas (USA) | 14 November 2024 | Lawyer | Unidentified | Fabricated Case Law (1), Exhibits or Submissions (1) | At hearing, Counsel pledged to reimburse other side and his client | — | — | |

In the November 2024 Show Cause Order, Judge Robinson noted: "First, the briefing does not cite the forum-selection clause from the contract between the parties; instead, it cites and quotes a forum-selection clause that appears nowhere in the papers submitted by the parties. Second, Defendant’s reply brief includes a citation, Hogan v. Allstate Insurance Co., No. 19-CV-00262-JPM, 2020 WL 1882334 (D. Kan. Apr. 15, 2020), in which the court purportedly “transferred a case to the Southern District of Texas because the majority of the witnesses were located in Texas. The court found that the burden on the witnesses outweighed the convenience of litigating the case in Kansas.” As far as the Court can tell, this case does not exist. The Westlaw database number pulls up no case; the Court has found no case in CM/ECF between the parties “Hogan” and “Allstate Insurance Co.” Moreover, docket numbers in this district have at least four digits—not three—after the case-type designation, and there is no judge in this district with the initials “JPM.”" During the show cause hearing (Transcript), Counsel apologised and pledged to reimburse the other side's costs, as well as his client's.
| Kaur v RMIT | SC Victoria (CA) (Australia) | 11 November 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1) | — | — | | |
| Vargas v. Salazar | S.D. Texas (USA) | 1 November 2024 | Pro Se Litigant | Implied | Fake citations | Plaintiff ordered to refile submissions without fake citations | — | — | |
| Churchill Funding v. 732 Indiana | SC Cal (USA) | 31 October 2024 | Lawyer | Implied | Fabricated Case Law (1); Misrepresented Case Law (1), Legal Norm (1) | Order to show cause | — | — | |

Source: Volokh
| Mortazavi v. Booz Allen Hamilton, Inc. | C.D. Cal. (USA) | 30 October 2024 | Lawyer | Unidentified | Fabricated Case Law (1); False Quotes Exhibits or Submissions (1) | $2,500 Monetary Sanction + Mandatory Disclosure to California State Bar | 2500 USD | — | |

AI Use: Plaintiff’s counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.
Hallucination Details: Cited a fabricated case (the specific case name is not listed in the ruling) and included fabricated quotations from the complaint, suggesting nonexistent factual allegations.
Ruling/Sanction: The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations.
Key Judicial Reasoning: Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions.
| Thomas v. Commissioner of Internal Revenue | United States Tax Court (USA) | 23 October 2024 | Lawyer, Paralegal | Implied | Misrepresented Case Law (3) | Pretrial Memorandum stricken | — | — | |

The lawyer for the petitioner admitted to not reviewing the memorandum, which was prepared by a paralegal. The court deemed the Pretrial Memorandum stricken but did not impose a monetary penalty, considering the economic situation of the petitioner and the lawyer's service to a client who might otherwise be unrepresented. It was also pertinent that the law being stated was accurate (even if the citations were wrong).
| Matter of Weber | NY County Court (USA) | 10 October 2024 | Expert | MS Copilot | Unverifiable AI Calculation Process | AI-assisted Evidence Inadmissible; Affirmative Duty to Disclose AI Use for Evidence Established. | — | — | |
AI Use: In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report.
Hallucination Details: The issue wasn't fabricated citations, but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed using AI tools was generally accepted in his field but offered no proof.
Ruling/Sanction: The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible.
Key Judicial Reasoning: The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable.
| Iovino v. Michael Stapleton Associates, Ltd. | W.D. Virginia (USA) | 10 October 2024 | Lawyer | Claude, Westlaw, LexisNexis | Fabricated Case Law (2); False Quotes Case Law (2); Misrepresented Case Law (1) | No sanction, but hearing transcript sent to bar authorities | — | — | |
|
Show cause order is here. Counsel responded to Show Cause order in this document. Show cause hearing transcript is here. |
|||||||||
| Jones v. Simploy | Missouri CA (USA) | 24 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
|
The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court. We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54. We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity." |
|||||||||
| Martin v. Hawaii | D. Hawaii (USA) | 20 September 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (2); False Quotes Case Law (2); Misrepresented Legal Norm (2) | Warning, and Order to file further submissions with Declaration | — | — | |
| Anonymous Spanish Lawyer | Tribunal Constitucional (Spain) | 9 September 2024 | Lawyer | Unidentified | 19 fabricated Constitutional Court decisions | Formal Reprimand (Apercibimiento) + Referral to Barcelona Bar for Disciplinary Action | — | — | |
AI Use: The Court noted that the false citations could stem from AI, disorganized database use, or outright invention. Counsel claimed a database error but provided no evidence. The Court found the origin irrelevant: the duty of verification lies with the submitting lawyer. Hallucination Details: Nineteen separate fabricated citations to fictional Constitutional Court judgments, with fake quotations falsely attributed to those nonexistent decisions, cited to falsely bolster claims of constitutional relevance in an amparo appeal. Ruling/Sanction: The Constitutional Court unanimously found that the inclusion of nineteen fabricated citations constituted a breach of the respect owed to the Court and its judges under Article 553.1 of the Spanish Organic Law of the Judiciary. It issued a formal warning (apercibimiento) rather than a fine, given the absence of prior offenses, and referred the matter to the Barcelona Bar for possible disciplinary proceedings. Key Judicial Reasoning: The Court stressed that even absent express insults, fabricating authority gravely disrespects the judiciary's function. Irrespective of whether AI was used or a database error occurred, the professional duty of diligent verification was breached. The Court noted that fake citations disrupt the court's work both procedurally and institutionally. |
|||||||||
| Transamerica Life v. Williams | D. Arizona (USA) | 6 September 2024 | Pro Se Litigant | Implied | Fabricated Case Law (4); Misrepresented Legal Norm (1) | Warning | — | — | |
| Rule v. Braiman | N.D. New York (USA) | 4 September 2024 | Pro Se Litigant | Implied | Fake citations | Warning | — | — | |
| ATSJ NA 38/2024 | TSJ Navarra (Spain) | 4 September 2024 | Lawyer | ChatGPT 3 | Fabricated Legal Norm (1) | — | — | — | |
| USA v. Michel | D.C. (USA) | 30 August 2024 | Lawyer | EyeLevel | False Quotes Exhibits or Submissions (1) | Misattribution was irrelevant | — | — | |
|
As acknowledged by Counsel, he also used AI to generate parts of his pleadings. |
|||||||||
| In re Dayal | (Australia) | 27 August 2024 | Lawyer | LEAP | Fabricated Case Law (1) | Referral to the Victorian Legal Services Board and Commissioner for potential disciplinary review; no punitive order issued by the court itself; apology accepted. | — | — | |
|
Counsel admitted that the list of authorities and accompanying summaries had been generated by an AI research module embedded in his legal practice software. He stated that he did not verify the content before submitting it. The judge found that neither Counsel nor any other legal practitioner at his firm had checked the validity of the generated output. The court accepted Counsel's unconditional apology, noted remedial steps, and acknowledged his cooperation and candour. It nonetheless referred the matter to the Office of the Victorian Legal Services Board and Commissioner under s 30 of the Legal Profession Uniform Law Application Act 2014 (Vic) for independent assessment. The referral was explicitly framed as non-punitive and in the public interest. In September 2025, the Board sanctioned Counsel, barring him from acting as a principal lawyer or operating his own practice and placing him under two years of supervision (see here). |
|||||||||
| Rasmussen v. Rasmussen | California (USA) | 23 August 2024 | Lawyer | Implied | Fabricated Case Law (4); Misrepresented Case Law (4) | Lawyer ordered to show cause why she should not be referred to the bar | — | — | |
|
While the Court initially initiated show cause proceedings that could have led to sanctions, the case was eventually settled. Nevertheless, the Court stated that it "intends to report Ms. Rasmussen’s use of mis-cited and nonexistent cases in the demurrer to the State Bar", unless she objected to "this tentative ruling". |
|||||||||
| N.E.W. Credit Union v. Mehlhorn | Wisconsin C.A. (USA) | 13 August 2024 | Pro Se Litigant | Implied | At least four fictitious cases | Warning | — | — | |
|
The court pointed out: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided. We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing" |
|||||||||
| Nitzan v. Adar BaEmakim Properties Ltd. | Magistrate Court (Israel) | 13 August 2024 | Lawyer | Implied | Fabricated Case Law (4); False Quotes Legal Norm (1); Misrepresented Case Law (5) | Matter referred to the Legal Department of the Court Administration | — | — | |
|
In response to a motion by the defendant (Adar BaEmakim Properties Ltd.), the plaintiff's counsel submitted a response that included several purported quotations from Israeli Supreme Court decisions to support his arguments. Judge Daniel Kirs discovered that these citations were problematic: party names did not match case numbers, decision dates were wrong, and one cited judge was misidentified. Crucially, the quoted text did not appear in the actual decisions, even when counsel was ordered to and did produce copies of the judgments he claimed to have cited. The judge considered counsel's conduct more severe than simply misattributing a minority opinion; it was the presentation of a series of non-existent Supreme Court rulings. He explicitly noted that Adv. Faris did not claim these were fabrications by an AI tool that he had failed to check (unlike in the Mata v. Avianca case). Instead, Adv. Faris maintained that he himself had prepared these "summaries" after reading the cases. Due to the severity of this conduct—presenting fabricated Supreme Court "quotations" and misrepresenting their origin—the judge ordered the matter to be referred to the Legal Department of the Court Administration for consideration of further action. Separately, the defendant's underlying request (to send clarification questions to a court-appointed expert) was granted. The judge found that the "severe misconduct" of the plaintiff's counsel constituted a "special reason" to allow this, even though the defendant had previously waived the opportunity. The plaintiff was ordered to pay the defendant NIS 600 in legal fees related to this part of the motion. (Summary by Gemini 2.5) |
|||||||||
|
Source: AI4Law
|
|||||||||
| Industria de Diseño Textil, S.A. v. Sara Ghassai | Canadian Intellectual Property Office (Canada) | 12 August 2024 | Lawyer | Implied | Fabricated Case Law (1) | Warning | — | — | |
| Mr D Rollo v. Marstons Trading Ltd | Employment Tribunal (UK) | 1 August 2024 | Pro Se Litigant | ChatGPT | Misrepresented Legal Norm (1) | Claim dismissed; AI material excluded from evidence under prior judicial order; no sanction but explicit judicial criticism | — | — | |
AI Use: The claimant sought to rely on a conversation with ChatGPT to show that the respondent’s claims about the difficulty of retrieving archived data were false. Ruling/Sanction: No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence. Key Judicial Reasoning: The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable". |
|||||||||
| Dukuray v. Experian Information Solutions | S.D.N.Y. (USA) | 26 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (3), Legal Norm (2) | No sanction; Formal Warning Issued | — | — | |
AI Use: Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation. Hallucination Details: Three nonexistent cases were cited. Each cited case name and number was fictitious; none of the real cases matching those citations involved remotely related issues. Ruling/Sanction: The court issued a formal warning to Plaintiff: any future filing containing fabricated citations or quotations will result in sanctions, including the striking of filings, monetary penalties, or dismissal. No sanction was imposed for this first occurrence, acknowledging Plaintiff's pro se status and likely ignorance of AI risks. Key Judicial Reasoning: Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste the resources of opposing parties and the courts. Plaintiff was formally warned, not excused. |
|||||||||
| Joe W. Byrd v. Woodland Springs HA | Texas CA (USA) | 25 July 2024 | Pro Se Litigant | Unidentified | Several garbled or misattributed case citations and vague legal references | No formal sanction | — | — | |
AI Use: The court did not confirm AI use but referenced a legal article about the dangers of ChatGPT and stated: “We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.” Ruling/Sanction: The court affirmed the trial court’s judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies. |
|||||||||
| Anonymous v. NYC Department of Education | S.D.N.Y. (USA) | 18 July 2024 | Pro Se Litigant | Unidentified | Fabricated Case Law (1) | No sanction; Formal Warning Issued | — | — | |
AI Use: The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI. Hallucination Details: Several fake citations were identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious. Ruling/Sanction: No sanctions were imposed at this stage, citing the special solicitude afforded pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency. Key Judicial Reasoning: The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. It cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct. |
|||||||||
| Lakaev v McConkey | Supreme Court of Tasmania (Australia) | 12 July 2024 | Pro Se Litigant | Implied | Fabricated Case Law (1); Misrepresented Case Law (1) | Appeal dismissed for want of prosecution | — | — | |
|
The appellant's submissions misrepresented a High Court case, De L v Director-General, NSW Department of Community Services, as bearing on false testimony, which was not the case's subject matter, and included a fabricated reference to Hewitt v Omari [2015] NSWCA 175, a case that does not exist. The appeal was dismissed in light of the lack of progress and the potential prejudice to the respondent. |
|||||||||
| Zeng v. Chell | S.D. New York (USA) | 9 July 2024 | Pro Se Litigant | Implied | Fabricated citations | Warning | — | — | |