This database tracks legal cases in which generative AI was used to make an argument or to prove a point, excluding hallucination cases (which are tracked here). It covers all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal (see excluded examples). While it aims to be exhaustive (10 cases identified so far), the database is a work in progress and will expand as new examples emerge. If you know of a case that should be included, feel free to contact me.
| Case | Court / Jurisdiction | Date | Party Using AI | AI Tool | Nature of Output | Outcome | Details |
|---|---|---|---|---|---|---|---|
| Ferlito v. Harbor Freight Tools USA | E.D.N.Y. (USA) | 23 April 2025 | Expert | ChatGPT | ChatGPT used post hoc to confirm an expert’s conclusion | Expert testimony admitted over defendant’s objections | **AI Use:** Plaintiff’s expert, Mark Lehnert, used ChatGPT after writing his report to verify whether his proposed method for securing a tool head to its handle was commonly accepted. During a Daubert hearing, he stated he was “quite amazed” to find that ChatGPT’s answer supported his view. He explicitly confirmed he did not rely on the AI response to form his conclusions.<br><br>**Ruling/Sanction:** Defendant’s motion to exclude the expert was denied. The court found Lehnert’s lack of formal engineering credentials did not undermine his long practical experience. His use of ChatGPT as a corroborative tool did not affect admissibility, particularly as the AI was not relied on in forming the conclusions. The court emphasized this distinction and drew parallels with Kohls v. Ellison, Mata v. Avianca, and Park v. Kim to illustrate the limits of acceptable AI use.<br><br>**Key Judicial Reasoning:** The court held that Lehnert was qualified based on decades of relevant experience, and that his methodology (proposing a known safer alternative and pointing to existing products that use it) met Rule 702’s reliability threshold. ChatGPT was not used improperly, as the expert had already reached his conclusions independently. The court distinguished this case from Kohls (where fake articles were cited) and Mata (where fabricated precedents were submitted), finding Lehnert’s use of AI did not impair the integrity or reliability of his testimony. |
| Ross v. USA | D.C. Court of Appeals (USA) | 20 February 2025 | Judge (dissenting) | ChatGPT | Answers to “what-if” queries framed as common-sense inferences | Majority reversed the conviction for insufficiency of evidence; the dissent’s AI-aided arguments did not carry the day | **AI Use:** In his dissent, Judge Deahl systematically leveraged ChatGPT as a proxy for “common knowledge beyond a reasonable doubt.” By submitting detailed prompts about canine heatstroke and cold-weather tolerance, he extracted model outputs that he reframed as judicial inferences: an unequivocal finding of harm under the facts here versus a qualified response in a colder-weather scenario. This comparison underscored his view that inferring harm from the record was as reliable as querying an LLM.<br><br>**Key Legal Reasoning:** The majority held that the government’s evidence (absent direct proof of the car’s interior temperature or observable distress symptoms) could not sustain a conviction under D.C. Code § 22-1001’s requirement of “proper protection from the weather,” particularized to this dog and context. It emphasized that “common sense cannot substitute for evidence” when critical factual gaps exist, reversed the judgment of conviction, and directed entry of acquittal for insufficient proof. The dissent countered that the lay witness testimony, the emergency dispatch records, and an experienced animal-control officer’s opinion together established a “plain and strong likelihood” of harm (an inference he bolstered with ChatGPT outputs), yet this reasoning did not command a majority.<br><br>Source: Volokh |
| Aleto Beheer BV v. Venlo Municipality | Dutch Council of State (Netherlands) | 29 January 2025 | Lawyer | ChatGPT | ChatGPT used to produce generalized market claims | Arguments rejected; no formal sanction, but judicial disqualification of the AI-sourced material | **AI Use:** Shortly before the hearing, Aleto submitted a supplementary document claiming that differences in environmental zoning category significantly affect real estate values in North Limburg. The document’s information was obtained via ChatGPT; the prompt put to ChatGPT was not submitted, nor were sources or independent verification provided.<br><br>**Ruling/Sanction:** The Court refused to consider the ChatGPT-based information as valid evidence, emphasizing that real estate valuation disputes involve complex expertise that cannot be substituted by AI outputs. Aleto’s appeal was dismissed, and the Council expressly reaffirmed that, without a proper independent expert report, ChatGPT statements are legally worthless.<br><br>**Key Judicial Reasoning:** Judicial decision-making requires rigorously tested, verifiable inputs. AI outputs that do not disclose the input question or underlying data, and that themselves disclaim reliability, cannot satisfy this standard. Especially in technical fields such as property tax and environmental valuation, human expert reports, not AI summaries, are mandatory. |
| Plaintiff v. Minister of Asylum | The Hague District Court (Netherlands) | 6 November 2024 | Lawyer | ChatGPT | ChatGPT output cited as authority for factual assertions about surveillance practices | Argument discounted; no formal sanction, but judicial criticism recorded | **AI Use:** During the hearing, the plaintiff’s representative cited an answer generated by ChatGPT to argue that the Moroccan authorities systematically monitor political dissidents abroad, implying a risk of persecution on return. However, the representative failed to provide the actual question, the ChatGPT output, or any independent corroboration.<br><br>**Ruling/Sanction:** The Court held the ChatGPT output legally irrelevant and gave it no probative value. While it did not impose sanctions on the plaintiff’s counsel, it criticized the reliance on unverifiable AI content in judicial proceedings. The plaintiff’s asylum appeal was ultimately dismissed.<br><br>**Key Judicial Reasoning:** The Court emphasized that judicial decisions must be based on verifiable evidence. AI-generated content without transparent sourcing or record authentication fails even minimal evidentiary standards. Citing such outputs does not meet the burden of proof for substantiating claims of future persecution. |
| X v. Y | Netherlands | 26 July 2024 | Judge | ChatGPT | To assist in calculating technical elements | | |
| Hussain v. State of Manipur | Manipur High Court (India) | 23 May 2024 | Judge | GPT-3.5 | Background knowledge | | The Court disclosed using "Google and ChatGPT 3.5" to conduct extra research on factual details that counsel had failed to cover.<br><br>Source: Alvin Antony |
| J.G. v. NYC Department of Education | S.D.N.Y. (USA) | 22 February 2024 | Lawyer | GPT-4 | Relied on GPT-4 to argue fee rates | Judicial rebuke and rate discount in fees award | **AI Use:** The Cuddy Law Firm used ChatGPT-4 to purportedly validate and support its request for elevated attorney billing rates in its motion for attorneys’ fees under the IDEA, invoking ChatGPT as a “cross-check” for the reasonableness of the requested rates ($550–$600 per hour for senior lawyers, $375–$425 for associates).<br><br>**Hallucination Details:** No fake cases or authorities were cited. However, the Court found the reliance on ChatGPT-4 wholly inappropriate, calling it “utterly and unusually unpersuasive” and emphasizing that ChatGPT’s conclusions lacked transparency, reliability, and any grounding in actual legal practice or precedent. The Court compared this misuse to the notorious ChatGPT hallucination cases (Mata v. Avianca and Park v. Kim).<br><br>**Ruling/Sanction:** The Court significantly reduced the Cuddy Law Firm’s requested fee rates (e.g., from $550 down to $400/hour for senior lawyers, and proportionately for others) and explicitly warned against using ChatGPT or similar tools as evidence in fee petitions. No financial sanction was imposed, but the Court expressed clear disdain and advised against repeating this practice.<br><br>**Key Judicial Reasoning:** The Court reaffirmed that billing rates must be based on prevailing legal market conditions and judicial precedent, not unverifiable or speculative AI outputs. The opinion underscores that AI tools, absent verifiable support, cannot serve as evidence in legal argumentation for judicial decision-making. |
| Louboutin v. the Shoe Shop | Delhi High Court (India) | 22 August 2023 | Lawyer & Judge | ChatGPT | To establish reputation | Tool cannot be relied upon because answers change depending on the query | After counsel for the plaintiff offered an answer from ChatGPT-3.5 about whether Louboutin is known for spiked men’s shoes, the Court reproduced two screenshots of ChatGPT-3.5 answers in the judgment and concluded: "28. The above responses from ChatGPT as also the one relied upon by the Plaintiffs shows that the said tool cannot be the basis of adjudication of legal or factual issues in a court of law. The response of a Large Language Model (LLM) based chatbots such as ChatGPT, which is sought to be relied upon by ld. Counsel for the Plaintiff, depends upon a host of factors including the nature and structure of query put by the user, the training data etc. Further, there are possibilities of incorrect responses, fictional case laws, imaginative data etc. generated by AI chatbots. Accuracy and reliability of AI generated data is still in the grey area. There is no doubt in the mind of the Court that, at the present stage of technological development, AI cannot substitute either the human intelligence or the humane element in the adjudicatory process. At best the tool could be utilised for a preliminary understanding or for preliminary research and nothing more."<br><br>Source: Alvin Antony |
| Jaswinder Singh v. State of Punjab | Punjab & Haryana High Court (India) | 27 March 2023 | Judge | ChatGPT | Approach to a legal scenario | | "9. To further assess the worldwide view on bail when the assault was laced with cruelty, the use of Artificial intelligence platform which has been trained with multitudinous data was made. The following question was put to ChatGPT Open AI [https://chat.openai.com/chat]: 10. What is the jurisprudence on bail when the assailants assaulted with cruelty? Response of ChatGPT: [...] 11. Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor." |
| Salvador Espitia Chavez v. Salud Total | Colombia | 1 January 2023 | Judge | ChatGPT | Legal principles; drafting | | In preparing the decision, the court clerk formulated a series of targeted prompts to ChatGPT, first asking for an overview of statutory provisions and constitutional principles mandating health-care exemptions for minors with disabilities, then requesting a concise analysis of Colombian jurisprudence on waiving copayments in cases of proven financial hardship. The AI’s responses were integrated verbatim into the draft opinion where they summarized both the legal framework and precedent.<br><br>Source: Alvin Antony |