
Federal Judge Uses ChatGPT To Make His Decision And It Makes More Sense Than You’d Think

To the extent federal judges acknowledge generative artificial intelligence at all, it’s to newsjack some cheap publicity by decrying it as a scourge or to cynically distract from discussing the ethical cesspool that’s developed on their watch. Which makes a recent Eleventh Circuit concurrence by Judge Kevin Newsom an intriguing change of pace. Because, over the course of a 32-page opinion, Newsom explains how he used ChatGPT and other large language model products to reach his decision.

And it makes a lot of sense.

An insurance company refused to cover its landscaper policyholder in a negligence claim over the installation of an in-ground trampoline. In fairness, that sounds like a deathtrap in the making even if perfectly installed, so I don’t know what a negligence claim would look like, but that’s neither here nor there. The question presented on appeal was whether installing an in-ground trampoline counts as “landscaping” under the policy.

Judge Newsom begins his concurrence by admitting that he agrees with the opinion in full but wishes to “pull back the curtain on the process by which I thought through one of the issues in this case.”

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Charged with determining whether “in-ground trampoline installation” fell within the ordinary meaning of “landscaping,” Newsom spun his wheels on traditional research before realizing that artificial intelligence systems trained on a wide swath of accumulated knowledge could probably provide a more “ordinary” answer than digging through cases:

And it was midway along that journey that I had the disconcerting thought that underlies this separate writing: Is it absurd to think that ChatGPT might be able to shed some light on what the term “landscaping” means? Initially, I answered my own question in the affirmative: Yes, Kevin, that is positively absurd. But the longer and more deeply I considered it, the less absurd it seemed.

As summarized by Stephanie Wilkins over at Legaltech News:

In the ensuing 22 pages (including an appendix), Newsom lays out in detail a number of benefits he believes LLMs bring to the “textualist toolkit … to inform ordinary-meaning analyses of legal instruments.” These include:

“LLMs train on ordinary-language inputs,” making them suitable for ordinary-meaning analysis, Newsom said.

“LLMs ‘understand’ context.”

“LLMs are readily accessible” to judges, lawyers and ordinary citizens.

“LLM research is relatively transparent,” more so, Newsom argues, than most dictionaries.

“LLMs hold advantages over other empirical interpretive methods.”

This is a straightforward contract case where the plain language is relatively uncontroversial. But another federal judge has already previewed an extension of Newsom’s logic to a more odious application of “meaning.” John Bush, a judge placed on the Sixth Circuit by virtue of running a successful anti-gay blog, recently told a FedSoc chapter that he sees a future where generative AI can scan the historical record and cherry-pick news clippings to build originalist opinions. Why wade through acontextual history nuggets yourself when a machine can instantly deliver only the ones relevant to reverse-engineering a decision to match the contemporary policy goals of the Republican Party!

That said, an ethical and transparent approach to using these tools should not be thrown out with the cynical FedSoc bathwater.

Suffice it to say, my interest was piqued. But I definitely didn’t want to fall into the trap of embracing ChatGPT’s definition just because it aligned with my priors. (Bad.) So, in what might have been a mistake—more on that later—we went ahead and asked it the ultimate question: “Is installing an in-ground trampoline ‘landscaping’”? ChatGPT responded as follows:

Yes, installing an in-ground trampoline can be considered a part of landscaping. Landscaping involves altering the visible features of an outdoor area for aesthetic or practical purposes, and adding an in-ground trampoline would modify the appearance and function of the space. It’s a deliberate change to the outdoor environment, often aimed at enhancing the overall landscape and usability of the area.

That’s a better “person on the street” meaning than combing through years of insurance litigation opinions would yield. And an LLM is a lot less likely to hallucinate on a question this clear-cut than on, say, the vagaries of the Montreal Convention on airline liability.

Less likely.


11th Circuit Judge Uses ChatGPT in Deciding Appeal, Encourages Others to Consider It [Legaltech News]

Earlier: If You Aren’t The One Writing Your Briefs In Texas, It Better Be Some Other Human!

Originalist Judge Hails Future Where Time-Consuming Task Of Compiling Fake History Is Replaced By AI Hallucinations

John Roberts Once Again Uses Judiciary’s Annual Report To Express His Utmost Contempt For The Public

For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
