Amid the legal industry’s ongoing shift from AI experimentation to AI implementation, a Legalweek panel explored a counterintuitive question this past Thursday: “If you’re *not* using AI, are you committing malpractice?”
Sponsored by Steno and led by Joe Stephens, the company’s legal solutions director, the panel drew inspiration from recent statements by Manhattan federal judge Jesse Furman, who suggested that failing to use AI could create its own legal risks, like fee disputes.
In addition to his role at Steno, Stephens serves as a clinical lecturer at Texas Tech University School of Law. Law students now have unprecedented access to information, he noted at Legalweek, but they also face a constant stream of new questions about how to use it.
“That, to me, is the interesting pivot here around this ever-evolving idea of ‘malpractice,’” he says. “We have this industry standard for how the profession has existed. Now, there’s this sort of complete interruption with the tech that we’re seeing be deployed.”
In a wide-ranging, hourlong discussion, the Legalweek panelists also explored the evolving definition of “competence,” the risks of “razzle dazzle” from AI applications, and parallels to pharmaceutical litigation, among other topics.
Here are some takeaways. (The panelists’ opinions are their own, and not necessarily those of their organizations.)
‘Rare-but-Serious Side Effects’
You wouldn’t think that AI tools have a close analogue in the pharmaceutical industry. But according to Shannon Boettjer of Jaspan Schlesinger Narendran LLP, there are strong similarities in how these products exist in the marketplace.
With both drugs and AI tools, there are inherent risks and inherent benefits, she notes. Some uses offer major benefits with minimal risk; others deliver fewer benefits while carrying larger risks.
“When you take a drug, this is what I would suggest to you as a pharma litigator: Pull out those instructions and look for common side effects, and then keep reading down, and go to rare-but-serious side effects,” she says. “Rare-but-serious side effects are the really high-risk situations. We’re still figuring out what some of the rare-but-serious side effects are with AI.”
To find those sought-after uses that bring big benefits at lower risk, lawyers can start by applying AI tools to items that are not the lawyer’s final work product.
“If you used to take 15 hours to do a deposition digest, and now you can have a tool digest for you and cut that down, it’s an enormous immediate benefit,” Boettjer says.
The human is still actually reading the depositions and crafting the arguments, she notes. And as long as a more senior lawyer stays involved in the process, there is little risk, including to skills development.
“The tedium of finding the pages and typing the pages, that did not make me the lawyer I am today,” she says.

‘Build Fast, Break Fast’
While pharma and AI may have some strong parallels, the legal and technology industries are more like polar opposites.
This is particularly true with the “build fast, break fast” ethos common in Silicon Valley, notes Allison Harbin, the AI portfolio manager at Jenner & Block.
While a technology company may dismiss a failure as an “edge case,” a law firm, of course, cannot operate that way or accept that premise.
Law firms instead need to think through how to create a governance framework that protects both the firm and its clients, while still maintaining the flexibility to accommodate rapid changes in technology, Harbin says.
“I think when we’re looking at that, it’s involving, ‘What is a “competency”?’ What does it mean to be ‘competent’ in AI right now?” she asks. “And that is always a moving bar. And I think it depends on how much you are using it, right? And it depends on how you’re using it.”
Advances in AI can also raise questions surrounding the definition of lawyer “competence” more broadly.
If AI can perform the majority of tasks currently assigned to junior associates, Stephens asks, does this change the definition of what makes a “competent” lawyer? Or is it merely a question of shifting economics, where AI upends the billable hours junior associates can maintain?
Matthew Krengel, director of information retention counseling at Cooley LLP, worries that inexperienced attorneys may fail to develop the analytical thinking skills necessary to properly refine and fact-check the work done by AI.
Developing these skills, he says, requires doing the very work that is being outsourced to technology. He wonders if the dynamic is similar to a “snake eating its own tail,” where clients request significant AI use, but this reliance on AI prevents the next generation of lawyers from being able to effectively finalize what AI produces.
“To the extent we’re seeing this, where it’s going to go, I’m very concerned about the competency of the new attorneys and those that need to be learning as they go,” he says.

‘The Razzle Dazzle’
The panelists also expressed concern about the role of automation bias, the human tendency to over-trust machine-produced results.
Boettjer notes that these concerns long predate AI, citing fatal car accidents in which drivers followed faulty GPS directions off a cliff.
Advances in agentic AI in particular could supercharge these concerns.
Harbin remembers attending a recent tech industry conference and seeing a use case presented where multiple AI agents were linked.
“One agent made, I don’t know, the blueprint for a sneaker, the other one did the marketing campaign, the other one did the emails for the marketing campaign,” she says. “And they were like, ‘Look at this world where everything is automated. We remove the human in the loop because they’re so cumbersome.’”
In law firms, of course, similarly removing the “human in the loop” ought to be a nonstarter for most endeavors because of the risks involved, the panelists agree. Even in lower-risk industries, it could invite serious problems.
Lawyers also run the risk of “being blown away by the razzle dazzle” of AI applications, which can, for example, create a data analytics dashboard that appears extremely high-quality at a glance.
These types of uses can make outputs appear more substantive than they are, Harbin notes.
“That razzle dazzle is a big risk,” she says. “And you need to get past that phase of: ‘Whoa, I can’t believe it did this,’ and start thinking about: ‘But is this useful?’”
The post What Even Is AI ‘Competence’? It Depends. appeared first on Above the Law.