Law Firm AI Adoption: So Many Choices

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report

One thing that stood out in the report was the discrepancy between individual legal professionals' use of AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of respondents said they were using general-purpose AI tools like ChatGPT and Claude for work purposes. That's pretty significant.

But 43% say their firms have no policy for the use of these AI tools, and only 9% have current guidelines in place. Seventy-one percent say their firms offer no training on the responsible use of AI. Only 54% say their firms have adopted legal-specific AI tools. I know. It's all too easy to do nothing and stick your head in the sand.

Importantly, the study included responses from over 1,300 legal professionals, 45% of whom were solo practitioners, who by necessity govern and train themselves.

But There May be More to the Story

But there could be something else at work. Firms currently face a myriad of AI choices. Just walking the exhibit hall at Legalweek, I noticed just about every vendor was offering some kind of AI tool. How do firms know which one is best for their client mix? And for that matter, is a single tool best for every practice group? Or for every lawyer? The fact is, no single tool is. That means firms must either make some hard choices (never easy in a consensus decision-making organization) or buy a multitude of tools.

Or throw up their hands and do nothing, letting individual lawyers decide for themselves, which drives them to the more public models like ChatGPT, Claude, or Gemini. That could in part explain the 8am findings. It's hard and time consuming for law firms to drink from the firehose of AI information and determine what is best for all their legal professionals and lines of business. So, they just ignore the issue altogether.

But having no guidelines or training risks all kinds of problems as we well know. Firms have to recognize lawyers are using the general tools and take steps to be sure it’s done responsibly.

The Hodge-Podge Solution

The other option, to which some firms resort, is to overcorrect and try to do too much. These firms turn to a variety of AI tools and experiment with AI across practice groups and lines of business. Driven in part by FOMO, firms are purchasing pilots and letting lawyers play with them. When one doesn't suit all, they move on to the next, and on and on.

They don’t think through what work or problems they need the AI tools to do and solve. They don’t take time to look at workflows and what lawyers need to practice day to day. It’s a hodge-podge solution.

But that too carries risks. I recently ran across an article in Artificial Lawyer by Ben Nicholson, the General Manager of Clio’s UK Enterprise. The thrust of the article was that firms employ various systems to do various tasks but that switching between them costs time and energy. As Nicholson puts it, “Fee earners jump between systems that were never designed to operate as one environment: time in one place, documents in another, client and matter data somewhere else again.” And when it comes to AI adoption, all too often matters are not defined correctly, workflows vary by team and location, so outputs drift. And as a result, workers get frustrated.

I hearken back to my days practicing. All too often, the clunkiness of the various systems with which I had to work resulted in me just going outside firm software to Google, Dropbox, and other tools that just worked. (I hope our IT Director doesn't read this.) The problem was I was busy and didn't have the time or energy to figure out how to use other tools.

If a firm's internally adopted AI tool is clunky or doesn't work well with other systems, then lawyers are going to flock to the easy-to-use public systems, particularly since they are already using and familiar with them. They don't cost much and they are easy to work with. Second, if a firm has multiple internal systems, figuring out which one is best for various tasks also takes time and energy. If that's not done, or not done correctly, it will likewise drive lawyers to the public systems.

What Does This Tell Us?

First and foremost, firms need to recognize lawyers are going to use and are using the publicly available systems. It’s a fact of life. And for some things, let’s face it, that makes sense. So, trying to stop them from using these tools is futile and counterproductive.

Moreover, creating rules that aren't grounded in reality breeds disrespect and creates the risk that those affected will conclude all the rules are rules for the sake of rules. The result is they simply ignore the lot of them.

All of which means whatever firms decide, at the very least they need to double down on training and making sure lawyers and legal professionals know when to use and perhaps more importantly when not to use these systems. Realistic guidelines are important, even if a firm does not adopt an AI tool across the board. Even solos, for whom the notion of formal guidelines seems silly, still need to investigate and understand the AI systems.

Relatedly, law firms need to recognize that these systems are getting better and can do more of the work lawyers need. That's not going to change, so what is imposed today may make little sense tomorrow.

Firms need to recognize that, if they do decide to adopt internal systems, those systems need to be intuitive and easy to use. They need to help lawyers do their specialized work — what a litigator needs from an AI system is far different from what a trusts and estates lawyer needs. So, purchasing decisions need to be made with this in mind. For some firms, one system is fine. For others, various systems may be necessary. But those decisions require careful analysis of workflows first and of what the tools can do second.

Finally, firms need to be wary of AI (and other technology) creep. Trying to house too many systems and make them work together can produce the unintended result of lawyers not using any of them and turning to the public, low-cost tools for tasks that, for now, they shouldn't be used for.

Bottom line: firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools. Neither doing nothing nor buying multiple AI platforms is the solution. Spending the time to study and investigate the problems and the solutions on an ongoing basis is. So, managing partners, roll up your sleeves and do the hard work.


Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.

The post Law Firm AI Adoption: So Many Choices appeared first on Above the Law.