

How Law Firms Are Actually Using AI in 2026

8 min read

Written by Daniel Hartnett

Last updated: March 2026

The conversation in legal has shifted. Two years ago, AI at a law firm usually meant a pilot program that a few curious partners were watching from a safe distance. Today, the question isn't whether to adopt AI. It's where, and how, and which tool actually fits the workflow.

What follows is a grounded look at the four use cases where adoption is actually happening: legal research, contract drafting and review, document review and due diligence, and litigation support. For each one, I'll name the tools firms are turning to and what they're being used for. This isn't a product review. If you want a deeper breakdown of specific tools, the Best Legal AI Tools overview covers each one in detail.

A note on my perspective

I sold CoCounsel, Westlaw, and Practical Law to attorneys and law firms during my time at Thomson Reuters. My perspective on legal AI comes from the sales side: understanding how firms evaluate these tools, what their due diligence looks like, and where procurement conversations tend to stall. For tools I haven't sold directly, I'm drawing on what firms report publicly, vendor documentation, and the adoption patterns I've seen play out. Where I'm speaking from direct experience, I'll say so.

Legal research

Legal research is where AI adoption is most advanced, and for good reason. It's the workflow most amenable to large language models: the input is a legal question, the output is a structured summary of authority, and the tools can be evaluated against a known standard of what good research looks like.

CoCounsel is the product I know best here. At Thomson Reuters, I watched it go from a novel demo to something attorneys were actually depending on. The integration with Westlaw and Practical Law is the core value: CoCounsel can build a research plan, pull authority from Westlaw, verify that cited cases are still good law, and deliver structured work product within a single workflow. Since launching in March 2023 as the first legal AI assistant built on GPT-4, the platform has expanded significantly and now draws on multiple frontier AI models alongside proprietary Thomson Reuters technology. CoCounsel Legal, launched in 2025, adds agentic AI capabilities for deeper, multi-step research workflows.

For firms already running on LexisNexis, Lexis+ AI serves the same function from the other side of the research duopoly. In February 2026, LexisNexis rebranded the product to Lexis+ with Protégé, reflecting expanded capabilities across both legal and general AI tasks. The platform now runs on a combination of Anthropic's Claude and OpenAI models. Firms report that the database depth and the natural fit with existing LexisNexis workflows are the main reasons they stay with it. The Lexis+ AI vs CoCounsel comparison goes deeper on how to choose between them.

Harvey is known for research at the enterprise end of the market. Firms report that Harvey handles complex, multi-jurisdiction research and regulatory analysis with strong output quality. The platform is built on Azure OpenAI infrastructure with a custom-trained model developed in partnership with OpenAI, and its Intapp integration makes firm-wide AI use auditable across client matters, which is a meaningful selling point for large firms with strict conflict management requirements.

The practical question for most firms is simpler than it looks: if you're on Westlaw, start with CoCounsel. If you're on LexisNexis, start with Lexis+ AI. The integration advantage is real, and switching research databases to accommodate an AI tool is not a trade worth making for most practices.

Contract drafting and review

Contract work is the second major area of adoption, and here the tools diverge more sharply by workflow.

Spellbook is the most widely used tool in this space for firms whose attorneys draft primarily in Microsoft Word. Powered by GPT-4o, it works as an add-in inside Word: attorneys draft as they normally do, and Spellbook reads the document in context, flags deviations from market standards, and suggests language. Firms report that it benchmarks contract terms against more than 2,000 industry standards, which gives junior associates and in-house counsel a fast way to identify where a draft diverges from market norms without a senior attorney reviewing every clause. Spellbook is used by approximately 4,000 law firms and in-house legal teams. Spellbook Associate, its more recent agent product, handles more complex drafting workflows. The Harvey vs Spellbook comparison covers the two most common tools at the top of this category in detail.

Harvey is also used for complex contract drafting, particularly at firms where the drafting work involves multi-party agreements, cross-border transactions, or regulatory-heavy documents where accuracy under ambiguity matters. Firms report that Harvey's output on complex drafting tasks is strong, though the platform is typically accessed through a full enterprise procurement process rather than a self-serve trial.

CoCounsel has contract drafting capabilities as part of its broader research and drafting workflow, and for firms already running it for research, it's a natural extension. The differentiation here is less about which tool drafts better in isolation and more about where the contract work sits in the broader workflow.

Document review and due diligence

Document review and due diligence are where the specialist tools have the clearest advantage over general-purpose platforms. The volume demands are different, the accuracy requirements are more structured, and the value of a proprietary legal model trained on contract data shows up most clearly here.

Luminance takes a notably different approach to AI than most tools in this market. Rather than building on a third-party model, Luminance trained its own proprietary model on over 150 million verified legal documents, giving it the ability to recognize more than 1,000 specific legal concepts. Firms report that its traffic light risk system allows for rapid issue flagging across large document sets, and that Luminance Autopilot, which enables autonomous NDA negotiation, is a capability they haven't found elsewhere. For firms doing high-volume M&A due diligence or managing large contract portfolios, the specialist legal training is a genuine differentiator.

Kira Systems, now part of Litera, has been in the contract review space longer than most of the tools in this market. In January 2026, Litera launched a significant platform enhancement: a hybrid AI architecture that combines generative AI with Kira's own proprietary models trained on over one million legal contracts. Firms report that Kira's 1,400 pre-trained clause types and the new Generative Smart Fields feature, which allows custom extraction fields without prior training, make it well suited for structured due diligence workflows where consistency and auditability are as important as speed. Kira is used by 71% of Fortune 100 companies across M&A, real estate, banking, and IP matters.

Harvey also offers document review capabilities through its Vault product, and firms doing large-scale review report that it fits naturally into the same platform used for research and drafting. The distinction between Vault and the specialist tools comes down to depth: Luminance and Kira are purpose-built for this workflow; Harvey's Vault is part of a broader platform where research is the primary use case.

Litigation support

Litigation is where AI adoption has been slower and more uneven, partly because the stakes are higher and the tolerance for error is lower, and partly because the workflow is less linear than contract review or research.

Harvey is the most frequently cited tool among litigators at large firms. Firms report that it handles brief writing, motion drafting, regulatory analysis, and case strategy summarization with output quality that holds up under attorney review. Harvey's Workflow Builder, which allows multi-step litigation tasks to be automated, is a capability that firms are beginning to build into their standard workflows rather than treating as a one-off tool.

CoCounsel is the natural fit for litigators already using Westlaw for case law research. The integration means that a litigator can move from identifying relevant authority to verifying citations to incorporating them into a brief within a single platform. From my time selling it at Thomson Reuters, the firms that got the most value from CoCounsel in litigation were the ones who used it as a research and drafting workflow rather than just a search overlay.

Lexis+ AI serves a similar function for litigators on LexisNexis, with the same research-database advantage: access to the full LexisNexis case library with AI-generated summaries and analysis in context. Firms report that the integration with Lexis Create+ makes it easier to move from research into drafting without switching tools.

What adoption actually looks like in practice

Most firms are not rolling out a single AI tool firm-wide. The pattern I've seen more often is adoption by practice group, driven by one or two attorneys who use a tool consistently and build enough internal credibility to prompt a wider evaluation. Research tools tend to get traction first because the feedback loop is immediate: you ask a legal question, you get an answer, and you can verify it against what you already know.

The firms that struggle with adoption are usually the ones that bought a tool that looked impressive in a demo but didn't map to how their attorneys actually work. A litigation practice that buys a contract review tool will underuse it. A corporate practice that buys a research-first platform without thinking about document review will hit a ceiling quickly. The use case has to come first.

Data privacy and professional responsibility are the questions that slow procurement down more than price. The firms that move fastest through evaluation are the ones that arrive with specific questions: does this tool use our documents to train or improve the model? Which AI providers process our data? How are conflicting client matters handled? Vendors have detailed answers to these questions, and the answers vary significantly across tools.

Legal AI is also not the only software category where firms are making evaluation decisions right now. Many of the firms I've spoken with are running parallel evaluations across legal AI and other business software, including CRM platforms for business development and client management. If you're evaluating software in more than one category, the ViewSpectra CRM assessment is a useful starting point for the CRM side of that process.

About the author

Daniel Hartnett


Daniel Hartnett is the founder of ViewSpectra. He has held sales roles at Thomson Reuters and U.S. Bank across enterprise software and financial services. He built ViewSpectra to help businesses make better technology decisions without relying on vendor-sponsored rankings.

Some links on this page may be affiliate or referral links. ViewSpectra may earn a commission at no extra cost to you. This does not influence our recommendations.

Free · 5-question assessment

Not sure which tool fits your firm?

Answer 5 questions about your firm size, use case, budget, and integration needs. We'll match you to the right Legal AI tool.

Take the free Legal AI assessment