Research Report (2024 Edition)

Thomson Reuters: Generative AI in Law — A Practical Guide

Analysis of AI adoption in law: based on data from 3,000+ firms, 35% of legal work can be enhanced by current AI

Published January 1, 2024

Executive Summary

Comprehensive analysis of AI adoption in the legal sector. Based on data from 3,000+ law firms and corporate legal departments. 35% of legal work can be enhanced by current AI. Top use cases: document review (72% adoption), legal research (65%), contract analysis (58%). Includes ROI framework for legal AI investments.

Thomson Reuters' practical guide addresses the legal profession's distinctive relationship with generative AI—a technology that simultaneously promises transformative productivity gains and poses unique risks in a domain where accuracy, attribution, and professional accountability are paramount. The guide provides law firms and corporate legal departments with a structured framework for evaluating, piloting, and scaling generative AI tools across legal workflows including research, document drafting, contract analysis, due diligence, and client communication. Particular attention is given to the profession-specific risks of AI hallucination in legal contexts where fabricated case citations can constitute professional misconduct, confidentiality obligations that constrain the use of cloud-based AI services for client matter work, and the evolving regulatory landscape governing AI-assisted legal practice across jurisdictions. The guide's practitioner-oriented approach delivers actionable recommendations rather than theoretical frameworks, reflecting Thomson Reuters' deep understanding of legal workflow realities.

Published by Thomson Reuters (2024)

Key Findings

49%

Legal research time decreased substantially when attorneys used GenAI tools with jurisdiction-specific training and citation verification

Reduction in average legal research duration for common commercial law queries when using GenAI research assistants validated against jurisdictional case law databases.

93%

Contract review automation achieved high accuracy in identifying non-standard clauses and deviations from playbook terms

Clause identification accuracy for GenAI contract review tools across merger agreements, NDAs, and supply contracts when measured against senior associate review as baseline.

12%

Hallucination risk in legal AI outputs necessitated mandatory human verification workflows before client-facing deliverable release

Of GenAI-generated legal citations referenced non-existent cases or incorrectly characterised holdings in pre-verification testing, underscoring the imperative for attorney review.

31%

Law firm billing model disruption accelerated as AI efficiency gains compressed time-based fee structures

Of AmLaw 200 firms reported active exploration of alternative fee arrangements driven partially by client awareness that AI reduces attorney hours for routine legal work.


About This Research

Publisher: Thomson Reuters · Year: 2024 · Type: Applied Research

Source: Thomson Reuters: Generative AI in Law — A Practical Guide

Relevance

Industries: Professional Services · Pillars: AI Readiness & Strategy · Use Cases: Document Processing & Automation · Regions: Southeast Asia

The guide identifies legal research augmentation as the highest-value near-term application, where generative AI dramatically accelerates the identification and synthesis of relevant case law, statutes, and regulatory guidance. Contract review and analysis represents the second priority, with AI tools capable of extracting key provisions, identifying unusual clauses, and comparing terms against benchmark positions at speeds that transform the economics of large-scale document review. Document drafting assistance—where AI generates initial drafts that lawyers refine—offers the third major value opportunity, though the guide cautions that output quality varies substantially across document types and complexity levels.

The guide devotes particular attention to the hallucination risk that carries uniquely severe consequences in legal contexts. AI-fabricated case citations have already resulted in judicial sanctions and professional discipline proceedings, creating acute reputational and liability exposure. Recommended safeguards include mandatory verification workflows where all AI-identified legal authorities are confirmed against authoritative databases before citation, retrieval-augmented generation architectures that ground AI outputs in verified legal sources, and clear firm-wide policies prohibiting the submission of AI-generated legal content without human verification.
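As a hedged illustration of the mandatory verification workflow, release of a draft could be gated on every AI-proposed citation matching an authoritative record. Everything here is hypothetical: `KNOWN_AUTHORITIES` stands in for a real citator or case-law database, and the case names are invented for the example.

```python
# Illustrative sketch only: gate document release on citation verification.
# KNOWN_AUTHORITIES stands in for a real citator or case-law database lookup;
# the case names below are invented for the example.

KNOWN_AUTHORITIES = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Acme Corp., 987 F.2d 654 (2d Cir. 1993)",
}

def verify_citations(citations):
    """Split AI-proposed citations into verified and flagged lists."""
    verified = [c for c in citations if c in KNOWN_AUTHORITIES]
    flagged = [c for c in citations if c not in KNOWN_AUTHORITIES]
    return verified, flagged

draft = [
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Nguyen v. Fictional Holdings, 555 F.3d 1 (1st Cir. 2049)",  # hallucinated
]
verified, flagged = verify_citations(draft)
release_allowed = not flagged  # any flagged citation blocks release pending attorney review
```

A production workflow would query a live citator rather than a static set, but the design point is the same: verification is a hard gate, not an optional check.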

Confidentiality and Data Governance

Legal professional obligations regarding client confidentiality impose constraints on generative AI deployment that do not apply in most other professional contexts. The guide provides detailed guidance on evaluating AI vendor data handling practices, implementing technical controls that prevent client information from entering model training pipelines, and establishing matter-specific AI usage policies that account for varying confidentiality sensitivity levels across different client engagements. On-premises and private cloud deployment options are evaluated for organisations with the most stringent confidentiality requirements.
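A matter-specific usage policy of the kind described above can be reduced to a simple mapping from confidentiality classification to permitted deployment tiers. The tier and level names below are assumptions for illustration, not taken from the guide.

```python
# Hypothetical sketch of a matter-specific AI usage policy: which deployment
# tiers are permitted depends on the matter's confidentiality classification.
# Tier and level names are illustrative assumptions, not from the guide.

ALLOWED_DEPLOYMENTS = {
    "public":       {"vendor_cloud", "private_cloud", "on_premises"},
    "confidential": {"private_cloud", "on_premises"},
    "restricted":   {"on_premises"},
}

def is_permitted(sensitivity: str, deployment: str) -> bool:
    """Default-deny: an unknown sensitivity level permits no AI deployment."""
    return deployment in ALLOWED_DEPLOYMENTS.get(sensitivity, set())
```

The default-deny behaviour matters: a matter whose sensitivity has not yet been classified should not be eligible for any AI tooling.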

Key Statistics

- 49% faster legal research with jurisdiction-validated GenAI tools
- 93% clause identification accuracy in AI-assisted contract review
- 12% of AI-generated legal citations were hallucinated or incorrect
- 31% of major law firms exploring AI-driven alternative fee models

Source: Thomson Reuters: Generative AI in Law — A Practical Guide

Common Questions

How can firms mitigate hallucination risk in AI-generated legal work?

The guide recommends a multi-layered approach:

- mandatory human verification workflows in which all AI-identified legal authorities are confirmed against authoritative databases before citation in any legal document;
- retrieval-augmented generation architectures that constrain AI outputs to verified legal sources rather than allowing unconstrained generation;
- clear firm-wide policies explicitly prohibiting the submission of AI-generated legal content without practitioner verification; and
- regular training programmes that educate lawyers about the specific failure modes of generative AI in legal contexts, so they can apply appropriately calibrated scrutiny to AI-assisted outputs.
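The retrieval-augmented constraint mentioned above can be sketched in a few lines: the model may only draw on passages retrieved from a verified source store, and a query with no verified support is escalated rather than answered. The store contents and the toy keyword retriever are hypothetical stand-ins for a real legal search index.

```python
# Minimal sketch of retrieval-constrained generation: answers may only be
# grounded in passages from a verified source store; no retrieval, no answer.
# The store contents and the toy keyword retriever are hypothetical.

VERIFIED_SOURCES = {
    "case-smith": "Smith v. Jones: limitation period for contract claims is six years.",
    "statute-12": "Statute s.12: written notice required before termination.",
}

def retrieve(query: str) -> list:
    """Toy retriever: return verified passages sharing a keyword with the query."""
    terms = set(query.lower().split())
    return [text for text in VERIFIED_SOURCES.values()
            if terms & set(text.lower().split())]

def grounded_answer(query: str) -> str:
    passages = retrieve(query)
    if not passages:
        return "ESCALATE: no verified authority found for this query."
    # A real system would pass `passages` to the model as its only context.
    return "Answer grounded in: " + " | ".join(passages)
```

The refusal branch is the point of the architecture: an unconstrained model would answer anyway, which is exactly the failure mode the guide warns against.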

How can firms protect client confidentiality when deploying generative AI?

The guide recommends:

- evaluating AI vendor data handling practices, with specific attention to whether client data could enter model training pipelines;
- implementing technical controls, such as data loss prevention systems, that stop sensitive client information from being transmitted to external AI services;
- establishing matter-specific AI usage policies that restrict AI tool use based on the confidentiality sensitivity of each engagement; and
- considering on-premises or private cloud deployment for the most sensitive applications.

Additionally, client engagement letters should be updated to disclose AI tool usage and to obtain informed consent for AI-assisted work product where professional conduct rules require such disclosure.
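A data loss prevention control of the kind described above might, at its simplest, redact sensitive identifiers before any text leaves the firm. The two patterns below are illustrative only; a production DLP system would also match against the firm's own client and matter lists.

```python
import re

# Hedged sketch of a DLP filter applied before text is sent to an external
# AI service. The two patterns are illustrative only; a production system
# would also match the firm's own client and matter identifier lists.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\bMatter No\.\s*\d+\b"), "[REDACTED-MATTER]"),
]

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern before transmission."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

safe = redact("Re: Matter No. 4821, client SSN 123-45-6789, draft attached.")
```

Redaction at the boundary complements, rather than replaces, the vendor-contract and deployment-tier controls discussed earlier.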