
A legal practitioner’s guide to AI & hallucinations

AI tools are rapidly transforming legal practice with promises of efficiency and cost savings. But as adoption accelerates, we're learning these tools are both powerful and prone to significant errors, including hallucinations. 

AI hallucinations occur when legal AI tools generate output that appears authentic but is false: fabricated case citations, distorted holdings, or incorrect procedural information.

Who should read this

  • Attorneys 
  • Paralegals 
  • Judicial Officers 
  • Authorized justice practitioners 

Why this guidance matters

AI tools are transforming legal work with the ability to scan millions of cases, statutes, and regulations in seconds. These systems use machine learning, natural language processing, and large language models trained on vast legal datasets to "understand" legal terminology and concepts within their specific domains, provide insights, identify relationships, and generate content in response to a user's request.

Beyond serving legal professionals, AI is expanding access to legal help for people navigating the legal system without an attorney. Chatbots and virtual assistants can prepare legal materials and assist with governmental filings, making verification of AI outputs even more critical.

Using AI carries both responsibilities and risks for legal professionals, who may be tempted to rely on AI output without adequate verification.

This guidance helps attorneys and other legal practitioners understand how generative AI works, what it does and does not do well, and how to use it responsibly.

Download the guide

Applications in legal products

  1. Document analysis and review
  2. Legal research
  3. Predictive analytics
  4. Contract lifecycle management
  5. E-filing automation
  6. Self-help chatbots

Challenges & hallucinations

An LLM's predictive nature means it generates text that sounds right rather than text that is right. This can produce hallucinations that are dangerously convincing.

Hallucinations can appear as: 

  • Fabricated, non-existent case names, statutes, or legal authorities 
  • Distorted or misrepresented facts, quotations, holdings of cases, analysis, or standards 
  • Unsupported propositions of law 
  • Falsified information about court procedures or filing requirements 
  • Blended legal concepts or standards, such as from different laws, jurisdictions, or contexts 

Visit "A Legal Practitioner's Guide to AI and Hallucinations" to understand why this happens and identify ways to prevent it.

Best practices for safe use

Never trust, always verify

Check every citation, case, statute, rule, and claim.

Always maintain human judgment and expertise

Don't rely solely on AI tools and always question results.

Implement systemic best practices

Adopt protocols to institutionalize safe practices.

Understand specific tool limitations

Know what the AI legal tool can and cannot do.

Consider risk ratings for AI tools

Match verification effort to low-, medium-, and high-risk use cases.

Implement technical safeguards

Use multiple AI tools or built-in system checks.

Learn from mistakes

Endeavor to correct errors proactively.


Gain additional insights

Our working groups are continually examining new and emerging trends and issues. Visit our complete list of resources to see their latest guidance.
