A legal practitioner’s guide to AI & hallucinations
AI tools are rapidly transforming legal practice with promises of efficiency and cost savings. But as adoption accelerates, we're learning these tools are both powerful and prone to significant errors, including hallucinations.
AI hallucinations occur when a legal AI tool generates output that appears authentic but is fabricated or factually incorrect, such as invented case citations, distorted holdings, or false procedural information.
Who should read this
- Attorneys
- Paralegals
- Judicial officers
- Authorized justice practitioners
Why this guidance matters
AI tools are transforming legal work with the ability to scan millions of cases, statutes, and regulations in seconds. These systems use machine learning, natural language processing, and large language models trained on vast legal datasets to "understand" legal terminology and concepts within specific domains, surface insights, identify relationships, and generate content in response to user requests.
Beyond serving legal professionals, AI is expanding access to legal help for people navigating the legal system without an attorney. Chatbots and virtual assistants can prepare legal materials and assist with governmental filings, making verification of AI outputs even more critical.
Using AI carries both responsibilities and risks for legal professionals, who may be tempted to over-rely on AI output without adequate verification.
This guidance helps attorneys and other legal practitioners understand how generative AI works, what it does and does not do well, and how to use it responsibly.
Applications in legal products
- Document analysis and review
- Legal research
- Predictive analytics
- Contract lifecycle management
- E-filing automation
- Self-help chatbots
Challenges & hallucinations
An LLM's predictive nature means it generates text that sounds right rather than text that is right. That is what makes hallucinations dangerously convincing; the toy sketch after the list below illustrates the mechanism.
Hallucinations can appear as:
- Fabricated case names, statutes, or other legal authorities that do not exist
- Distorted or misrepresented facts, quotations, case holdings, analyses, or standards
- Unsupported propositions of law
- Falsified information about court procedures or filing requirements
- Blended legal concepts or standards, such as from different laws, jurisdictions, or contexts
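Why does a predictive model fabricate? A minimal, deliberately toy sketch may help: a language model chooses statistically plausible continuations of its context and never consults an authoritative source. Every case name, count, and citation below is invented for illustration.

```python
import random

# Toy next-token model: continuation frequencies "learned" from training
# text. Every name, count, and citation below is invented.
continuations = {
    "Smith v.": {"Jones,": 12, "United States,": 9, "State,": 7},
    "Jones,": {"123 F.3d 456": 5, "789 U.S. 101": 4, "42 N.E.2d 7": 3},
}

def next_token(context: str) -> str:
    """Sample a continuation in proportion to how often it followed the
    context in training data -- plausibility, not truth."""
    options = continuations[context]
    return random.choices(list(options), weights=list(options.values()))[0]

# The model assembles a fluent, authoritative-looking citation...
party = next_token("Smith v.")
reporter = next_token(party) if party in continuations else ""
print(f"Smith v. {party} {reporter}".strip())
# ...without ever consulting a case-law database, so the "case" it
# cites may not exist at all.
```

Real LLMs operate over vastly larger statistics, but the failure mode is the same: fluency is rewarded, and accuracy is never checked.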
Visit "A Legal Practitioner's Guide to AI and Hallucinations" to understand why this happens and identify ways to prevent it.
Best practices for safe use
Never trust, always verify
Check every citation, case, statute, rule, and claim.
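Some of this checking can be automated. The sketch below posts a passage to a public citation-lookup service and reports which citations resolve to a real case. CourtListener offers such an API, but the endpoint and response fields shown here are assumptions to be confirmed against the provider's current documentation, and the sample citation is invented. A "found" result is the start of verification, not the end: you still have to read the case.

```python
import requests

# Assumed endpoint for CourtListener's citation-lookup service; confirm
# the URL, parameters, and response shape in the current API docs.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_citations(text: str) -> None:
    """POST a block of text; report which citations resolve to a real
    case and which need manual follow-up."""
    resp = requests.post(LOOKUP_URL, data={"text": text}, timeout=30)
    resp.raise_for_status()
    for result in resp.json():
        found = bool(result.get("clusters"))  # assumed response field
        status = "found" if found else "NOT FOUND -- verify manually"
        print(f"{result.get('citation')}: {status}")

# Invented citation, exactly the kind of output that must be checked.
check_citations("As held in Smith v. Jones, 123 F.3d 456 (9th Cir. 1999) ...")
```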
Always maintain human judgment and expertise
Don't rely solely on AI tools; always question their results.
Implement systemic best practices
Adopt protocols to institutionalize safe practices.
Understand specific tool limitations
Know what the AI legal tool can and cannot do.
Consider risk ratings for AI tools
Match verification effort to low-, medium-, and high-risk use cases.
Implement technical safeguards
Use multiple AI tools or built-in system checks.
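One simple safeguard is cross-checking: pose the same question to two independent tools and escalate any disagreement to a human. The functions below are hypothetical stand-ins for whatever services your office actually uses, with invented answers so the sketch runs on its own.

```python
# Hypothetical stand-ins for two independent AI services; in practice
# each would call a different vendor's tool.
def ask_tool_a(question: str) -> str:
    return "The deadline to appeal is 30 days."  # invented answer

def ask_tool_b(question: str) -> str:
    return "The deadline to appeal is 14 days."  # invented answer

def cross_check(question: str) -> str:
    """Send the same question to two tools and flag disagreement for
    human review. Agreement is a sanity check, not proof of accuracy."""
    a, b = ask_tool_a(question), ask_tool_b(question)
    if a.strip().lower() != b.strip().lower():
        return f"DISAGREEMENT -- escalate to a human:\n  Tool A: {a}\n  Tool B: {b}"
    return a

print(cross_check("What is the deadline to appeal?"))
```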
Learn from mistakes
Endeavor to correct errors proactively.
Gain additional insights
Our working groups are continually examining new and emerging trends and issues. Visit our complete list of resources to see their latest guidance.
TRI/NCSC AI Policy Consortium for Law & Courts
An intensive examination of the impact of technologies such as generative AI (GenAI), large language models, and other emerging and yet-to-be-developed tools.
Explore more
Principles & practices for AI use in courts
The National Center for State Courts (NCSC) provides various resources and guides for court management, ranging from implementing AI and cybersecurity basics to improving civil justice accessibility and expediting family cases through the use of triage tools.
AI & the courts: Judicial and legal ethics issues
The article explores the intersection between artificial intelligence and the judiciary, discussing judicial and legal ethics issues such as courtroom technology, the role of artificial intelligence in reshaping the future of courts, and the implications of new developments for legal professionals and court operations.
AI foundations in the courts
Practical insights for building a solid foundation for responsible AI use in your court, encompassing privacy, bias, ethics, and accessibility.