Principles & practices for AI use in courts
A guide for responsible use of generative AI
Generative AI is reshaping the justice system, offering tools for text generation, document analysis, and task automation. This guide from the Thomson Reuters Institute/NCSC AI Policy Consortium for Law and Courts outlines ethical principles for responsible AI use, emphasizing the need for judges, court administrators, and legal professionals to understand the technology and use it competently.
Who should read this?
- Judges & court administrators: Understand how AI can be implemented and learn about ethical considerations and potential risks
- IT leaders & policymakers: Develop AI standards, ethical frameworks, and risk mitigation strategies
- Court staff & legal professionals: Learn about AI's potential benefits and how your court can use it ethically and responsibly
Why this guide matters
Generative AI (GenAI) is transforming how courts work, but it is a tool that must be used with care. A clear understanding of the risks, limitations, and ethical best practices is vital for unlocking the vast potential of AI in modernizing your court.
Strategies for implementing AI
Start small
Begin with a measured approach that mitigates risk to core court functions, and then gradually expand AI use with regular evaluations.
Set clear policies
Your court should have written policies that define acceptable uses for AI and how to respond if things go wrong.
Find the why
Adopt AI to solve a defined problem rather than for its own sake, and confirm that all associated risks have been considered and accounted for.
Conduct regular reviews
The rapid evolution of AI demands regular checkups to make sure AI aligns with your court's values and objectives.
Key foundations for the ethical use of AI
The benefits of integrating AI into your court's daily operations come with the responsibility of ensuring the technology is used ethically. The level of human oversight required depends on the specific use: minimal-risk uses call for a "human-on-the-loop" to monitor processes and outcomes, while high-risk uses require a "human-in-the-loop" who is actively involved in training and guidance and can intervene directly when needed.
Use as a valuable assistant
AI can be a valuable assistant, but it should never replace human judgment and should be used with human supervision.
Review for accuracy
All AI-generated content should be reviewed for accuracy.
Safeguard sensitive data
Your AI tools must safeguard sensitive data, comply with security protocols, and never compromise confidentiality.
Be transparent
Your use of AI should be transparent to the public, and clear records of AI use should be kept (one possible record format is sketched after this list).
Conduct regular evaluations
Regular evaluations are vital to detect the potential for bias in your AI, especially when it comes to high-stakes legal decisions.
Ongoing education
All court staff need ongoing education to keep up with changes in AI capabilities and risks to ensure ethical use.
Prevent plagiarism
Any content generated by AI should be reviewed to prevent unintentional plagiarism.
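To make the record-keeping called for above concrete, here is a minimal sketch of how a court might log each AI-assisted task for later review or public reporting. This is a hypothetical illustration, not part of the Consortium's principles; the AIUsageRecord fields and the log_ai_use helper are assumptions chosen for this example.

```python
# Hypothetical sketch: a minimal audit record for AI-assisted court work.
# Field names and structure are illustrative assumptions, not a standard.
import csv
import datetime
from dataclasses import dataclass, asdict

@dataclass
class AIUsageRecord:
    timestamp: str        # when the AI tool was used
    tool: str             # which AI tool or model was used
    task: str             # e.g., "summarize filing", "draft correspondence"
    reviewed_by: str      # the human who verified the output
    output_adopted: bool  # whether the AI output was used after review

def log_ai_use(record: AIUsageRecord, path: str = "ai_usage_log.csv") -> None:
    """Append one AI-usage record to a CSV file for later reporting."""
    row = asdict(record)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:  # write a header row only for a brand-new file
            writer.writeheader()
        writer.writerow(row)

log_ai_use(AIUsageRecord(
    timestamp=datetime.datetime.now().isoformat(),
    tool="example-genai-model",
    task="summarize public docket entry",
    reviewed_by="Clerk A. Example",
    output_adopted=True,
))
```

A plain CSV keeps the log inspectable without special tooling; a court could just as easily route these records into its existing case-management system.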
Balancing the risk of AI in your court
Your court's AI use should consider four key risk areas, each of which demands a different level of human involvement; a sketch of how these tiers might map to oversight rules follows the list.
Minimal risk
Routine AI tools, such as those used in word processing, need only supervisory oversight from a human-on-the-loop.
Moderate risk
Tasks such as having AI draft opinions or conduct research require verification and quality checks from a human-in-the-loop.
High risk
Any AI output affecting legal rights, such as risk predictions, demands significant review and decision-making from a human-in-the-loop.
Unacceptable risk
AI should never be used to automate decisions on life, incarceration, family, or housing matters.
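To show how these tiers could translate into day-to-day operating rules, the sketch below maps each risk level to the oversight it requires. The tier names follow the list above, but the RiskTier enum, the REQUIRED_OVERSIGHT table, and the oversight_for helper are hypothetical illustrations, not a prescribed implementation.

```python
# Hypothetical sketch mapping the four risk tiers above to oversight rules.
# The enum values and the mapping are illustrative assumptions, not a standard.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # routine tools, e.g., word processing
    MODERATE = "moderate"          # drafting or research assistance
    HIGH = "high"                  # output affecting legal rights
    UNACCEPTABLE = "unacceptable"  # life, incarceration, family, housing

REQUIRED_OVERSIGHT = {
    RiskTier.MINIMAL: "human-on-the-loop: monitor processes and outcomes",
    RiskTier.MODERATE: "human-in-the-loop: verify and quality-check output",
    RiskTier.HIGH: "human-in-the-loop: significant review and final decision",
    RiskTier.UNACCEPTABLE: "do not use AI; a human must make the decision",
}

def oversight_for(tier: RiskTier) -> str:
    """Return the oversight rule required for a given risk tier."""
    return REQUIRED_OVERSIGHT[tier]

# Example: any output affecting legal rights demands a human-in-the-loop.
assert oversight_for(RiskTier.HIGH).startswith("human-in-the-loop")
```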
Understanding common risk areas
Our experts have identified the top three risk areas your court should focus on when using AI. Read our complete set of principles to explore these risks in depth.
TRI/NCSC AI Policy Consortium
An intensive examination of the impact of technologies such as generative AI (GenAI), large language models, and other emerging and yet-to-be-developed tools.