AI evidence in jury trials: Authenticity, admissibility, and roles of the court and juries

1:00 pm – 2:00 pm ET

As artificial intelligence (AI) technologies become more prevalent, state courts are increasingly encountering AI-generated evidence, such as deepfakes, reconstructed videos, and synthetic documents. Join the TRI/NCSC AI Policy Consortium to explore the evidentiary challenges posed by both acknowledged and unacknowledged AI-generated evidence in jury trials. Panelists will examine existing legal frameworks for authentication and admissibility and provide practical guidance for judges and attorneys. Special attention will be paid to the unique responsibilities of courts and juries in evaluating the authenticity and reliability of AI-generated evidence, as well as the potential need for updated jury instructions and judicial guidance.

Learning objectives:

1. Identify the distinctions between acknowledged (disclosed) and unacknowledged (potentially deceptive) AI-generated evidence and recognize examples (e.g., deepfake audio/video, synthetic documents) that may be presented to a jury. 

2. Understand the existing rules of evidence regarding authentication (e.g., FRE 901, 902, 104) and explain how these rules apply to AI-generated evidence, including the role of extrinsic evidence and the allocation of responsibility between judges and juries.

3. Assess the challenges and risks AI evidence presents to juries.

Register today

Moderator: 

  • Megan Carpenter, dean and professor of law, University of New Hampshire Franklin Pierce School of Law 

Panelists:

  • Maura R. Grossman, research professor and eDiscovery lawyer/consultant/expert/special master
  • Jawwaad Johnson, director of the Center of Jury Studies and principal court management consultant, NCSC
  • Judge Erica Yew, Santa Clara County (Calif.) Superior Court

For more information, contact Keeley Daye. 

TRI/NCSC AI Policy Consortium

An intensive examination of the impact of technologies such as generative AI (GenAI), large language models, and other emerging and yet-to-be-developed tools.