
Request for Information (RFI): AI Solution for Appellate Case Categorization – Fifth Court of Appeals, Texas

Introduction & background

The Court of Appeals for the Fifth District of Texas ("the Court") is seeking information from qualified solution providers regarding an AI-supported system to assist with case categorization into No Oral Argument (NORA) and Oral Argument (ORA) tracks.

This initiative is part of a larger judicial modernization effort. While the immediate scope is limited to NORA categorization, the Court envisions this as the foundation for broader AI-supported applications in the future. The Court intends to select a solution in the fall, enter into a contract by the end of the calendar year, and have the solution deployed for training no later than May 31, 2026.

The Court emphasizes:

  • Judicial authority will remain with judges. AI will support administrative efficiency but will not replace judicial decision-making.
  • Human-in-the-loop oversight will be required for all AI outputs.
  • Compliance with Texas laws, including the Texas Responsible AI Governance Act (TRAIGA) and other applicable state and federal requirements, is mandatory.

Objectives of the RFI

  • Assess available AI solutions capable of reliably and accurately classifying appellate cases (NORA vs. ORA).
  • Identify solution providers with relevant experience in AI, NLP, and judicial/public-sector technology.
  • Understand potential solution architectures, integration pathways, and data handling practices.
  • Gather information on solution provider approaches to responsible AI (explainability, transparency, fairness, bias mitigation, and accountability).

Functional requirements

Solution providers should describe how their solution would address:

  1. Case categorization
    1. Automated classification of appellate cases into NORA vs. ORA categories, based on historical case data and judicially relevant factors.
    2. Ability to refine and validate outputs against court-provided benchmarks.
  2. Memo generation
    1. Automated generation of case summaries to assist in judicial panel review.
    2. Customizable formats for summaries (length, organization, issue-spotting, case citations).
  3. Workflow support
    1. Role-based user interface for justices, general counsel, interns, and staff.
    2. Sandbox: a secure, isolated environment for reviewing and testing new models, version selections, prompts, configurations, and workflows without affecting the deployed environment.
    3. Tracking and audit trail of AI model source and version, AI input prompt(s), AI-generated responses and recommendations, human reviews, and final decisions.
    4. Tracking and audit trail of token usage and related costs, including a dashboard with the ability to export detailed reports (an illustrative record sketch follows this list).
  4. Human oversight
    1. Features enabling human review, edits, overrides, and final decision authority.
    2. Logging of all user interactions for accountability.
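
To illustrate the kind of audit-trail record contemplated in items 3.3 and 3.4 above, the following Python sketch shows one possible record structure. The field names are hypothetical and offered only as an illustration, not a prescribed schema.

    # Minimal sketch of a single audit-trail record (hypothetical field names).
    # It captures the elements listed in Functional requirements 3.3 and 3.4:
    # model source/version, prompt, AI output, human review, final decision,
    # and token usage for cost tracking.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class CategorizationAuditRecord:
        case_number: str                      # court case identifier
        model_source: str                     # provider / hosting endpoint used
        model_version: str                    # exact model version used
        prompt: str                           # full input prompt sent to the model
        ai_response: str                      # raw AI-generated response
        ai_recommendation: str                # "NORA" or "ORA"
        tokens_in: int                        # prompt tokens consumed
        tokens_out: int                       # completion tokens consumed
        estimated_cost_usd: float             # usage-based cost of this call
        reviewer: Optional[str] = None        # human reviewer (role-based identity)
        reviewer_notes: Optional[str] = None  # edits, overrides, comments
        final_decision: Optional[str] = None  # decision recorded by the Court
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )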

Technical requirements

  1. Deployment environment
    1. Must operate within Microsoft Azure Government Cloud.
    2. Web-based application accessible via court infrastructure.
    3. Initial scope excludes integration with the TAMES case management system (but should allow for potential future integration).
    4. Must support Single Sign-On (SSO) and integrate with the existing Active Directory (AD) environment for centralized authentication and account management.
  2. Data handling
    1. All documents will be redacted PDFs provided by the Court.
    2. Data security requirements: encryption at rest and in transit, role-based access, and audit logs.
    3. Compliance with the Texas Identity Theft Enforcement and Protection Act, the Texas Cybersecurity Act, and related statutes.
  3. Performance
    1. Support for at least 100 concurrent users.
    2. Availability target: 99.5% during business hours (see the illustrative downtime calculation after this list).
    3. Ability to scale with increasing case and user volume.
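
For reference, the sketch below works out the downtime budget implied by the 99.5% business-hours availability target in item 3.2. The business-hours definition used here (9 hours per weekday) is an assumption made only for this illustration; this RFI does not define business hours.

    # Illustrative downtime budget for a 99.5% availability target, assuming
    # business hours of 8 a.m.-5 p.m., Monday through Friday (an assumption;
    # this RFI does not define business hours).
    BUSINESS_HOURS_PER_DAY = 9
    BUSINESS_DAYS_PER_WEEK = 5
    WEEKS_PER_MONTH = 52 / 12            # about 4.33
    AVAILABILITY_TARGET = 0.995

    monthly_business_hours = (
        BUSINESS_HOURS_PER_DAY * BUSINESS_DAYS_PER_WEEK * WEEKS_PER_MONTH
    )
    allowed_downtime_minutes = monthly_business_hours * 60 * (1 - AVAILABILITY_TARGET)

    print(f"Business hours per month: {monthly_business_hours:.0f}")
    print(f"Allowed downtime per month: {allowed_downtime_minutes:.0f} minutes")
    # Roughly 195 business hours per month, or about 59 minutes of
    # allowable downtime during those hours.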

AI-specific requirements

Solution Providers must provide information on:

  • Transparency & explainability: Ability to document and explain model outputs, including training data sources, model design, and decision factors.
  • Bias & accuracy: Processes for validating accuracy, performing bias audits, and supporting contestability of outputs.
  • Model drift & updates: How updates or retraining are managed, disclosed, and tested. Additionally, solution providers should address:
    • Redundancy and resilience: Strategies to ensure continuity of service and consistent performance, including fallback mechanisms or redundant model configurations that can be activated in case of primary model failure or degradation (see the illustrative sketch after this list).
    • Model portability: Ability to transition to alternative models or platforms in the event that the current model is deprecated or discontinued, including data portability, compatibility with other foundation models, and mitigation plans to avoid service disruption.
  • Risk mitigation: Safeguards against automation bias, confirmation bias, or undue reliance.
  • Compliance with TRAIGA: Confirm alignment with Texas prohibitions (e.g., no "social scoring," no AI outputs that infringe constitutional rights) and readiness for oversight.
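
As a concrete but non-prescriptive illustration of the redundancy and model-portability expectations above, the sketch below shows one way a primary and a fallback model could sit behind a common calling interface so that application code does not change when models are swapped. The function names and wiring are hypothetical placeholders and do not refer to any specific product or API.

    # Minimal sketch of a primary/fallback model configuration
    # (hypothetical interfaces; no specific vendor API is implied).
    from typing import Callable, Optional, Sequence

    ModelCall = Callable[[str], str]   # takes a prompt, returns a response

    def classify_with_fallback(prompt: str, models: Sequence[ModelCall]) -> str:
        """Try each configured model in order; raise only if all fail.

        A production system would also record which model answered (for the
        audit trail) and alert staff whenever a fallback is activated.
        """
        last_error: Optional[Exception] = None
        for model in models:
            try:
                return model(prompt)
            except Exception as exc:       # degraded or unavailable model
                last_error = exc
        raise RuntimeError("All configured models failed") from last_error

    def primary_model(prompt: str) -> str:
        raise NotImplementedError("call the primary hosted model here")

    def fallback_model(prompt: str) -> str:
        raise NotImplementedError("call the redundant/alternative model here")

    # Example wiring (both stubs must be implemented before use):
    # result = classify_with_fallback(case_text, [primary_model, fallback_model])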

Data ownership & intellectual property

  • The Court must retain full ownership of all data (documents, metadata, outputs, and derived data).
  • Solution Providers must disclose:
    • Use of data for training beyond Court purposes.
    • Any subcontractors or third-party components.
  • Intellectual property created specifically for the Court must be clearly licensed or assignable to the Court.

Security, privacy & compliance

  • Encryption standards (in transit and at rest).
  • Role-based access controls.
  • Compliance certifications (e.g., SOC 2, CJIS if applicable).
  • Incident response plan and breach notification procedures (notification within 24 hours).

Solution provider experience & qualifications

  • Prior experience with AI and prior work in judicial, legal, or public-sector environments are preferred but not mandatory. The Court welcomes responses from solution providers across industries who can demonstrate the technical capability and adaptability to meet the unique needs of a judicial setting.
  • Examples of relevant deployments (case studies, references).
  • Training and onboarding programs for judges, clerks, and IT staff.
  • Demonstrated commitment to ongoing support and development.

Cost transparency

Solution Providers must provide clear and complete pricing information. Responses should include:

  1. Development costs
    1. Provide a breakdown of one-time implementation and development costs.
    2. The Court requires a "Not to Exceed" ceiling on all development costs, to be negotiated at contract award.
  2. Operational costs
    1. Provide details on ongoing costs, including licensing, hosting, maintenance, and support.
    2. Solution Providers must specifically disclose any usage-based costs tied to AI processing (e.g., token usage, API calls, compute cycles, storage). An illustrative cost calculation follows this list.
    3. The Court requires a "Not to Exceed" ceiling on all usage-based AI costs to ensure budget predictability.
  3. Transparency & breakdown
    1. Itemize costs for training, onboarding, and support.
    2. Identify optional services vs. required costs.
    3. Provide a 3–5 year total cost of ownership estimate.
  4. Flexibility
    1. Solution Providers should describe pricing model options (fixed-fee, subscription, usage-based) and confirm willingness to contract under a not-to-exceed framework.
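
To illustrate the usage-based cost arithmetic and "Not to Exceed" tracking described in item 2 above, the sketch below uses placeholder per-token rates and a placeholder monthly ceiling; the figures are not quoted prices and carry no contractual meaning.

    # Illustrative usage-based cost estimate checked against a monthly
    # "Not to Exceed" ceiling. All rates and the ceiling are placeholders.
    RATE_PER_1K_INPUT_TOKENS = 0.005     # placeholder USD rate
    RATE_PER_1K_OUTPUT_TOKENS = 0.015    # placeholder USD rate
    MONTHLY_NTE_CEILING_USD = 2_000.00   # placeholder ceiling

    def call_cost(tokens_in: int, tokens_out: int) -> float:
        """Estimate the usage-based cost of one AI call."""
        return (
            (tokens_in / 1000) * RATE_PER_1K_INPUT_TOKENS
            + (tokens_out / 1000) * RATE_PER_1K_OUTPUT_TOKENS
        )

    def within_ceiling(month_to_date_spend: float, next_call_cost: float) -> bool:
        """True if the next call keeps spending under the monthly ceiling."""
        return month_to_date_spend + next_call_cost <= MONTHLY_NTE_CEILING_USD

    # Example: a 6,000-token prompt with a 1,500-token response
    cost = call_cost(6_000, 1_500)       # 0.03 + 0.0225 = 0.0525 USD
    print(f"Estimated cost of this call: ${cost:.4f}")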

Delivery

  • Solution provider must commit to entering into a contract by the end of the calendar year.
  • Solution must be developed, tested, and available for deployment no later than May 31, 2026.
  • Solution provider must support training and refinement of solution between June 1 and Aug. 30, 2026.

Response format

Solution Providers should structure their responses as follows:

  1. Cover letter & company overview
  2. Solution description (functional & technical)
  3. AI-specific practices (explainability, bias, risk management)
  4. Security & compliance approach
  5. Data ownership, privacy & IP terms
  6. Solution Provider experience & references (with contact information)
  7. Implementation approach & timeline
  8. Cost model & pricing transparency
  9. Appendices (case studies, compliance attestations, architecture diagrams)

RFI timeline

  • RFI release: Mid-September 2025
  • Solution Provider virtual information session: Sept. 29, 2025, from 10-11 a.m. CT
  • Solution Provider questions due: Oct. 3, 2025
  • Solution Provider responses due: Oct. 17, 2025
  • Evaluation period: Oct. 17–24, 2025 (solution providers may be asked to participate in a virtual product overview/demonstration during this period)
  • Selection Announcement: On or after Oct. 27, 2025

Optional pre-submission information session

The Court will host a virtual information session for interested solution providers on Monday, Sept. 29, 2025, from 10-11 a.m. Central Time. This session will provide an opportunity to ask questions and receive clarifications prior to the submission deadline. Attendance is optional but encouraged.

Register for the session

Registered participants will receive a Zoom meeting invitation upon confirmation. A summary of all questions and answers from the session will be published after Oct. 3, 2025.

Submission instructions

Responses should be submitted electronically in PDF format by Oct. 17, 2025, at 5 p.m. CT to Michael Navin at mnavin@ncsc.org.

Questions are due by Oct. 3, 2025, and can be sent to the email address listed above.

Acknowledgement of receipt will be provided within one business day. If you do not receive an acknowledgement, check your junk/spam folder and/or follow up by phone at 313.587.8361.

Note: This RFI is for informational purposes only. It is not a solicitation or guarantee of future procurement. The Court may use the information received to develop a Request for Proposal (RFP) or other procurement activity at its discretion.

Download the RFI announcement