LLabs Connect is LitigationLabs' internal platform. It houses the tools and systems the team uses to build, evaluate, and operate the product, from the eval dashboard and prompt editor to agent configuration and transcript review.

Documentation Index
Fetch the complete documentation index at: https://docs.litigationlabs.io/llms.txt
Use this file to discover all available pages before exploring further.
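By convention, an llms.txt index lists pages as markdown links, one per line. A minimal sketch of discovering pages from a fetched index might look like this — the parsing shape is an assumption based on that convention, and the sample entries and page URLs below are hypothetical, not real LLabs Connect pages:

```python
import re

def parse_llms_txt(text: str) -> dict[str, str]:
    """Extract markdown links ("[Title](url)") from an llms.txt index.

    Returns a mapping of page title -> URL.
    """
    links: dict[str, str] = {}
    for title, url in re.findall(r"\[([^\]]+)\]\(([^)]+)\)", text):
        links[title] = url
    return links

# Hypothetical excerpt in the llms.txt format (not the actual index contents)
sample = """# LLabs Connect
- [Eval Dashboard](https://docs.litigationlabs.io/eval-dashboard.md)
- [Prompt Editor](https://docs.litigationlabs.io/prompt-editor.md)
"""

index = parse_llms_txt(sample)
# index["Eval Dashboard"] -> "https://docs.litigationlabs.io/eval-dashboard.md"
```

In practice you would fetch https://docs.litigationlabs.io/llms.txt first, then parse the response body the same way to enumerate pages before drilling into any of them.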
What’s Inside
- Eval Dashboard — Run automated and human evaluation batches, track agent quality metrics, and generate RAG embeddings for agent improvement.
- Transcript Viewer — Browse courtroom session transcripts with user filtering, custom titles, and prompt version tracking.
- Prompt Editor — Edit and version agent prompts with diff viewing, draft/active/archived status, and hot-reload to production.
- Agent Configuration — Tune model overrides, temperature, intentional error rates, and objection type whitelists per agent.
- Embedding Atlas — Visualize evaluation ratings in 2D semantic space to spot quality patterns and outliers.