Documentation Index
Fetch the complete documentation index at: https://docs.litigationlabs.io/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Agent prompts in LitigationLabs are managed through Payload CMS and deployed dynamically at runtime, with no code changes required. Every prompt execution is captured in Langfuse, giving you full visibility into what each agent received and how it responded.
Where Prompts Live
Payload CMS (agentConfigs collection)
Each agent has a configuration record in Payload that defines its behavior:
| Field | Purpose |
|---|---|
| name | Agent identifier (e.g., oca, judge, witness) |
| systemPrompt | The base prompt template |
| parameters.modelOverride | Which model to use for this agent |
| parameters.temperature | Creativity/randomness setting |
| parameters.intentionalErrorRate | How often the OCA makes pedagogical mistakes |
| parameters.objectionTypes | Whitelist of objection types the agent can use |
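As a rough sketch, an agentConfigs record might look like the following. Only the fields listed in the table above come from the source; the exact TypeScript types and the sample values are assumptions for illustration.

```typescript
// Hypothetical shape of an agentConfigs record; types and sample
// values are illustrative, not the actual Payload schema.
interface AgentConfig {
  name: string;           // agent identifier, e.g. "oca", "judge", "witness"
  systemPrompt: string;   // the base prompt template
  parameters: {
    modelOverride?: string;        // which model to use for this agent
    temperature?: number;          // creativity/randomness setting
    intentionalErrorRate?: number; // fraction of turns with pedagogical mistakes
    objectionTypes?: string[];     // whitelist of allowed objection types
  };
}

const ocaConfig: AgentConfig = {
  name: "oca",
  systemPrompt: "You are opposing counsel in a simulated courtroom...",
  parameters: {
    temperature: 0.7,
    intentionalErrorRate: 0.15,
    objectionTypes: ["hearsay", "leading", "relevance"],
  },
};
```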
Dynamic Prompt Construction
The base prompt from Payload is only part of what the agent sees. At runtime, prompt builders in lib/courtroom/prompts.ts inject dynamic context:
- Scenario metadata — Case title, phase, player side, OCA side.
- Witness data — Name, role, affidavit text, demeanor cues, elicits with weights.
- Transcript history — Recent events (last 6) for conversational memory.
- JSON schema instructions — Output format requirements per agent mode.
- Error rate instructions — What percentage of turns should include intentional mistakes.
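The assembly steps above can be sketched as a single builder function. This is a simplified stand-in for the real builders in lib/courtroom/prompts.ts; the interface shapes, field names, and output layout are assumptions.

```typescript
// Hedged sketch of runtime prompt assembly; the real implementation
// in lib/courtroom/prompts.ts will differ in shape and detail.
interface Scenario { caseTitle: string; phase: string; playerSide: string; ocaSide: string; }
interface Witness { name: string; role: string; affidavit: string; }
interface TranscriptEvent { speaker: string; text: string; }

function buildPrompt(
  base: string,                  // static base prompt from Payload
  scenario: Scenario,
  witness: Witness,
  transcript: TranscriptEvent[],
  errorRate: number,
): string {
  const recent = transcript.slice(-6); // last 6 events for conversational memory
  return [
    base,
    `Case: ${scenario.caseTitle} (phase: ${scenario.phase})`,
    `Player side: ${scenario.playerSide}; OCA side: ${scenario.ocaSide}`,
    `Witness: ${witness.name}, ${witness.role}`,
    `Affidavit: ${witness.affidavit}`,
    "Recent transcript:",
    ...recent.map((e) => `${e.speaker}: ${e.text}`),
    "Respond as JSON matching the schema for this agent mode.",
    `Include intentional mistakes in roughly ${Math.round(errorRate * 100)}% of turns.`,
  ].join("\n");
}
```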
Versioning and Deployment
Hot Reload
Agent configs are fetched with a 1-minute cache. When you edit a prompt in the Payload admin panel, the change takes effect within 60 seconds across all active sessions, with no deployment needed.
Prompt Editor
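A 1-minute cache like this can be sketched as a simple TTL map. The fetchConfigFromPayload parameter is a hypothetical stand-in for the real Payload query; only the 60-second TTL comes from the source.

```typescript
// Sketch of a 60-second config cache, assuming a pluggable fetcher.
const TTL_MS = 60_000;
const cache = new Map<string, { config: unknown; fetchedAt: number }>();

async function getAgentConfig(
  name: string,
  fetchConfigFromPayload: (name: string) => Promise<unknown>, // hypothetical
): Promise<unknown> {
  const hit = cache.get(name);
  // Serve from cache while the entry is younger than the TTL.
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) return hit.config;
  const config = await fetchConfigFromPayload(name);
  cache.set(name, { config, fetchedAt: Date.now() });
  return config;
}
```

Because entries expire rather than being invalidated, an edit in the admin panel propagates on the next fetch after the TTL lapses, which matches the "within 60 seconds" behavior described above.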
The internal Prompt Editor tool provides:
- Diff viewing — See exactly what changed between prompt versions.
- Status tracking — Draft, Active, or Archived status per version.
- Theme-aware syntax highlighting — Readable in both light and dark mode.
- Version history — Review and roll back to any previous version.
Deployment Flow
- Edit the prompt in Payload’s admin panel or the Prompt Editor.
- The change is stored in the database immediately.
- Within 1 minute, all new agent calls pick up the updated prompt.
- Every execution of the new prompt is traced in Langfuse with the full prompt text visible in the generation span.
Visibility in Langfuse
Every generation span in Langfuse shows the complete prompt that was sent to the model — both the static base from Payload and the dynamic context injected at runtime. This means you can:
- Audit any response — Read the exact instructions the agent received for a specific turn.
- Compare before and after — Find traces from before and after a prompt change to see the behavioral difference.
- Debug unexpected behavior — If an agent acts strangely, check whether the prompt context was constructed correctly.
Prompt Tuning
The prompt tuning system uses AI to suggest improvements based on low-rated responses:
- Select an agent config and gather examples of poorly-rated responses.
- The system sends the current prompt alongside the bad examples to a high-reasoning model.
- It returns a suggested revision that addresses the identified failure patterns.
- An admin reviews the suggestion and decides whether to apply it.
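The request sent to the high-reasoning model in the steps above might be assembled like this. The message format, rating threshold, and field names are assumptions for illustration, not the actual implementation.

```typescript
// Hedged sketch of building a prompt-tuning request from low-rated examples.
interface RatedResponse { response: string; rating: number; }

function buildTuningMessages(currentPrompt: string, examples: RatedResponse[]) {
  // Assumed threshold: treat ratings of 2 or below as "poorly rated".
  const bad = examples.filter((e) => e.rating <= 2);
  return [
    { role: "system", content: "You improve agent prompts based on failure examples." },
    {
      role: "user",
      content: [
        "Current prompt:",
        currentPrompt,
        "Low-rated responses:",
        ...bad.map((e, i) => `${i + 1}. ${e.response} (rating ${e.rating})`),
        "Suggest a revised prompt that addresses these failure patterns.",
      ].join("\n"),
    },
  ];
}
```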
Each tuning run is traced as prompt-tuning/generate in Langfuse, so you can review the reasoning behind each suggested change.