trulens-evaluation-workflow
Systematically evaluate your LLM application with TruLens
Install
```shell
mkdir -p .claude/skills/trulens-evaluation-workflow && \
  curl -L -o skill.zip "https://mcp.directory/api/skills/download/1124" && \
  unzip -o skill.zip -d .claude/skills/trulens-evaluation-workflow && \
  rm skill.zip
```
Installs to `.claude/skills/trulens-evaluation-workflow`
About this skill
TruLens Evaluation Workflow
A systematic approach to evaluating your LLM application.
When to Use This Skill
Use this skill when you want to:
- Set up comprehensive evaluation for a new LLM app
- Improve an existing app's evaluation coverage
- Understand the full TruLens workflow
- Know which sub-skill to use for your current task
Required Questions to Ask User
Before implementing, always ask the user these questions:
1. App Type (determines instrumentation wrapper)
- What framework is your app built with? (LangChain, LangGraph/Deep Agents, LlamaIndex, Custom)
2. Evaluation Metrics (determines feedback functions)
Ask: "Which evaluation metrics would you like to use?"
| App Type | Recommended Metrics | Description |
|---|---|---|
| RAG | RAG Triad | Context Relevance, Groundedness, Answer Relevance |
| Agent | Agent GPA | Tool Selection, Tool Calling, Execution Efficiency, etc. |
| Simple | Answer Relevance | Basic input-to-output relevance check |
| Custom | Ask user | Let user describe what they want to evaluate |
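Conceptually, every metric in the table above is a feedback function: a callable that maps some part of an app record (input, output, retrieved context) to a score between 0 and 1. A dependency-free sketch of that shape, with illustrative names only (this is not the TruLens API; real feedback functions usually delegate the judgment to an LLM provider):

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: a feedback function scores part of a record on [0, 1].
FeedbackFn = Callable[[str, str], float]

@dataclass
class FeedbackResult:
    name: str
    score: float  # 0.0 (worst) to 1.0 (best)

def answer_relevance(query: str, answer: str) -> float:
    """Toy relevance check: fraction of query terms echoed in the answer.
    A real feedback function would ask an LLM judge instead."""
    terms = {t.lower() for t in query.split()}
    if not terms:
        return 0.0
    hits = sum(1 for t in terms if t in answer.lower())
    return hits / len(terms)

def evaluate(query: str, answer: str, fns: dict) -> list:
    """Run every feedback function against one (query, answer) pair."""
    return [FeedbackResult(name, fn(query, answer)) for name, fn in fns.items()]

results = evaluate(
    "What is TruLens?",
    "TruLens is an evaluation library.",
    {"answer_relevance": answer_relevance},
)
```

The same pattern generalizes: Groundedness takes (context, answer), Context Relevance takes (query, context), and so on.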
For Agents, also ask:
- Does your agent do explicit planning? (determines if Plan Quality/Adherence metrics apply)
3. Additional Metrics (optional)
- Do you want any additional evaluations? (Coherence, Conciseness, Harmlessness, custom metrics)
The Evaluation Workflow
┌─────────────────────────────────────────────────────────────────┐
│ TruLens Evaluation Workflow │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. INSTRUMENT 2. CURATE 3. CONFIGURE │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Capture data │ → │ Build test │ → │ Choose │ │
│ │ from your │ │ datasets │ │ metrics │ │
│ │ app │ │ │ │ │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ ↓ ↓ │
│ └─────────────────────┬─────────────────────┘ │
│ ↓ │
│ 4. RUN & ANALYZE │
│ ┌──────────────┐ │
│ │ Execute evals│ │
│ │ & iterate │ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Sub-Skills Reference
| Step | Skill | When to Use |
|---|---|---|
| 1. Instrument | instrumentation/ | Setting up a new app, adding custom spans, capturing specific data for evals |
| 2. Curate | dataset-curation/ | Creating test datasets, storing ground truth, ingesting external logs |
| 3. Configure | evaluation-setup/ | Choosing metrics (RAG triad vs Agent GPA), setting up feedback functions |
| 4. Run | running-evaluations/ | Executing evaluations, viewing results, comparing versions |
Interactive Workflow Guide
Answer these questions to find where to start:
Where are you in the process?
"I have a new LLM app that isn't instrumented yet"
→ Start with instrumentation/ skill
"My app is instrumented but I don't have test data"
→ Go to dataset-curation/ skill
"I have data but haven't set up evaluations"
→ Go to evaluation-setup/ skill
"Everything is set up, I just need to run evals"
→ Go to running-evaluations/ skill
What's your immediate goal?
"I want to see traces of my app's execution"
→ Use instrumentation/ - capture spans and view in dashboard
"I want to evaluate my RAG's retrieval quality"
→ Use evaluation-setup/ - configure RAG Triad metrics
"I want to evaluate my agent's tool usage"
→ Use evaluation-setup/ - configure Agent GPA metrics
"I want to compare two versions of my app"
→ Use running-evaluations/ - version comparison pattern
"I want to evaluate against known correct answers"
→ Use dataset-curation/ - create ground truth dataset
Quick Start Paths
Path A: Evaluate a RAG App
- Instrument → Wrap with `TruLlama` or `TruChain`
- Configure → Set up RAG Triad (context relevance, groundedness, answer relevance)
- Run → Execute queries and view leaderboard
Path B: Evaluate an Agent
- Instrument → Wrap with `TruGraph` (for LangGraph/Deep Agents)
- Configure → Set up Agent GPA metrics (or Answer Relevance for simple evals)
- Run → Execute tasks and analyze traces
Note: For LangGraph-based frameworks like Deep Agents, always use `TruGraph` rather than manual `@instrument()` decorators. `TruGraph` automatically creates the correct span types and captures all graph transitions.
Path C: Regression Testing
- Curate → Create ground truth test dataset
- Configure → Add ground truth agreement metric
- Run → Compare versions against test set
Path D: Production Monitoring
- Instrument → Add custom attributes for key data
- Configure → Set up metrics for production concerns
- Run → Continuously evaluate production traffic
Common Questions
"Do I need to use all four skills?" No. Instrumentation and evaluation-setup are essential. Dataset-curation is optional (for ground truth comparisons). Running-evaluations is needed to execute and view results.
"What order should I use them?" Generally: Instrument → (optionally) Curate → Configure → Run. But you can revisit any step as needed.
"Can I add more evaluations later?" Yes. You can always add new feedback functions and re-run evaluations on existing traces.
"How do I know if my app is a RAG or Agent?"
- RAG: Retrieves documents/context, generates grounded responses
- Agent: Uses tools, makes decisions, may involve planning
If your app does both (e.g., agentic RAG), use metrics from both categories.
Getting Help
If you're unsure which skill to use, describe your goal and I'll guide you to the right one.
Known Compatibility Notes
Deep Agents / LangGraph
- Always use `TruGraph` for LangGraph-based apps (including Deep Agents)
- The `.on_input()` and `.on_output()` feedback shortcuts require `RECORD_ROOT` spans
- Framework wrappers (`TruGraph`, `TruChain`) create these automatically
- Manual `@instrument(span_type=SpanType.AGENT)` will NOT work with selector shortcuts
Pydantic Compatibility
Some LangGraph/Deep Agents versions use `NotRequired` type annotations that older Pydantic versions can't handle. If you see `PydanticForbiddenQualifier` errors, update to the latest TruLens version.