trulens-running-evaluations
Execute TruLens evaluations and view results
Install
```shell
mkdir -p .claude/skills/trulens-running-evaluations && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2367" && unzip -o skill.zip -d .claude/skills/trulens-running-evaluations && rm skill.zip
```
Installs to `.claude/skills/trulens-running-evaluations`
About this skill
TruLens Running Evaluations
Execute your configured evaluations and analyze results.
Prerequisites
Before running evaluations, ensure you have:
- Instrumented your app (see the `instrumentation` skill)
- Configured your feedback functions (see the `evaluation-setup` skill)
Instructions
Step 1: Wrap Your App with Feedbacks
Pass your configured feedbacks to the appropriate wrapper:
```python
from trulens.core import TruSession

session = TruSession()

# Use the wrapper that matches your framework
tru_app = YourWrapper(
    your_app,
    app_name="MyApp",
    app_version="v1",
    feedbacks=your_feedbacks,  # From evaluation-setup
)
```
| Framework | Wrapper |
|---|---|
| LangChain | TruChain |
| LangGraph | TruGraph |
| LlamaIndex | TruLlama / TruLlamaWorkflow |
| Custom | TruApp |
Step 2: Run Your App with Recording
Use the context manager to record traces and run evaluations:
```python
# Single query
with tru_app as recording:
    result = your_app.query("What is TruLens?")

# Multiple queries
test_queries = [
    "What is machine learning?",
    "How does RAG work?",
    "Explain transformers.",
]
with tru_app as recording:
    for query in test_queries:
        your_app.query(query)
```
Step 3: Wait for and View Results
Evaluations run asynchronously. Use `retrieve_feedback_results()` to wait for them to complete:
```python
# Wait for evaluations to complete and get results as a DataFrame
# The timeout parameter controls how long to wait (default: 180 seconds)
feedback_results = recording.retrieve_feedback_results(timeout=300)
print(feedback_results)

# For a single record:
single_record_results = recording[0].retrieve_feedback_results(timeout=300)

# View leaderboard summary across all records
print(session.get_leaderboard())

# Launch interactive dashboard
from trulens.dashboard import run_dashboard
run_dashboard(session)
```
Important: Do NOT use `time.sleep()` to wait for evaluations. The `retrieve_feedback_results()` method properly waits for:
- Records to be written to the database
- Feedback evaluations to complete
- Results to be available
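Once retrieved, the results are tabular: one score per (record, feedback function) pair. As a minimal sketch of the kind of post-processing you might do, here is a per-feedback mean computed over hypothetical rows (the column names and scores below are illustrative, not the exact TruLens DataFrame schema):

```python
# Hypothetical rows mimicking retrieved feedback results:
# each row pairs a feedback name with the score for one record.
rows = [
    {"feedback": "Groundedness", "score": 1.0},
    {"feedback": "Groundedness", "score": 0.5},
    {"feedback": "Answer Relevance", "score": 1.0},
    {"feedback": "Answer Relevance", "score": 1.0},
]

# Aggregate mean score per feedback function
totals: dict[str, list[float]] = {}
for row in rows:
    totals.setdefault(row["feedback"], []).append(row["score"])

means = {name: sum(scores) / len(scores) for name, scores in totals.items()}
print(means)  # {'Groundedness': 0.75, 'Answer Relevance': 1.0}
```

With the real `feedback_results` DataFrame, the equivalent is a pandas groupby-mean over the feedback name column.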
Common Patterns
Comparing App Versions
```python
# Version A
tru_v1 = TruLlama(query_engine_v1, app_name="MyRAG", app_version="v1", feedbacks=feedbacks)
with tru_v1 as recording:
    for q in test_queries:
        query_engine_v1.query(q)

# Version B
tru_v2 = TruLlama(query_engine_v2, app_name="MyRAG", app_version="v2", feedbacks=feedbacks)
with tru_v2 as recording:
    for q in test_queries:
        query_engine_v2.query(q)

# Compare on leaderboard (same app_name, different app_version)
print(session.get_leaderboard())
```
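Beyond eyeballing the leaderboard, you may want a per-metric delta between versions. A sketch using plain dicts (the real leaderboard is a pandas DataFrame keyed by app name and version; the metric names and values here are made up):

```python
# Hypothetical leaderboard rows: mean feedback scores per app version.
leaderboard = {
    ("MyRAG", "v1"): {"Groundedness": 0.70, "Answer Relevance": 0.80},
    ("MyRAG", "v2"): {"Groundedness": 0.85, "Answer Relevance": 0.78},
}

v1 = leaderboard[("MyRAG", "v1")]
v2 = leaderboard[("MyRAG", "v2")]

# Per-metric delta: positive means v2 improved on that feedback
delta = {metric: round(v2[metric] - v1[metric], 2) for metric in v1}
print(delta)  # {'Groundedness': 0.15, 'Answer Relevance': -0.02}
```

A mixed result like this (groundedness up, relevance slightly down) is common and is exactly why per-metric comparison beats a single aggregate score.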
Batch Evaluation with Test Dataset
```python
import pandas as pd

# Load test dataset
test_df = pd.read_csv("test_queries.csv")

with tru_app as recording:
    for _, row in test_df.iterrows():
        result = your_app.query(row["query"])
        # Optionally store results:
        # results.append({"query": row["query"], "response": result})
```
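If you do keep your own copy of the responses alongside the recorded traces, collecting them as rows and writing a CSV is enough for later inspection. A self-contained sketch using the standard library, with a stand-in function in place of your real app call (`fake_query` is hypothetical):

```python
import csv
import io

# Stand-in for your_app.query(); replace with your real app call.
def fake_query(q: str) -> str:
    return f"answer to: {q}"

test_queries = ["What is machine learning?", "How does RAG work?"]

# Collect (query, response) rows while iterating
rows = [{"query": q, "response": fake_query(q)} for q in test_queries]

# Persist for later inspection (StringIO here; use a file path in practice)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["query", "response"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```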
Evaluating with Ground Truth
```python
from trulens.core import Feedback
from trulens.feedback import GroundTruthAgreement

# Load ground truth dataset (see dataset-curation skill)
ground_truth_df = session.get_ground_truth("my_dataset")

# Add ground truth feedback
ground_truth = GroundTruthAgreement(ground_truth_df, provider=provider)
f_agreement = Feedback(ground_truth.agreement_measure, name="Ground Truth Agreement").on_input_output()

# Include with other feedbacks
all_feedbacks = your_feedbacks + [f_agreement]
```
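Ground truth agreement is only as good as the golden set behind it. A sketch of the row shape such a set typically takes, plus sanity checks worth running before wiring it into a feedback function (the field names `query`/`expected_response` follow TruLens's golden-set convention, but verify against your installed version's docs):

```python
# A hypothetical golden set: one expected answer per query.
golden_set = [
    {"query": "What is TruLens?", "expected_response": "An LLM evaluation library."},
    {"query": "How does RAG work?", "expected_response": "Retrieve, then generate."},
]

# Basic sanity checks before handing the set to GroundTruthAgreement
assert all({"query", "expected_response"} <= set(row) for row in golden_set)
assert len({row["query"] for row in golden_set}) == len(golden_set)  # no duplicate queries
print(f"golden set OK: {len(golden_set)} rows")
```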
Troubleshooting
| Issue | Solution |
|---|---|
| No evaluation results | Ensure feedbacks list is passed to wrapper |
| Missing context scores | Verify RETRIEVAL.RETRIEVED_CONTEXTS is instrumented |
| Agent metrics empty | Check that trace contains tool calls and reasoning |
| Dashboard not loading | Run `pip install trulens-dashboard`; check port 8501 |
| Feedback columns empty | Your root span must use `SpanType.RECORD_ROOT` for `.on_input()`/`.on_output()` to work. Use framework wrappers (TruGraph, TruChain), which handle this automatically |
| `PydanticForbiddenQualifier` error | Update to the latest TruLens version; this error occurs with Deep Agents/LangGraph apps that use `NotRequired` type annotations |
| Results not appearing | Use `recording.retrieve_feedback_results()` instead of `time.sleep()`; it properly waits for evaluations to complete |
Deep Agents / LangGraph Specific Issues
If evaluating a Deep Agent or LangGraph app:
- Use `TruGraph` instead of `TruApp` + manual instrumentation:

  ```python
  from trulens.apps.langgraph import TruGraph

  tru_agent = TruGraph(agent, app_name="DeepAgent", feedbacks=[...])
  ```

- Why? TruGraph automatically:
  - Creates `RECORD_ROOT` spans (required for `.on_input()`/`.on_output()`)
  - Captures all graph nodes and transitions
  - Handles LangGraph-specific data structures

- Common mistake: Using `@instrument(span_type=SpanType.AGENT)` instead of `RECORD_ROOT` will cause feedback selector shortcuts to fail silently.