tbench

Terminal-Bench integration for Mux agent benchmarking and failure analysis

Install

mkdir -p .claude/skills/tbench && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8589" && unzip -o skill.zip -d .claude/skills/tbench && rm skill.zip

Installs to .claude/skills/tbench

About this skill

Terminal-Bench Integration

This directory contains the mux agent adapter for Terminal-Bench 2.0, using Harbor as the evaluation harness.

Quick Start

When a user asks to run tbench, assume they generally mean running it in CI via workflow_dispatch.

# Run full benchmark suite
make benchmark-terminal

# Run specific tasks
make benchmark-terminal TB_TASK_NAMES="hello-world chess-best-move"

# Run with specific model
make benchmark-terminal TB_ARGS="--agent-kwarg model_name=anthropic/claude-opus-4-5"

# Run on Daytona cloud (high parallelism)
TB_ENV=daytona TB_CONCURRENCY=48 make benchmark-terminal

Daytona Cloud Sandboxes

For faster benchmarks, use Daytona cloud sandboxes instead of local Docker:

# Set API key (get from https://app.daytona.io)
export DAYTONA_API_KEY="your-api-key"

# Run with 48 concurrent cloud sandboxes (~6x faster than local)
make benchmark-terminal TB_ENV=daytona TB_CONCURRENCY=48

# Run specific tasks on Daytona
make benchmark-terminal TB_ENV=daytona TB_CONCURRENCY=48 TB_TASK_NAMES="chess-best-move stockfish-elo"

Account limits (Tier 3): pool of 250 vCPU / 500 GB RAM. Most tasks require 1 vCPU / 2 GB RAM, with a few needing up to 4 vCPU / 8 GB RAM; even at 48 concurrent sandboxes, the worst case (48 × 4 vCPU = 192 vCPU) stays within the pool. Harbor automatically requests the correct per-task resources.

Speed comparison:

Environment   | Concurrency | Full suite time
--------------|-------------|----------------
Local Docker  | 4           | ~90 min
Daytona Cloud | 48          | ~10-15 min

Configuration

Environment Variables

  • TB_DATASET: Dataset to use (default: terminal-bench@2.0)
  • TB_CONCURRENCY: Number of concurrent tasks (default: 4)
  • TB_TIMEOUT: Global timeout in seconds (default: 1800 = 30 minutes)
  • TB_ENV: Environment to run in (local or daytona)
  • TB_TASK_NAMES: Space-separated task names to run (default: all tasks)
  • TB_ARGS: Additional arguments passed to harbor
  • MUX_RUN_ARGS: CLI flags passed directly to mux run inside the container (e.g., --thinking high --use-1m --budget 5.00). This is the primary mechanism for all mux run flags — avoids per-flag plumbing.
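
For example, these variables compose on a single invocation (task names and flag values here are illustrative):

# Two specific tasks, higher concurrency, custom model, extra mux flags
MUX_RUN_ARGS="--thinking high" \
  make benchmark-terminal \
  TB_CONCURRENCY=8 \
  TB_TASK_NAMES="hello-world chess-best-move" \
  TB_ARGS="--agent-kwarg model_name=anthropic/claude-sonnet-4-5"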

Timeout Handling

The benchmark uses a global timeout applied to all tasks. The default is 30 minutes (1800 seconds), which provides sufficient time for most tasks while catching genuinely stuck agents.

Design Rationale:

Based on analysis of Oct 30, 2025 nightly runs:

  • Longest successful task: blind-maze-explorer-algorithm.hard at 20 minutes
  • 95th percentile: ~15 minutes
  • Mean duration: ~6 minutes

The 30-minute default provides comfortable headroom for complex tasks without excessive wait times for failed attempts.

Override timeout:

# Run with 60 minute timeout for very complex tasks
TB_TIMEOUT=3600 make benchmark-terminal

# Run with shorter 10 minute timeout for quick iteration
TB_TIMEOUT=600 make benchmark-terminal TB_SAMPLE_SIZE=5

Note: We prefer global timeout defaults over per-task configuration to avoid complexity and maintenance burden. If you find tasks consistently timing out, increase TB_TIMEOUT rather than adding per-task configuration.

Agent Configuration

The agent adapter accepts a few Harbor kwargs (passed via --agent-kwarg):

  • model_name: Model to use (e.g., anthropic/claude-sonnet-4-5, openai/gpt-5-codex)
  • experiments: Experiments to enable, comma-separated (e.g., programmatic-tool-calling)

All other mux run CLI flags (thinking level, mode, runtime, budget, etc.) are passed via MUX_RUN_ARGS — no per-flag plumbing needed.

CI dispatch (primary method):

# Run with model, thinking, and 1M context
gh workflow run terminal-bench.yml \
  -f model_name=anthropic/claude-opus-4-6 \
  -f mux_run_args="--thinking xhigh --use-1m"

# Run with budget cap
gh workflow run terminal-bench.yml \
  -f model_name=anthropic/claude-opus-4-6 \
  -f mux_run_args="--thinking high --budget 5.00"

Local runs:

# Pass flags via MUX_RUN_ARGS env var
MUX_RUN_ARGS="--thinking high --use-1m" make benchmark-terminal

# Model and experiments via TB_ARGS
make benchmark-terminal TB_ARGS="--agent-kwarg model_name=openai/gpt-5-codex --agent-kwarg experiments=programmatic-tool-calling"

Results

Results are saved to runs/YYYY-MM-DD__HH-MM-SS/:

  • results.json: Aggregate results with pass/fail rates
  • run_metadata.json: Run configuration and metadata
  • <task-id>/: Per-task directories containing:
    • sessions/agent.log: Full agent execution log
    • sessions/agent.cast: Asciinema recording of agent session
    • sessions/tests.log: Test execution output
    • results.json: Per-trial results
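
For example, to poke at the most recent run locally (the task directory name below is illustrative; actual directories use the task ID):

# Inspect the latest run
LATEST_RUN=$(ls -td runs/*/ | head -1)
cat "$LATEST_RUN/results.json"                                  # aggregate pass/fail rates
less "$LATEST_RUN"/hello-world*/sessions/agent.log              # full agent log for one task
asciinema play "$LATEST_RUN"/hello-world*/sessions/agent.cast   # replay the agent session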

CI/CD Integration

Querying Results from BigQuery

Mux Terminal-Bench results are uploaded to BigQuery after CI runs. Query them via the bq CLI after authenticating with gcloud auth login and setting the project to mux-benchmarks.

Table: mux-benchmarks.benchmarks.tbench_results

Schema: run_id (STRING), task_id (STRING), model_name (STRING), thinking_level (STRING: off/low/medium/high), mode (STRING: plan/exec), dataset (STRING), experiments (STRING), passed (BOOL), score (FLOAT), n_input_tokens (INT), n_output_tokens (INT), github_run_id (INT), github_sha (STRING), ingested_at (TIMESTAMP).
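
For example, a pass-rate rollup by model and thinking level uses only columns from the schema above:

bq query --use_legacy_sql=false '
SELECT model_name, thinking_level,
       COUNTIF(passed) AS passes,
       COUNT(*) AS runs,
       ROUND(COUNTIF(passed) / COUNT(*), 3) AS pass_rate
FROM `mux-benchmarks.benchmarks.tbench_results`
WHERE passed IS NOT NULL
GROUP BY model_name, thinking_level
ORDER BY model_name, thinking_level
'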

See .github/workflows/terminal-bench.yml and .github/workflows/nightly-terminal-bench.yml for GitHub Actions integration.

Nightly workflow runs both Claude and GPT models on the full task suite, uploading results as artifacts.
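
To pull those artifacts locally, the standard gh CLI flow works (the run ID below is a placeholder):

# Find a recent nightly run and download its artifacts
gh run list --workflow nightly-terminal-bench.yml --limit 5
gh run download <run-id> --dir ./nightly-artifacts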

Leaderboard Submission

To submit mux results to the Terminal-Bench 2.0 leaderboard:

Step 1: Prepare Submission

The leaderboard computes pass@k from multiple attempts per task. Provide multiple runs so each becomes its own job folder inside the submission.

# Download latest 5 successful nightly runs (recommended for submission)
python3 benchmarks/terminal_bench/prepare_leaderboard_submission.py --n-runs 5

# Use specific run IDs (each becomes a separate job folder)
python3 benchmarks/terminal_bench/prepare_leaderboard_submission.py --run-id 111 222 333 444 555

# Use multiple existing artifact directories
python3 benchmarks/terminal_bench/prepare_leaderboard_submission.py --artifacts-dir ./run1 ./run2

# Download latest single run (quick iteration)
python3 benchmarks/terminal_bench/prepare_leaderboard_submission.py

# Only prepare specific models
python3 benchmarks/terminal_bench/prepare_leaderboard_submission.py --n-runs 5 --models anthropic/claude-opus-4-5

This creates a properly structured submission folder at leaderboard_submission/ containing:

submissions/terminal-bench/2.0/Mux__<model>/
  metadata.yaml       # Agent and model info
  <job-folder-1>/     # Results from run 1
    config.json
    result.json
    <trial-1>/
      config.json
      result.json
      agent/
      verifier/
    ...
  <job-folder-2>/     # Results from run 2
    ...

Step 2: Submit via HuggingFace Python API

The hf upload CLI tends to time out on large submissions due to LFS file handling. Use the Python API with an extended timeout instead:

# Install huggingface_hub (via uv or pip)
pip install huggingface_hub

# Authenticate (one-time setup)
hf auth login

Then run the upload from a Python script or REPL:

import httpx
from huggingface_hub import HfApi
from huggingface_hub.utils import configure_http_backend

configure_http_backend(
    backend_factory=lambda: httpx.Client(timeout=httpx.Timeout(300.0, connect=60.0))
)

api = HfApi()
api.upload_folder(
    repo_id="alexgshaw/terminal-bench-2-leaderboard",
    folder_path="./leaderboard_submission/submissions",
    path_in_repo="submissions",
    repo_type="dataset",
    create_pr=True,
    commit_message="Add Mux + <Model> submission",
    commit_description="- Agent: Mux (Coder)\n- Model: <model>\n- <N> tasks × <K> attempts",
)

The PR will be automatically validated by the leaderboard bot. Once merged, results appear on the leaderboard.

Tips from past submissions:

  • The prepare script already strips *.log files (they trigger HF LFS and cause timeouts)
  • --artifacts-dir accepts raw job folders directly (e.g., an extracted tarball root)
  • To update an existing PR, pass revision="refs/pr/<N>" instead of create_pr=True
  • To remove stale files from a PR, use api.delete_folder(..., revision="refs/pr/<N>")
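
Putting the last two tips together, updating an open PR might look like this (the PR number and stale folder path are hypothetical):

from huggingface_hub import HfApi

api = HfApi()
# Drop a stale folder from the PR branch, then re-upload the refreshed submission
api.delete_folder(
    path_in_repo="submissions/terminal-bench/2.0/Mux__old-model",
    repo_id="alexgshaw/terminal-bench-2-leaderboard",
    repo_type="dataset",
    revision="refs/pr/123",
)
api.upload_folder(
    repo_id="alexgshaw/terminal-bench-2-leaderboard",
    folder_path="./leaderboard_submission/submissions",
    path_in_repo="submissions",
    repo_type="dataset",
    revision="refs/pr/123",
    commit_message="Update Mux submission",
)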

Files

  • mux_agent.py: Main agent adapter implementing Harbor's BaseInstalledAgent interface
  • mux-run.sh: Shell script that sets up environment and invokes mux CLI
  • mux_payload.py: Helper to package mux app for containerized execution
  • mux_setup.sh.j2: Jinja2 template for agent installation script
  • prepare_leaderboard_submission.py: Script to prepare results for leaderboard submission
  • analyze_failure_rates.py: Analyze failure rates to find optimization opportunities
  • download_run_logs.py: Download and inspect raw agent logs from nightly runs

Comparative Failure Analysis Workflow

When investigating why Mux fails on a task more often than other agents do, consider this workflow:

1. Identify High-Priority Failures

# Find tasks where Mux underperforms (high M/O ratio = Mux fails more than others)
python benchmarks/terminal_bench/analyze_failure_rates.py --top 20

2. Check BigQuery for Failure Patterns

# Authenticate and set project
gcloud auth login && gcloud config set project mux-benchmarks

# Query pass/fail by model for a specific task (task_id carries a __hash suffix, stripped by the regex below)
bq query --use_legacy_sql=false '
SELECT model_name, passed, COUNT(*) as runs
FROM `mux-benchmarks.benchmarks.tbench_results`
WHERE REGEXP_REPLACE(task_id, r"__[a-zA-Z0-9]+$", "") = "TASK_NAME_HERE"
  AND github_workflow = "Nightly Terminal-Bench"
  AND passed IS NOT NULL
GROUP BY model_name, passed
ORDER BY model_name, passed
'

3. Download and Inspect Agent Logs

# List recent nightly runs
python

---

*Content truncated.*
