examples-auto-run


Run Python examples in auto mode with logging, rerun helpers, and background control.

Install

mkdir -p .claude/skills/examples-auto-run && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2591" && unzip -o skill.zip -d .claude/skills/examples-auto-run && rm skill.zip

Installs to .claude/skills/examples-auto-run

About this skill

What it does

  • Runs uv run examples/run_examples.py with:
    • EXAMPLES_INTERACTIVE_MODE=auto (auto-input/auto-approve).
    • Per-example logs under .tmp/examples-start-logs/.
    • Main summary log path passed via --main-log (also under .tmp/examples-start-logs/).
    • A rerun list of failures, written to .tmp/examples-rerun.txt when --write-rerun is set.
  • Provides start/stop/status/logs/tail/collect/rerun helpers via run.sh.
  • Background option keeps the process running with a pidfile; stop cleans it up.
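Conceptually, the start helper assembles a runner invocation along these lines (a minimal Python sketch; build_start_cmd and the timestamped log-file naming are illustrative assumptions, not the actual run.sh internals):

```python
import time
from pathlib import Path

LOG_DIR = Path(".tmp/examples-start-logs")

def build_start_cmd(extra_args=()):
    """Sketch: assemble the documented runner invocation.

    The --main-log value and --write-rerun flag mirror the behavior
    described above; the timestamp format is an assumption.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    main_log = LOG_DIR / f"main_{stamp}.log"
    return ["uv", "run", "examples/run_examples.py",
            "--main-log", str(main_log),
            "--write-rerun",
            *extra_args]
```

For example, build_start_cmd(["--filter", "basic"]) forwards the same extra args that run.sh start passes through to run_examples.py.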

Usage

# Start (auto mode; interactive included by default)
.agents/skills/examples-auto-run/scripts/run.sh start [extra args to run_examples.py]
# Examples:
.agents/skills/examples-auto-run/scripts/run.sh start --filter basic
.agents/skills/examples-auto-run/scripts/run.sh start --include-server --include-audio

# Check status
.agents/skills/examples-auto-run/scripts/run.sh status

# Stop running job
.agents/skills/examples-auto-run/scripts/run.sh stop

# List logs
.agents/skills/examples-auto-run/scripts/run.sh logs

# Tail latest log (or specify one)
.agents/skills/examples-auto-run/scripts/run.sh tail
.agents/skills/examples-auto-run/scripts/run.sh tail main_20260113-123000.log

# Collect rerun list from a main log (defaults to latest main_*.log)
.agents/skills/examples-auto-run/scripts/run.sh collect

# Rerun only failed entries from rerun file (auto mode)
.agents/skills/examples-auto-run/scripts/run.sh rerun

Defaults (overridable via env)

  • EXAMPLES_INTERACTIVE_MODE=auto
  • EXAMPLES_INCLUDE_INTERACTIVE=1
  • EXAMPLES_INCLUDE_SERVER=0
  • EXAMPLES_INCLUDE_AUDIO=0
  • EXAMPLES_INCLUDE_EXTERNAL=0
  • Auto-approvals in auto mode: APPLY_PATCH_AUTO_APPROVE=1, SHELL_AUTO_APPROVE=1, AUTO_APPROVE_MCP=1
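The override behavior amounts to a simple environment merge in which any variable the caller has already set wins over the default (a sketch; effective_settings is a hypothetical helper, not part of run.sh):

```python
DEFAULTS = {
    "EXAMPLES_INTERACTIVE_MODE": "auto",
    "EXAMPLES_INCLUDE_INTERACTIVE": "1",
    "EXAMPLES_INCLUDE_SERVER": "0",
    "EXAMPLES_INCLUDE_AUDIO": "0",
    "EXAMPLES_INCLUDE_EXTERNAL": "0",
    # auto-approvals applied in auto mode
    "APPLY_PATCH_AUTO_APPROVE": "1",
    "SHELL_AUTO_APPROVE": "1",
    "AUTO_APPROVE_MCP": "1",
}

def effective_settings(environ):
    """Caller-provided environment variables override the defaults."""
    return {k: environ.get(k, default) for k, default in DEFAULTS.items()}
```

So running `EXAMPLES_INCLUDE_SERVER=1 run.sh start` corresponds to effective_settings({"EXAMPLES_INCLUDE_SERVER": "1"}), flipping only that one setting.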

Log locations

  • Main logs: .tmp/examples-start-logs/main_*.log
  • Per-example logs (from run_examples.py): .tmp/examples-start-logs/<module_path>.log
  • Rerun list: .tmp/examples-rerun.txt
  • Stdout logs: .tmp/examples-start-logs/stdout_*.log
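Because main logs carry sortable timestamps in their names, "latest main log" (the default for collect and tail) can be resolved by name alone (a sketch under that naming assumption):

```python
from pathlib import Path

LOG_DIR = Path(".tmp/examples-start-logs")

def latest_main_log(log_dir=LOG_DIR):
    """Pick the newest main_*.log; timestamped names sort lexicographically."""
    logs = sorted(log_dir.glob("main_*.log"))
    return logs[-1] if logs else None
```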

Notes

  • The runner delegates to uv run examples/run_examples.py, which already writes per-example logs and supports --collect, --rerun-file, and --print-auto-skip.
  • start uses --write-rerun so failures are captured automatically.
  • If .tmp/examples-rerun.txt exists and is non-empty, invoking the skill with no args runs rerun by default.
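The no-args default described in the last note reduces to one check (a sketch; default_action is an illustrative name, not a run.sh function):

```python
from pathlib import Path

def default_action(rerun_file=".tmp/examples-rerun.txt"):
    """No-args invocation: rerun if the rerun list exists and is non-empty."""
    p = Path(rerun_file)
    return "rerun" if p.exists() and p.stat().st_size > 0 else "start"
```

Note that an empty rerun file behaves like a missing one, so a fully passing previous run falls through to a fresh start.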

Behavioral validation (Codex/LLM responsibility)

The runner does not perform any automated behavioral validation. After every foreground start or rerun, Codex must manually validate all exit-0 entries:

  1. Read the example source (and comments) to infer intended flow, tools used, and expected key outputs.
  2. Open the matching per-example log under .tmp/examples-start-logs/.
  3. Confirm the intended actions/results occurred; flag omissions or divergences.
  4. Do this for all passed examples, not just a sample.
  5. Report immediately after the run with concise citations to the exact log lines that justify the validation.
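A worklist for steps 1-4 can be scaffolded by pairing each passed module with its source file and per-example log (a sketch; the pass-list input and the 1:1 <module_path>.log naming are assumptions based on the log layout above):

```python
from pathlib import Path

LOG_DIR = Path(".tmp/examples-start-logs")

def validation_worklist(passed_modules, log_dir=LOG_DIR):
    """Pair each exit-0 example with its source and per-example log.

    Missing logs are surfaced explicitly so no passed example is
    silently skipped (all of them must be validated, not a sample).
    """
    worklist = []
    for module in passed_modules:
        log = log_dir / f"{module}.log"
        worklist.append({
            "source": module,
            "log": str(log),
            "log_found": log.exists(),
        })
    return worklist
```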
