create-environments


Create or migrate verifiers environments for the Prime Lab ecosystem. Use when asked to build a new environment from scratch, port an eval or benchmark from papers or other libraries, start from an environment on the Hub, or convert existing tasks into a package that exposes load_environment and installs cleanly with prime env install.

Install

mkdir -p .claude/skills/create-environments && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7578" && unzip -o skill.zip -d .claude/skills/create-environments && rm skill.zip

Installs to .claude/skills/create-environments

About this skill

Create Environments

Goal

Build production-quality verifiers environments that work immediately in the Prime ecosystem: install, load, evaluate, and train without hidden setup.

Start With Ecosystem Paths

  1. Prefer ecosystem-native setup before custom scaffolding.
  2. Use this default loop:
prime env init my-env
prime env install my-env
prime eval run my-env -m gpt-4.1-mini -n 5
  3. Treat prime eval run as the canonical eval path. It saves results automatically, so do not add --skip-upload unless the user explicitly requests that deviation.
  4. Prefer an existing environment as a starting point when possible:
prime env list --search "keyword"
prime env info owner/name
prime env install owner/name
  5. For repository examples, use repo install when available:
prime env install math-python --from-repo
  6. Encourage users to keep endpoint aliases in configs/endpoints.toml so smoke tests can switch models quickly.
  7. Ask users whether they want instruct or reasoning models for validation.
  8. Instruct-first smoke choices: gpt-4.1 series, qwen3 instruct series.
  9. Reasoning validation choices: gpt-5 series, qwen3 thinking series, glm series.
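
One way to make model switching cheap during smoke tests is a small alias file. The exact schema prime-rl expects may differ; the table layout and key names below (name, model, api_key_var) are an illustrative assumption, not the documented format:

```toml
# configs/endpoints.toml -- hypothetical alias layout
[[endpoints]]
name = "smoke"                   # alias used when invoking smoke evals
model = "gpt-4.1-mini"           # cheap instruct model for quick checks
api_key_var = "OPENAI_API_KEY"   # secret read from the environment

[[endpoints]]
name = "reasoning"
model = "gpt-5-mini"
api_key_var = "OPENAI_API_KEY"
```

Keeping aliases in one file means a validation run can move from instruct to reasoning models by changing a name rather than editing commands.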

Build Modes

1. Build From Scratch

  1. Define task contract first: prompt shape, allowed tools, stop conditions, rubric outputs, metrics.
  2. Select the smallest correct base class:
  • SingleTurnEnv for one-response tasks.
  • MultiTurnEnv for custom interaction loops.
  • ToolEnv or MCPEnv for stateless tools.
  • StatefulToolEnv for per-rollout resources.
  • CliAgentEnv for running agent binaries in sandboxes with API interception. Override get_sandbox_resources(state) for per-instance resources, build_env_vars(state) for custom env vars.
  • ComposableEnv (with TaskSet/SandboxTaskSet + Harness) for separating what to solve from how to solve it. Define a TaskSet (dataset, instructions, sandbox spec, rubric) and a Harness (install script, run command, system prompt), wire them together with zero subclassing. Use SandboxTaskSet when tasks need sandboxes with per-instance images/resources.
  3. Implement load_environment(...) -> vf.Environment with explicit arguments.
  4. Add pyproject.toml defaults in [tool.verifiers.eval] only when stable.
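
The deterministic reward checks that a rubric wires together are, at bottom, plain functions over the model's completion. A minimal sketch in pure Python so it stands alone; the boxed-answer convention and the function name are illustrative assumptions, not part of the verifiers API:

```python
import re

def boxed_answer_reward(completion: str, answer: str) -> float:
    """Deterministic check: extract the last \\boxed{...} span from the
    completion and compare it to the reference answer. Returns 1.0 or 0.0."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", completion)
    if not matches:
        return 0.0
    return 1.0 if matches[-1].strip() == answer.strip() else 0.0

# A load_environment(...) would typically register a function like this in
# its rubric so scoring stays reproducible across eval and training runs.
```

For example, boxed_answer_reward("thus \\boxed{42}", "42") scores 1.0, while a completion with no boxed span scores 0.0.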

2. Port From Another Library, Project, or Paper

  1. Create a strict source-to-target mapping before coding:
  • dataset rows and splits
  • prompt rendering and role ordering
  • tool I/O schema and stop logic
  • scoring math and aggregation
  • pass/fail thresholds and special cases
  2. Preserve one-to-one logical equivalence for what the model sees and what gets scored.
  3. Never invent unresolved formatting decisions. Ask the user to decide explicitly.
  4. Benchmark runtime and remove avoidable bottlenecks before handoff.
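
The mapping above can be enforced mechanically: score a handful of frozen fixture rows with both the reference scorer and the port, and fail loudly on any drift. A pure-Python sketch, assuming both scorers expose a (completion, answer) -> float interface (the names here are hypothetical):

```python
def check_port_equivalence(reference_scorer, ported_scorer, fixtures,
                           tol: float = 1e-9) -> list:
    """Return a list of (index, ref_score, port_score) mismatches.

    fixtures: iterable of (completion, answer) pairs frozen from the
    source project, so both implementations see identical inputs.
    """
    mismatches = []
    for i, (completion, answer) in enumerate(fixtures):
        ref = reference_scorer(completion, answer)
        port = ported_scorer(completion, answer)
        if abs(ref - port) > tol:
            mismatches.append((i, ref, port))
    return mismatches

# Example: an exact-match scorer ported without change should drift nowhere.
exact = lambda c, a: 1.0 if c.strip() == a.strip() else 0.0
assert check_port_equivalence(exact, exact, [("42", "42"), ("41", "42")]) == []
```

An empty mismatch list is the concrete evidence to attach to the port-equivalence notes in the deliverable.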

3. Start From Hub Environment

  1. Install or pull the closest baseline:
prime env install owner/name
prime env pull owner/name -t ./tmp-env
  2. Keep proven interfaces stable unless a migration is deliberate and explicit.
  3. Re-run smoke evals after each major change.

Non-Negotiable Quality Rules

  1. Use deterministic, well-defined reward checks or LLM judges.
  2. Avoid best-effort deterministic heuristics, such as keyword-style checks, except as an explicit last resort with user sign-off.
  3. Make environments self-contained after install. Do not require users to run background servers before load_environment().
  4. Manage external resources inside the environment lifecycle.
  5. Validate required secrets in load_environment() via vf.ensure_keys(...).
  6. Surface feature limits directly. Do not ship hacky workarounds without explicit user approval.
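
The secret check in rule 5 amounts to failing fast when required environment variables are missing. A standalone approximation of what a vf.ensure_keys-style guard does (the real verifiers helper may behave differently; this is a sketch):

```python
import os

def ensure_keys(*names: str) -> None:
    """Raise one error naming every missing secret, instead of failing
    later mid-rollout with a cryptic authentication error."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
```

Calling a guard like this at the top of load_environment() surfaces a clear error at load time rather than partway through an evaluation.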

Verification Gate

Run these before claiming completion:

prime env install my-env
prime eval run my-env -m gpt-4.1-mini -n 5
prime eval run my-env -m gpt-4.1-mini -n 50 -r 1 -s

If multi-turn or tool-heavy, also run with higher rollouts:

prime eval run my-env -m gpt-4.1-mini -n 30 -r 3 -s

Publish Gate Before Large Evals Or Training

  1. After smoke tests pass and behavior is stable, recommend pushing to Hub before large evals or RL training.
  2. Ask the user explicitly whether visibility should be PUBLIC or PRIVATE.
  3. Use:
prime env push my-env --visibility PUBLIC

or

prime env push my-env --visibility PRIVATE
  4. For hosted or large-scale workflows, prefer running with the Hub slug after push:
prime eval run owner/my-env -m gpt-4.1-mini -n 200 -r 3 -s

Synthetic Data

  1. Ask users for preferences on which LLMs to use for synthetic data generation and curation before implementation.
  2. Prefer generating synthetic data from raw source documents whenever possible instead of relying only on hand-authored prompts.
  3. Use LLM orchestration (planner/generator/validator loops) to improve sample quality and diversity.
  4. Use back-translation: start from complete materials and decompose them into incomplete tasks, criteria, or partial artifacts that the model must reconstruct.
  5. Use fan-out subtopic sampling from LLMs to expand coverage and avoid overfitting to a narrow slice of the domain.
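
Back-translation (item 4) can be sketched as a pure function that turns a complete source document into incomplete reconstruction tasks. The task shape below is an illustrative assumption; a real pipeline would delegate decomposition and validation to LLM calls rather than naive sentence splitting:

```python
import random

def back_translate(document: str, n_tasks: int = 3, seed: int = 0) -> list:
    """Decompose a complete document into incomplete tasks by blanking
    one sentence per task; the blanked sentence becomes the answer."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    rng = random.Random(seed)  # seeded so task sampling is reproducible
    picks = rng.sample(range(len(sentences)), min(n_tasks, len(sentences)))
    tasks = []
    for i in picks:
        prompt = ". ".join(
            "____" if j == i else s for j, s in enumerate(sentences)
        ) + "."
        tasks.append({"prompt": prompt, "answer": sentences[i]})
    return tasks
```

Each task pairs an incomplete artifact with a deterministic reference answer, which keeps the resulting synthetic rows scorable by the same rubric machinery as hand-authored ones.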

Deliverable Format

Report:

  1. Environment ID and path.
  2. Exact install and eval commands used.
  3. Port-equivalence notes if migrated.
  4. Any unresolved user decisions that block strict fidelity.
