create-environments
Create or migrate verifiers environments for the Prime Lab ecosystem. Use when asked to build a new environment from scratch, port an eval or benchmark from papers or other libraries, start from an environment on the Hub, or convert existing tasks into a package that exposes load_environment and installs cleanly with prime env install.
Install
mkdir -p .claude/skills/create-environments && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7578" && unzip -o skill.zip -d .claude/skills/create-environments && rm skill.zip
Installs to .claude/skills/create-environments
About this skill
Create Environments
Goal
Build production-quality verifiers environments that work immediately in the Prime ecosystem: install, load, evaluate, and train without hidden setup.
Start With Ecosystem Paths
- Prefer ecosystem-native setup before custom scaffolding.
- Use this default loop:
prime env init my-env
prime env install my-env
prime eval run my-env -m gpt-4.1-mini -n 5
- Treat `prime eval run` as the canonical eval path. It saves results automatically, so do not add `--skip-upload` unless the user explicitly requests that deviation.
- Prefer an existing environment as a starting point when possible:
prime env list --search "keyword"
prime env info owner/name
prime env install owner/name
- For repository examples, use repo install when available:
prime env install math-python --from-repo
- Encourage users to keep endpoint aliases in `configs/endpoints.toml` so smoke tests can switch models quickly.
- Ask users whether they want instruct or reasoning models for validation.
- Instruct-first smoke choices: `gpt-4.1` series, `qwen3` instruct series.
- Reasoning validation choices: `gpt-5` series, `qwen3` thinking series, `glm` series.
Build Modes
1. Build From Scratch
- Define task contract first: prompt shape, allowed tools, stop conditions, rubric outputs, metrics.
- Select the smallest correct base class:
  - `SingleTurnEnv` for one-response tasks.
  - `MultiTurnEnv` for custom interaction loops.
  - `ToolEnv` or `MCPEnv` for stateless tools.
  - `StatefulToolEnv` for per-rollout resources.
  - `CliAgentEnv` for running agent binaries in sandboxes with API interception. Override `get_sandbox_resources(state)` for per-instance resources and `build_env_vars(state)` for custom env vars.
  - `ComposableEnv` (with `TaskSet`/`SandboxTaskSet` + `Harness`) for separating what to solve from how to solve it. Define a `TaskSet` (dataset, instructions, sandbox spec, rubric) and a `Harness` (install script, run command, system prompt), then wire them together with zero subclassing. Use `SandboxTaskSet` when tasks need sandboxes with per-instance images or resources.
- Implement `load_environment(...) -> vf.Environment` with explicit arguments.
- Add `pyproject.toml` defaults in `[tool.verifiers.eval]` only when stable.
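The skeleton above can be sketched end to end. This is a hedged illustration, assuming the current verifiers API (`vf.SingleTurnEnv`, `vf.Rubric`, a dataset with `question` and `answer` columns); the arithmetic task and the `exact_match` reward are hypothetical stand-ins for your real task contract.

```python
# Hedged sketch of a from-scratch single-turn environment.
def exact_match(completion, answer, **kwargs) -> float:
    """Deterministic reward: 1.0 iff the final response equals the answer."""
    text = completion if isinstance(completion, str) else completion[-1]["content"]
    return 1.0 if text.strip() == answer.strip() else 0.0

def load_environment(num_examples: int = 5):
    # Imports are deferred so the reward function above stays unit-testable
    # without the full stack installed.
    import verifiers as vf
    from datasets import Dataset

    dataset = Dataset.from_list(
        [{"question": f"What is {i} + {i}?", "answer": str(2 * i)}
         for i in range(num_examples)]
    )
    rubric = vf.Rubric(funcs=[exact_match])
    return vf.SingleTurnEnv(dataset=dataset, rubric=rubric)
```

Keeping `load_environment` arguments explicit (here `num_examples`) is what lets `prime eval run` configure the environment without hidden setup.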
2. Port From Another Library, Project, or Paper
- Create a strict source-to-target mapping before coding:
- dataset rows and splits
- prompt rendering and role ordering
- tool I/O schema and stop logic
- scoring math and aggregation
- pass/fail thresholds and special cases
- Preserve one-to-one logical equivalence for what the model sees and what gets scored.
- Never invent unresolved formatting decisions. Ask the user to decide explicitly.
- Benchmark runtime and remove avoidable bottlenecks before handoff.
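The equivalence requirement above can be checked mechanically before handoff: run the source library's scorer and the ported scorer over the same rows and require identical output. A minimal sketch with hypothetical scorers (`source_score` stands in for the original library's metric):

```python
# Hypothetical scorers: source_score stands in for the original library's
# metric; ported_score is the reimplementation under test.
def source_score(pred: str, gold: str) -> float:
    return float(pred.strip() == gold.strip())

def ported_score(pred: str, gold: str) -> float:
    return float(pred.strip() == gold.strip())

def check_port(rows):
    """Return the rows where the port diverges from the source scoring."""
    return [(p, g) for p, g in rows
            if source_score(p, g) != ported_score(p, g)]

mismatches = check_port([("42", "42"), (" 42", "42"), ("41", "42")])
assert not mismatches, f"port diverges on: {mismatches}"
```

Any mismatch is an unresolved formatting or scoring decision to surface to the user, not a detail to paper over.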
3. Start From Hub Environment
- Install or pull the closest baseline:
prime env install owner/name
prime env pull owner/name -t ./tmp-env
- Keep proven interfaces stable unless a migration is deliberate and explicit.
- Re-run smoke evals after each major change.
Non-Negotiable Quality Rules
- Use deterministic, well-defined reward checks or LLM judges.
- Avoid brittle best-effort heuristics, such as keyword-style checks, except as an explicit last resort with user sign-off.
- Make environments self-contained after install. Do not require users to run background servers before `load_environment()`.
- Manage external resources inside the environment lifecycle.
- Validate required secrets in `load_environment()` via `vf.ensure_keys(...)`.
- Surface feature limits directly. Do not ship hacky workarounds without explicit user approval.
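A deterministic check in the spirit of these rules normalizes explicitly and then compares exactly, so a reward of 1.0 has a single unambiguous meaning. A minimal stdlib sketch; the normalization rules here are an assumption to adjust per task:

```python
import re

def normalized_exact_match(completion: str, answer: str) -> float:
    """Deterministic reward: collapse whitespace and case, then compare exactly.

    Every normalization rule is explicit, so two runs on the same output can
    never disagree -- unlike keyword heuristics or fuzzy matching.
    """
    def norm(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip().lower()
    return 1.0 if norm(completion) == norm(answer) else 0.0
```

Anything that cannot be pinned down this precisely belongs with an LLM judge, not a heuristic.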
Verification Gate
Run these before claiming completion:
prime env install my-env
prime eval run my-env -m gpt-4.1-mini -n 5
prime eval run my-env -m gpt-4.1-mini -n 50 -r 1 -s
If multi-turn or tool-heavy, also run with higher rollouts:
prime eval run my-env -m gpt-4.1-mini -n 30 -r 3 -s
Publish Gate Before Large Evals Or Training
- After smoke tests pass and behavior is stable, recommend pushing to Hub before large evals or RL training.
- Ask the user explicitly whether visibility should be `PUBLIC` or `PRIVATE`.
- Use:
prime env push my-env --visibility PUBLIC
or
prime env push my-env --visibility PRIVATE
- For hosted or large-scale workflows, prefer running with the Hub slug after push:
prime eval run owner/my-env -m gpt-4.1-mini -n 200 -r 3 -s
Synthetic Data
- Ask users for preferences on which LLMs to use for synthetic data generation and curation before implementation.
- Prefer generating synthetic data from raw source documents whenever possible instead of relying only on hand-authored prompts.
- Use LLM orchestration (planner/generator/validator loops) to improve sample quality and diversity.
- Use back-translation: start from complete materials and decompose them into incomplete tasks, criteria, or partial artifacts that the model must reconstruct.
- Use fan-out subtopic sampling from LLMs to expand coverage and avoid overfitting to a narrow slice of the domain.
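The back-translation step above can be sketched in a few lines: start from a complete document, delete one piece, and emit a reconstruction task whose answer key is the deleted piece. A stdlib-only illustration; a real pipeline would use an LLM to choose and phrase the hole:

```python
import random

def back_translate(document: str, rng: random.Random) -> dict:
    """Turn a complete document into a fill-in task with a known answer key."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    hole = rng.randrange(len(sentences))      # pick one sentence to blank out
    answer = sentences[hole]
    sentences[hole] = "____"
    return {
        "question": "Fill in the blank: " + ". ".join(sentences) + ".",
        "answer": answer,
    }

rng = random.Random(0)
task = back_translate("Sort the list. Remove duplicates. Return the result.", rng)
```

Because the answer key is carved out of real source material, the resulting reward check stays deterministic.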
Deliverable Format
Report:
- Environment ID and path.
- Exact install and eval commands used.
- Port-equivalence notes if migrated.
- Any unresolved user decisions that block strict fidelity.