add-install-docker-ci-e2e
Adds an install command to the install script, a Docker build stage to the Dockerfile, and CI jobs for the Docker build, the install script, and an embodied e2e test when introducing a new model or environment in RLinf. Use when adding a new embodied model (e.g. dexbotic), a new env (e.g. maniskill_libero), or a new model+env combination that should be installable, dockerized, and tested in CI.
Install

```shell
mkdir -p .claude/skills/add-install-docker-ci-e2e && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3336" && unzip -o skill.zip -d .claude/skills/add-install-docker-ci-e2e && rm skill.zip
```

Installs to `.claude/skills/add-install-docker-ci-e2e`.
Add Install, Docker Build, and CI for a New Model or Environment
Use this skill when adding a new model or new environment (or combination) to RLinf so that: (1) users can install it via requirements/install.sh, (2) a Docker image can be built for it (optional), (3) CI runs install, Docker build, and an end-to-end test.
1. Install script (requirements/install.sh)
- Register the model or env
  - New model: add it to `SUPPORTED_MODELS` (e.g. `"dexbotic"`).
  - New environment: add it to `SUPPORTED_ENVS` (e.g. `"maniskill_libero"`).
- Implement install logic
  - New model: add `install_<model>_model()` that switches on `ENV_NAME` and, for each supported env, creates the venv and installs the common embodied deps, the env-specific deps, and the model. Call it from the main `case "$MODEL"`: add a new `<model_name>)` branch that runs `install_<model>_model`.
  - New env only (no new model): either add a new env branch inside an existing `install_*_model()`, or add `install_<env>_env()` and call it from the relevant model installers. If the env is used by `install_env_only`, add a branch in `install_env_only` for that env.
- Help text
  - `print_help` shows `SUPPORTED_MODELS` and `SUPPORTED_ENVS`; no change is needed if you only added to those arrays.

See reference.md for exact variable names and code patterns.
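The registration and dispatch pattern above can be sketched as a minimal script. This is an illustration under stated assumptions: the array names, function shape, and `case "$MODEL"` dispatch follow the conventions described here, but the install bodies are stubs, and the dexbotic/maniskill_libero specifics are placeholders rather than the real logic in requirements/install.sh.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the install.sh registration pattern described above.
set -euo pipefail

SUPPORTED_MODELS=("openvla" "openpi" "dexbotic")   # new model appended here
SUPPORTED_ENVS=("maniskill" "maniskill_libero")    # new env appended here

install_dexbotic_model() {
  case "$ENV_NAME" in
    maniskill_libero)
      # Real script: create the venv, install common embodied deps,
      # env-specific deps, then the model itself.
      echo "install dexbotic for maniskill_libero"
      ;;
    *)
      echo "env '$ENV_NAME' is not supported for dexbotic" >&2
      return 1
      ;;
  esac
}

MODEL="${MODEL:-dexbotic}"
ENV_NAME="${ENV_NAME:-maniskill_libero}"

# Main dispatch: a new <model_name>) branch calls the new installer.
case "$MODEL" in
  dexbotic) install_dexbotic_model ;;
  *) echo "unknown model: $MODEL" >&2; exit 1 ;;
esac
```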
2. Dockerfile (docker/Dockerfile)
- Base image
  - If the combo needs a different base (e.g. Ubuntu 20 for ROS/Franka), add: `FROM <base> AS base-image-embodied-<target>`.
  - Otherwise reuse: `FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04 AS base-image-embodied-<target>`.
- Build stage
  - Add a stage: `FROM embodied-common-image AS embodied-<target>-image`.
  - Single RUN for all installs: if the image installs multiple envs (multiple model+env combinations or venvs), chain every `install.sh` call in one `RUN` with `&&`. Splitting installs across multiple `RUN` layers breaks uv's hardlink mode (`UV_LINK_MODE=hardlink`), because the cache from the previous layer is not in the same layer for hardlinking. Example: `RUN bash requirements/install.sh embodied --venv openvla --model openvla --env maniskill_libero && \` followed by `bash requirements/install.sh embodied --venv openpi --model openpi --env maniskill_libero`.
  - Put any asset download/link in the same or a following `RUN`; then add `RUN echo "source \${UV_PATH}/<venv>/bin/activate" >> ~/.bashrc` for the default env.
- Final stage
  - The last stage is `FROM ${BUILD_TARGET}-image AS final-image`. Valid `BUILD_TARGET` values are those with a matching `*-image` stage (e.g. `reason`, `embodied-maniskill_libero`, `embodied-dexbotic-maniskill_libero`). Adding a new stage makes the new target valid; the final stage line itself does not change.

Naming: `BUILD_TARGET` is typically `embodied-<env>` (e.g. `embodied-maniskill_libero`) or `embodied-<env>-<model>` when one image combines multiple models (e.g. `behavior-openvlaoft`). Match the pattern used by existing stages.
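The stage layout above can be sketched as a Dockerfile fragment. The stage names follow this doc's conventions; the target name, venv names, and install flags are illustrative assumptions, not copied from the real docker/Dockerfile.

```dockerfile
# Hypothetical build stage for an assumed target "dexbotic-maniskill_libero".
FROM embodied-common-image AS embodied-dexbotic-maniskill_libero-image

# One RUN so uv's hardlink cache lives in the same layer as every install.
RUN bash requirements/install.sh embodied --venv dexbotic --model dexbotic --env maniskill_libero && \
    bash requirements/install.sh embodied --venv openvla --model openvla --env maniskill_libero

# Activate the default venv in interactive shells.
RUN echo "source \${UV_PATH}/dexbotic/bin/activate" >> ~/.bashrc

# Existing final stage, unchanged:
# FROM ${BUILD_TARGET}-image AS final-image
```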
3. CI: Docker build (.github/workflows/docker-build.yml)
Add a job that builds the new image:
- Job id: `build-embodied-<target>` (same `<target>` as the Dockerfile stage name, e.g. `build-embodied-maniskill_libero`).
- Reuse the same steps as existing jobs: maximize storage, checkout, set up Docker Buildx, then build with `BUILD_TARGET=embodied-<target>`, `NO_MIRROR=true`, `outputs: type=cacheonly`, and a tag like `rlinf:embodied-<target>`.

Copy an existing `build-embodied-*` job and replace the target name. See reference.md.
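A job following those steps might look like the sketch below. The build-arg and output names come from this doc; the runner label, action versions, and step names are assumptions in the style of typical existing jobs, not copied from the real workflow.

```yaml
# Hypothetical docker-build.yml job for an assumed target.
  build-embodied-dexbotic-maniskill_libero:
    runs-on: ubuntu-latest
    steps:
      - name: Maximize build space
        uses: easimon/maximize-build-space@v10
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/Dockerfile
          build-args: |
            BUILD_TARGET=embodied-dexbotic-maniskill_libero
            NO_MIRROR=true
          outputs: type=cacheonly
          tags: rlinf:embodied-dexbotic-maniskill_libero
```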
4. CI: Install script (.github/workflows/install.yml)
Add an "Install <model>-<env>" step (or "Install <model>" with one or more envs) to the build job:

- `pip install uv` (and `uv cache prune --ci` if desired).
- `bash requirements/install.sh embodied --model <model> --env <env>` (add `TEST_BUILD=1` only if the install script is designed to support it for that target).
- `rm -rf .venv` before the next install.

For multiple envs of the same model, use multiple install.sh calls, each followed by `rm -rf .venv`. For special runners (e.g. Franka on Ubuntu 20.04), follow the existing build-franka pattern (container image, env vars, loop over versions if any).
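The steps above, for a model with two envs, can be sketched as follows; the model/env names and step titles are illustrative, and the surrounding job definition is assumed to already exist in install.yml.

```yaml
# Hypothetical install.yml steps for an assumed model with two envs.
      - name: Install uv
        run: pip install uv
      - name: Install dexbotic-maniskill_libero
        run: |
          bash requirements/install.sh embodied --model dexbotic --env maniskill_libero
          rm -rf .venv
      - name: Install dexbotic-maniskill
        run: |
          bash requirements/install.sh embodied --model dexbotic --env maniskill
          rm -rf .venv
```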
5. CI: Embodied e2e test (.github/workflows/embodied-e2e-tests.yml)
- Test config
  - Add a YAML config under `tests/e2e_tests/embodied/` (e.g. `<env>_<algo>_<model>.yaml`). The e2e runner is `train_embodied_agent.py` with `--config-name <name>`; the config name is the filename without `.yaml`.
- Workflow job
  - Add a job (e.g. `embodied-<model>-<env>-test`):
    - Checkout.
    - Create the embodied environment: set `UV_*` and any required path env vars (e.g. `LIBERO_PATH`, `GR00T_PATH`), then run `bash requirements/install.sh embodied --model <model> --env <env>`.
    - Run the test: `source .venv/bin/activate`, set `REPO_PATH`, then `bash tests/e2e_tests/embodied/run.sh <config_name>` (or `run_async.sh` if the test is async). Use a reasonable `timeout-minutes`.
    - Clean up: `rm -rf .venv`, `uv cache prune`, and any test-specific cleanup.

Use `runs-on: embodied` so the job runs on a runner with GPU/datasets. See existing jobs in the file for env vars and step order.
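Putting the job together might look like the sketch below. The step order, env-var names, and `runs-on: embodied` follow this doc; the config name, dataset path, and timeout are illustrative assumptions.

```yaml
# Hypothetical embodied-e2e-tests.yml job for an assumed model+env.
  embodied-dexbotic-maniskill_libero-test:
    runs-on: embodied
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - name: Create embodied environment
        run: |
          export LIBERO_PATH=/datasets/libero   # illustrative path
          bash requirements/install.sh embodied --model dexbotic --env maniskill_libero
      - name: Run test
        run: |
          source .venv/bin/activate
          export REPO_PATH="$GITHUB_WORKSPACE"
          bash tests/e2e_tests/embodied/run.sh maniskill_libero_grpo_dexbotic
      - name: Clean up
        if: always()
        run: |
          rm -rf .venv
          uv cache prune
```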
Checklist
- Install script: model in `SUPPORTED_MODELS` and/or env in `SUPPORTED_ENVS`; `install_*` function and `case "$MODEL"` (or env) branch updated.
- Dockerfile: `base-image-embodied-<target>` if needed; `embodied-<target>-image` stage with `install.sh` and a default venv. If multiple envs: all install.sh calls chained in one `RUN` (for uv hardlink).
- docker-build.yml: new job `build-embodied-<target>` with `BUILD_TARGET=embodied-<target>`.
- install.yml: new install step(s) for the new model/env.
- E2e: config YAML in `tests/e2e_tests/embodied/`; new job in `embodied-e2e-tests.yml` (install env, run `run.sh <config_name>`, clean up).