stable-baselines3
Production-ready reinforcement learning algorithms (PPO, SAC, DQN, TD3, DDPG, A2C) with scikit-learn-like API. Use for standard RL experiments, quick prototyping, and well-documented algorithm implementations. Best for single-agent RL with Gymnasium environments. For high-performance parallel training, multi-agent systems, or custom vectorized environments, use pufferlib instead.
Install
mkdir -p .claude/skills/stable-baselines3 && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2625" && unzip -o skill.zip -d .claude/skills/stable-baselines3 && rm skill.zip
Installs to .claude/skills/stable-baselines3
About this skill
Stable Baselines3
Overview
Stable Baselines3 (SB3) is a PyTorch-based library providing reliable implementations of reinforcement learning algorithms. This skill provides comprehensive guidance for training RL agents, creating custom environments, implementing callbacks, and optimizing training workflows using SB3's unified API.
Core Capabilities
1. Training RL Agents
Basic Training Pattern:
import gymnasium as gym
from stable_baselines3 import PPO
# Create environment
env = gym.make("CartPole-v1")
# Initialize agent
model = PPO("MlpPolicy", env, verbose=1)
# Train the agent
model.learn(total_timesteps=10000)
# Save the model
model.save("ppo_cartpole")
# Load the model (without prior instantiation)
model = PPO.load("ppo_cartpole", env=env)
Important Notes:
- total_timesteps is a lower bound; actual training may exceed it due to batch collection
- Call model.load() on the algorithm class (e.g., PPO.load(...)), not on an existing instance
- The replay buffer is NOT saved with the model to save space (see the sketch below)
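For off-policy algorithms, the replay buffer can be persisted separately with save_replay_buffer/load_replay_buffer. A minimal sketch (file names are illustrative):

from stable_baselines3 import SAC

model = SAC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=5_000)

# model.save() does not include the replay buffer; persist it separately
model.save("sac_pendulum")
model.save_replay_buffer("sac_replay_buffer")

# Later: reload both before continuing off-policy training
model = SAC.load("sac_pendulum")
model.load_replay_buffer("sac_replay_buffer")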
Algorithm Selection:
Use references/algorithms.md for detailed algorithm characteristics and selection guidance. Quick reference:
- PPO/A2C: General-purpose, supports all action space types, good for multiprocessing
- SAC/TD3: Continuous control, off-policy, sample-efficient
- DQN: Discrete actions, off-policy
- HER: Goal-conditioned tasks
See scripts/train_rl_agent.py for a complete training template with best practices.
2. Custom Environments
Requirements:
Custom environments must inherit from gymnasium.Env and implement:
- __init__(): Define action_space and observation_space
- reset(seed, options): Return initial observation and info dict
- step(action): Return observation, reward, terminated, truncated, info
- render(): Visualization (optional)
- close(): Clean up resources
Key Constraints:
- Image observations must be np.uint8 in range [0, 255]
- Use channel-first format when possible (channels, height, width)
- SB3 normalizes images automatically by dividing by 255
- Set normalize_images=False in policy_kwargs if observations are pre-normalized
- SB3 does NOT support Discrete or MultiDiscrete spaces with start != 0
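A minimal sketch of a conforming environment (the dynamics and reward are placeholders):

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class GoLeftRightEnv(gym.Env):
    """Toy environment: move a position along a line toward a goal at 10."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(2)  # 0 = left, 1 = right
        self.observation_space = spaces.Box(low=0.0, high=10.0, shape=(1,), dtype=np.float32)
        self.pos = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = 0
        return np.array([self.pos], dtype=np.float32), {}

    def step(self, action):
        self.pos = int(np.clip(self.pos + (1 if action == 1 else -1), 0, 10))
        terminated = self.pos == 10           # goal reached
        truncated = False                     # no time limit in this sketch
        reward = 1.0 if terminated else -0.1
        return np.array([self.pos], dtype=np.float32), reward, terminated, truncated, {}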
Validation:
from stable_baselines3.common.env_checker import check_env
check_env(env, warn=True)
See scripts/custom_env_template.py for a complete custom environment template and references/custom_environments.md for comprehensive guidance.
3. Vectorized Environments
Purpose: Vectorized environments run multiple environment instances in parallel, accelerating training and enabling certain wrappers (frame-stacking, normalization).
Types:
- DummyVecEnv: Sequential execution on current process (for lightweight environments)
- SubprocVecEnv: Parallel execution across processes (for compute-heavy environments)
Quick Setup:
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv
# Create 4 parallel environments
env = make_vec_env("CartPole-v1", n_envs=4, vec_env_cls=SubprocVecEnv)
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=25000)
Off-Policy Optimization:
When using multiple environments with off-policy algorithms (SAC, TD3, DQN), set gradient_steps=-1 to perform one gradient update per environment step, balancing wall-clock time and sample efficiency.
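A sketch of this setup (environment and hyperparameters are illustrative):

from stable_baselines3 import SAC
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("Pendulum-v1", n_envs=4)

# gradient_steps=-1 performs as many gradient updates as environment steps
# collected per rollout (train_freq=1 across 4 envs -> 4 updates per rollout)
model = SAC("MlpPolicy", env, train_freq=1, gradient_steps=-1, verbose=1)
model.learn(total_timesteps=20_000)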
API Differences:
- reset() returns only observations (info available in vec_env.reset_infos)
- step() returns a 4-tuple (obs, rewards, dones, infos), not a 5-tuple
- Environments auto-reset after episodes
- Terminal observations are available via infos[env_idx]["terminal_observation"] (see the sketch below)
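A short interaction loop illustrating these differences (random actions, for demonstration only):

import numpy as np
from stable_baselines3.common.env_util import make_vec_env

vec_env = make_vec_env("CartPole-v1", n_envs=2)
obs = vec_env.reset()  # observations only; infos are in vec_env.reset_infos

for _ in range(100):
    actions = np.array([vec_env.action_space.sample() for _ in range(vec_env.num_envs)])
    obs, rewards, dones, infos = vec_env.step(actions)  # 4-tuple
    for idx, done in enumerate(dones):
        if done:
            # obs[idx] is already the first observation of the next episode;
            # the true final observation lives in the info dict
            final_obs = infos[idx]["terminal_observation"]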
See references/vectorized_envs.md for detailed information on wrappers and advanced usage.
4. Callbacks for Monitoring and Control
Purpose: Callbacks enable monitoring metrics, saving checkpoints, implementing early stopping, and custom training logic without modifying core algorithms.
Common Callbacks:
- EvalCallback: Evaluate periodically and save best model
- CheckpointCallback: Save model checkpoints at intervals
- StopTrainingOnRewardThreshold: Stop when target reward reached
- ProgressBarCallback: Display training progress with timing
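For example, EvalCallback and StopTrainingOnRewardThreshold can be chained so training stops once evaluation performance crosses a target. A sketch (paths and thresholds are illustrative):

import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback, StopTrainingOnRewardThreshold
from stable_baselines3.common.monitor import Monitor

eval_env = Monitor(gym.make("CartPole-v1"))

# Stop once mean evaluation reward reaches 475 (CartPole-v1 caps at 500)
stop_callback = StopTrainingOnRewardThreshold(reward_threshold=475, verbose=1)

# Evaluate every 1000 steps, keep the best model, run stop_callback on each new best
eval_callback = EvalCallback(
    eval_env,
    callback_on_new_best=stop_callback,
    eval_freq=1000,
    best_model_save_path="./logs/best_model/",
    verbose=1,
)

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=50_000, callback=eval_callback)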
Custom Callback Structure:
from stable_baselines3.common.callbacks import BaseCallback
class CustomCallback(BaseCallback):
    def _on_training_start(self):
        # Called before the first rollout
        pass

    def _on_step(self):
        # Called after each environment step
        # Return False to stop training
        return True

    def _on_rollout_end(self):
        # Called at the end of each rollout
        pass
Available Attributes:
- self.model: The RL algorithm instance
- self.num_timesteps: Total number of environment steps taken
- self.training_env: The training environment
Chaining Callbacks:
from stable_baselines3.common.callbacks import CallbackList
callback = CallbackList([eval_callback, checkpoint_callback, custom_callback])
model.learn(total_timesteps=10000, callback=callback)
See references/callbacks.md for comprehensive callback documentation.
5. Model Persistence and Inspection
Saving and Loading:
from stable_baselines3.common.vec_env import VecNormalize

# Save model
model.save("model_name")
# Save normalization statistics (if using VecNormalize)
vec_env.save("vec_normalize.pkl")
# Load model
model = PPO.load("model_name", env=env)
# Load normalization statistics
vec_env = VecNormalize.load("vec_normalize.pkl", vec_env)
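VecNormalize itself wraps a vectorized environment. A minimal sketch of creating it and switching to evaluation mode:

from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import VecNormalize

vec_env = make_vec_env("Pendulum-v1", n_envs=4)
# Maintain running statistics to normalize observations and rewards
vec_env = VecNormalize(vec_env, norm_obs=True, norm_reward=True)

# At evaluation time, freeze the statistics and report raw rewards
vec_env.training = False
vec_env.norm_reward = False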
Parameter Access:
# Get parameters
params = model.get_parameters()
# Set parameters
model.set_parameters(params)
# Access PyTorch state dict
state_dict = model.policy.state_dict()
6. Evaluation and Recording
Evaluation:
from stable_baselines3.common.evaluation import evaluate_policy
mean_reward, std_reward = evaluate_policy(
    model,
    env,
    n_eval_episodes=10,
    deterministic=True,
)
Video Recording:
from stable_baselines3.common.vec_env import VecVideoRecorder
# Wrap environment with video recorder
env = VecVideoRecorder(
    env,
    "videos/",
    record_video_trigger=lambda x: x % 2000 == 0,
    video_length=200,
)
See scripts/evaluate_agent.py for a complete evaluation and recording template.
7. Advanced Features
Learning Rate Schedules:
def linear_schedule(initial_value):
    def func(progress_remaining):
        # progress_remaining goes from 1 to 0
        return progress_remaining * initial_value
    return func
model = PPO("MlpPolicy", env, learning_rate=linear_schedule(0.001))
Multi-Input Policies (Dict Observations):
model = PPO("MultiInputPolicy", env, verbose=1)
Use when observations are dictionaries (e.g., combining images with sensor data).
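A sketch of a Dict observation space as it might be declared in a custom environment's __init__ (keys and shapes are illustrative):

import numpy as np
from gymnasium import spaces

observation_space = spaces.Dict({
    "image": spaces.Box(low=0, high=255, shape=(3, 64, 64), dtype=np.uint8),
    "vector": spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32),
})
# With a Dict space like this, pass "MultiInputPolicy" rather than "MlpPolicy"/"CnnPolicy"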
Hindsight Experience Replay:
from stable_baselines3 import SAC, HerReplayBuffer
model = SAC(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,
        goal_selection_strategy="future",
    ),
)
TensorBoard Integration:
model = PPO("MlpPolicy", env, tensorboard_log="./tensorboard/")
model.learn(total_timesteps=10000)
Workflow Guidance
Starting a New RL Project:
1. Define the problem: Identify observation space, action space, and reward structure
2. Choose algorithm: Use references/algorithms.md for selection guidance
3. Create/adapt environment: Use scripts/custom_env_template.py if needed
4. Validate environment: Always run check_env() before training
5. Set up training: Use scripts/train_rl_agent.py as a starting template
6. Add monitoring: Implement callbacks for evaluation and checkpointing
7. Optimize performance: Consider vectorized environments for speed
8. Evaluate and iterate: Use scripts/evaluate_agent.py for assessment
Common Issues:
- Memory errors: Reduce buffer_size for off-policy algorithms or use fewer parallel environments
- Slow training: Consider SubprocVecEnv for compute-heavy parallel environments
- Unstable training: Try different algorithms, tune hyperparameters, or check reward scaling
- Import errors: Ensure stable_baselines3 is installed: uv pip install stable-baselines3[extra]
Resources
scripts/
- train_rl_agent.py: Complete training script template with best practices
- evaluate_agent.py: Agent evaluation and video recording template
- custom_env_template.py: Custom Gym environment template
references/
- algorithms.md: Detailed algorithm comparison and selection guide
- custom_environments.md: Comprehensive custom environment creation guide
- callbacks.md: Complete callback system reference
- vectorized_envs.md: Vectorized environment usage and wrappers
Installation
# Basic installation
uv pip install stable-baselines3
# With extra dependencies (Tensorboard, etc.)
uv pip install stable-baselines3[extra]