# debug-distributed

Guide for debugging distributed training issues in AReaL. Use when the user encounters hangs, wrong results, OOM, or communication errors.
## Install

```bash
mkdir -p .claude/skills/debug-distributed && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3466" && unzip -o skill.zip -d .claude/skills/debug-distributed && rm skill.zip
```

Installs to `.claude/skills/debug-distributed`.
## Debug Distributed Training

Debugging guide for distributed training issues in AReaL (FSDP2, TP, CP, EP).
## When to Use

This skill is triggered when:

- Training hangs or deadlocks
- Results differ across ranks or are numerically wrong
- OOM errors occur in distributed settings
- NCCL/communication errors or device mesh issues appear
## Debugging Principles

### Minimal Reproduction

Always follow the minimal-demo principle: reproduce with the least code possible to narrow down the issue faster.
```python
# Bad: debug inside the full training loop.
# Good: create a minimal script that isolates the failing op.
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()

# Reproduce the exact operation that fails
tensor = torch.ones(10).cuda()
dist.all_reduce(tensor)  # <-- isolate the failing op
print(f"Rank {rank}: {tensor}")
```
Reduction strategy:

- Remove unrelated model components
- Use small tensor sizes
- Reduce world_size to the minimum (e.g., 2 GPUs)
- Remove torch.compile if possible
- Disable activation checkpointing
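Taken to the extreme, the stripped-down repro can even run as a single-process `gloo` group on CPU to check the op's semantics before scaling the same script back up to 2 GPU ranks. A sketch (the port number is an arbitrary choice):

```python
# Minimal CPU-only reproduction harness: single-process gloo group,
# tiny tensor, only the op under suspicion.
import os
import torch
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29517")
dist.init_process_group("gloo", rank=0, world_size=1)

t = torch.ones(4)
dist.all_reduce(t)  # sum over all ranks; with world_size=1 it is a no-op
result = t.clone()

dist.destroy_process_group()
print(result)
```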
## Step-by-Step Debugging Guide

### 1. Hang Debugging (Deadlocks, Synchronization)

Environment variables for debugging:

```bash
# Full debug logging
export TORCH_DISTRIBUTED_DEBUG=DETAIL
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=ALL

# torch.compile debugging
export TORCH_LOGS="+dynamo,recompiles"
export TORCHDYNAMO_VERBOSE=1
```
Dump call stacks with py-spy (for hung processes):

```bash
# Find process IDs
ps aux | grep python

# Dump the call stack of a specific rank
py-spy dump --pid <PID>

# Record a flame graph for performance analysis
py-spy record -o profile.svg --pid <PID> --duration 30
```
Common causes:

- **Mismatched collectives**: one rank calls `all_reduce`, another doesn't.
- **Wrong process group**: a collective is issued on the wrong group.
- **Tensor shape mismatch**: different shapes across ranks.
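Mismatched collectives can be caught before they hang by gathering a per-call tag from every rank first. A sketch with a hypothetical helper (`check_collective_alignment` is not part of AReaL or PyTorch), demonstrated on a single-process gloo group so it runs anywhere:

```python
# Hypothetical debugging helper: gather a string tag from every rank before
# a collective; divergent tags mean some rank is about to issue a different
# collective, which would otherwise deadlock.
import os
import torch.distributed as dist

def check_collective_alignment(tag, group=None):
    world = dist.get_world_size(group)
    tags = [None] * world
    # all_gather_object is itself a collective -- use only while debugging
    dist.all_gather_object(tags, tag, group=group)
    if len(set(tags)) > 1:
        raise RuntimeError(f"Collective mismatch across ranks: {tags}")
    return tags

# single-process gloo group so the sketch is runnable without GPUs
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29518")
dist.init_process_group("gloo", rank=0, world_size=1)
tags_seen = check_collective_alignment("all_reduce#step0")
dist.destroy_process_group()
```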
Debug steps:

```python
# Verify group membership
mesh = parallel_dims.get_mesh("dp_shard_cp")
group = mesh.get_group()
print(f"Rank {dist.get_rank()}: group size = {dist.get_world_size(group)}")

# Print shapes on all ranks
print(f"Rank {dist.get_rank()}: tensor.shape = {tensor.shape}")
dist.barrier()
```
Timeout adjustment (for debugging only):

```python
from datetime import timedelta

from areal.engine.core.distributed import patch_dist_group_timeout

patch_dist_group_timeout(timedelta(minutes=30))
```
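Outside AReaL, the same effect comes from the `timeout` argument of `init_process_group`. A minimal sketch using gloo so it runs on CPU (the port number is arbitrary):

```python
# A long timeout keeps a stuck rank alive long enough to attach py-spy
# instead of the watchdog killing the job.
import os
from datetime import timedelta
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29519")
dist.init_process_group(
    "gloo", rank=0, world_size=1, timeout=timedelta(minutes=30)
)
initialized = dist.is_initialized()
dist.destroy_process_group()
```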
### 2. Wrong Results (Gradient, Reduction Issues)

Check DTensor placements:

```python
from torch.distributed.tensor import DTensor

if isinstance(param, DTensor):
    print(f"Param {name}: placements={param.placements}, mesh={param.device_mesh}")
```
Verify gradient reduction:

```python
for name, param in model.named_parameters():
    if param.grad is not None:
        print(f"Rank {dist.get_rank()}: {name} grad_sum = {param.grad.sum().item()}")
```
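A compact alternative is to reduce each gradient to a scalar fingerprint first, so one line per rank is enough to spot a broken reduction. A sketch with a hypothetical `grad_fingerprint` helper, demoed on a single process:

```python
# Hypothetical helper: one scalar per parameter; identical data plus a
# correct reduction should yield identical fingerprints on every rank.
import torch
import torch.nn as nn

def grad_fingerprint(model):
    return {
        name: round(float(p.grad.norm()), 6)
        for name, p in model.named_parameters()
        if p.grad is not None
    }

# tiny deterministic single-process demo
torch.manual_seed(0)
model = nn.Linear(4, 2, bias=False)
model(torch.ones(1, 4)).sum().backward()
fp = grad_fingerprint(model)
```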
### 3. OOM Issues (Memory, Sharding)

Check memory usage:

```python
print(f"Rank {dist.get_rank()}: "
      f"allocated={torch.cuda.memory_allocated()/1e9:.2f}GB, "
      f"reserved={torch.cuda.memory_reserved()/1e9:.2f}GB")
```
Check FSDP coverage:

```python
for name, param in model.named_parameters():
    is_dtensor = isinstance(param, DTensor)
    print(f"{name}: is_dtensor={is_dtensor}, shape={param.shape}")
```
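Instantaneous allocation numbers can miss transient spikes; bracketing one step with peak-memory stats catches them. A sketch (hypothetical helper; it reports 0.0 on CPU-only machines, where there is nothing to measure):

```python
# Measure the peak CUDA allocation of a single step by resetting the peak
# counter before the step and reading it back afterwards.
import torch

def peak_step_mem_gb(step_fn):
    if not torch.cuda.is_available():
        step_fn()
        return 0.0
    torch.cuda.reset_peak_memory_stats()
    step_fn()
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1e9

peak = peak_step_mem_gb(lambda: torch.ones(1024).sum())
```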
### 4. Communication Errors

| Error | Cause | Solution |
|---|---|---|
| `NCCL WARN Cuda failure` | GPU communication | Check NCCL version, GPU topology |
| `RuntimeError: Timed out` | Rank synchronization | Increase timeout, check code paths |
| Invalid device mesh | Mesh configuration | Verify `world_size = dp * tp * cp` |
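The mesh-size check in the last row is plain arithmetic and worth asserting up front. A sketch with a hypothetical `validate_mesh` helper (the `dp * tp * cp` factorization follows the table; EP is omitted here):

```python
# Hypothetical helper: fail fast with the exact factor that is wrong
# instead of a late "invalid device mesh" error.
def validate_mesh(world_size, dp, tp, cp):
    expected = dp * tp * cp
    if world_size != expected:
        raise ValueError(
            f"world_size={world_size} != dp*tp*cp={expected} "
            f"(dp={dp}, tp={tp}, cp={cp})"
        )
    return True

ok = validate_mesh(8, dp=2, tp=2, cp=2)
```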
## Debugging Tools

### Environment Variables Reference

| Variable | Purpose |
|---|---|
| `TORCH_DISTRIBUTED_DEBUG=DETAIL` | Detailed distributed logging |
| `NCCL_DEBUG=INFO` | NCCL communication logging |
| `NCCL_DEBUG_SUBSYS=ALL` | All NCCL subsystems |
| `TORCH_LOGS="+dynamo,recompiles"` | torch.compile logging |
| `TORCHDYNAMO_VERBOSE=1` | Dynamo verbose output |
| `CUDA_LAUNCH_BLOCKING=1` | Synchronous CUDA launches (slow; debugging only) |
### py-spy for Call Stack Analysis

```bash
# Install
pip install py-spy

# Dump the call stack of a hung process
py-spy dump --pid <PID>

# Dump all Python processes
pgrep -f python | xargs -I {} py-spy dump --pid {}

# Record a flame graph
py-spy record -o profile.svg --pid <PID> --duration 30
```
### Rank-Conditional Printing

```python
def print_all_ranks(msg):
    # Serialize output so ranks print in order instead of interleaving
    for r in range(dist.get_world_size()):
        if dist.get_rank() == r:
            print(f"[Rank {r}] {msg}", flush=True)
        dist.barrier()
```
### Check Device Mesh

```python
def debug_mesh(parallel_dims):
    mesh = parallel_dims.world_mesh
    for dim_name in mesh.mesh_dim_names:
        submesh = parallel_dims.get_mesh(dim_name)
        if submesh:
            print(f"Rank {dist.get_rank()}: {dim_name} size={submesh.size()}")
```
### Validate Tensor Consistency

```python
def check_tensor_consistency(tensor, name, group=None):
    local_sum = tensor.sum().item()
    tensor_sums = [None] * dist.get_world_size(group)
    dist.all_gather_object(tensor_sums, local_sum, group=group)
    if dist.get_rank() == 0 and len(set(tensor_sums)) > 1:
        print(f"WARNING: {name} inconsistent: {tensor_sums}")
```
## Key Files Reference
| Component | File |
|---|---|
| Parallel Dims | areal/experimental/models/archon/parallel_dims.py |
| Expert Parallel | areal/experimental/models/archon/expert_parallel.py |
| Ulysses (CP) | areal/experimental/models/archon/ulysses.py |
| FSDP/TP Apply | areal/experimental/models/archon/qwen2/infra/parallelize.py |
<!--
================================================================================
MAINTAINER GUIDE
================================================================================
Location: .claude/skills/debug-distributed/SKILL.md
Invocation: /debug-distributed

## Purpose
Debugging guide for distributed training issues. Covers FSDP2, Tensor
Parallelism, Context Parallelism, and Expert Parallelism.

## How to Update

### When Adding New Parallelism Features
1. Add section for the parallelism type
2. Document common error patterns and debugging snippets

### When PyTorch Distributed APIs Change
1. Update DTensor/DeviceMesh examples
2. Update environment variable references

### When New Error Patterns Emerge
1. Add to "Common Errors and Solutions" table
2. Reference relevant source files
================================================================================
-->