aoti-debug


Debug AOTInductor (AOTI) errors and crashes. Use when encountering AOTI segfaults, device mismatch errors, constant loading failures, or runtime errors from aot_compile, aot_load, aoti_compile_and_package, or aoti_load_package.

Install

mkdir -p .claude/skills/aoti-debug && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4024" && unzip -o skill.zip -d .claude/skills/aoti-debug && rm skill.zip

Installs to .claude/skills/aoti-debug

About this skill

AOTI Debugging Guide

This skill helps diagnose and fix common AOTInductor issues.

Error Pattern Routing

Check the error message and route to the appropriate sub-guide:

Triton Index Out of Bounds

If the error matches this pattern:

Assertion `index out of bounds: 0 <= tmpN < ksM` failed

→ Follow the guide in triton-index-out-of-bounds.md

All Other Errors

Continue with the sections below.


First Step: Always Check Device and Shape Matching

For ANY AOTI error (segfault, exception, crash, wrong output), ALWAYS check these first:

  1. Compile device == Load device: The model must be loaded on the same device type it was compiled on
  2. Input devices match: Runtime inputs must be on the same device as the compiled model
  3. Input shapes match: Runtime input shapes must match the shapes used during compilation (or satisfy dynamic shape constraints)
import torch

# During compilation - note the device and shapes
model = MyModel().eval()           # What device? CPU or .cuda()?
inp = torch.randn(2, 10)           # What device? What shape?
compiled_so = torch._export.aot_compile(model, (inp,))

# During loading - device type MUST match compilation
loaded = torch._export.aot_load(compiled_so, "???")  # Must match model/input device above

# During inference - device and shapes MUST match
out = loaded(inp.to("???"))  # Must match compile device, shape must match

If any of these don't match, you will get errors ranging from segfaults to exceptions to wrong outputs.

Key Constraint: Device Type Matching

AOTI requires compile and load to use the same device type.

  • If you compile on CUDA, you must load on CUDA (device index can differ)
  • If you compile on CPU, you must load on CPU
  • Cross-device loading (e.g., compile on GPU, load on CPU) is NOT supported

Common Error Patterns

1. Device Mismatch Segfault

Symptom: Segfault, exception, or crash during aot_load() or model execution.

Example error messages:

  • The specified pointer resides on host memory and is not registered with any CUDA device
  • Crash during constant loading in AOTInductorModelBase
  • Expected out tensor to have device cuda:0, but got cpu instead

Cause: Compile and load device types don't match (see "First Step" above).

Solution: Ensure compile and load use the same device type. If compiled on CPU, load on CPU. If compiled on CUDA, load on CUDA.

2. Input Device Mismatch at Runtime

Symptom: RuntimeError during model execution.

Cause: Input device doesn't match compile device (see "First Step" above).

Better Debugging: Set AOTI_RUNTIME_CHECK_INPUTS=1 when compiling for clearer errors. The generated code then validates all input properties at runtime, including device type, dtype, sizes, and strides:

AOTI_RUNTIME_CHECK_INPUTS=1 python your_script.py

This produces actionable error messages like:

Error: input_handles[0]: unmatched device type, expected: 0(cpu), but got: 1(cuda)

Debugging CUDA Illegal Memory Access (IMA) Errors

If you encounter CUDA illegal memory access errors, follow this systematic approach:

Step 1: Sanity Checks

Before diving deep, try these debugging flags:

AOTI_RUNTIME_CHECK_INPUTS=1
TORCHINDUCTOR_NAN_ASSERTS=1

These flags take effect at compilation time (at codegen time):

  • AOTI_RUNTIME_CHECK_INPUTS=1 checks if inputs satisfy the same guards used during compilation
  • TORCHINDUCTOR_NAN_ASSERTS=1 adds codegen before and after each kernel to check for NaN

Step 2: Pinpoint the CUDA IMA

CUDA IMA errors can be non-deterministic. Use these flags to trigger the error deterministically:

PYTORCH_NO_CUDA_MEMORY_CACHING=1
CUDA_LAUNCH_BLOCKING=1

These flags take effect at runtime:

  • PYTORCH_NO_CUDA_MEMORY_CACHING=1 disables PyTorch's caching allocator, which normally hands out larger buffers than a tensor strictly needs. Out-of-bounds accesses often land harmlessly inside that slack, which is usually why CUDA IMA errors are non-deterministic.
  • CUDA_LAUNCH_BLOCKING=1 forces kernels to launch one at a time. Without this, you get "CUDA kernel errors might be asynchronously reported" warnings since kernels launch asynchronously.

Step 3: Identify Problematic Kernels with Intermediate Value Debugger

Use the AOTI Intermediate Value Debugger to pinpoint the problematic kernel:

AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=3

This prints kernels one by one at runtime. Together with previous flags, this shows which kernel was launched right before the error.

To inspect inputs to a specific kernel:

AOT_INDUCTOR_FILTERED_KERNELS_TO_PRINT="triton_poi_fused_add_ge_logical_and_logical_or_lt_231,_add_position_embeddings_kernel_5" AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=2

If inputs to the kernel are unexpected, inspect the kernel that produces the bad input.
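The three steps above can be combined into a typical debugging session. This is a sketch: compile_model.py and run_model.py are placeholder script names, and the codegen-time flags are set on the compile step since that is where they take effect:

```shell
# Recompile with codegen-time instrumentation enabled
AOTI_RUNTIME_CHECK_INPUTS=1 \
TORCHINDUCTOR_NAN_ASSERTS=1 \
AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=3 \
python compile_model.py

# Run deterministically so the failing kernel is the last one printed
PYTORCH_NO_CUDA_MEMORY_CACHING=1 CUDA_LAUNCH_BLOCKING=1 python run_model.py
```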

Additional Debugging Tools

Logging and Tracing

  • tlparse / TORCH_TRACE: Provides complete output codes and records guards used
  • TORCH_LOGS: Use TORCH_LOGS="+inductor,output_code" to see more PT2 internal logs
  • TORCH_SHOW_CPP_STACKTRACES: Set to 1 to see more stack traces
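For example, combining the two runtime logging variables (your_script.py is a placeholder):

```shell
TORCH_LOGS="+inductor,output_code" TORCH_SHOW_CPP_STACKTRACES=1 python your_script.py
```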

Common Sources of Issues

  • Dynamic shapes: Historically a source of many IMAs. Pay special attention when debugging dynamic shape scenarios.
  • Custom ops: Especially when implemented in C++ with dynamic shapes. The meta function may need to be Symint'ified.

API Notes

Deprecated API

torch._export.aot_compile()  # Deprecated
torch._export.aot_load()     # Deprecated

Current API

torch._inductor.aoti_compile_and_package()
torch._inductor.aoti_load_package()

The new API stores device metadata in the package, so aoti_load_package() automatically uses the correct device type. You can only change the device index (e.g., cuda:0 vs cuda:1), not the device type.

Environment Variables Summary

Variable                                        | When         | Purpose
AOTI_RUNTIME_CHECK_INPUTS=1                     | Compile time | Validate inputs match compilation guards
TORCHINDUCTOR_NAN_ASSERTS=1                     | Compile time | Check for NaN before/after kernels
PYTORCH_NO_CUDA_MEMORY_CACHING=1                | Runtime      | Make IMA errors deterministic
CUDA_LAUNCH_BLOCKING=1                          | Runtime      | Force synchronous kernel launches
AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=3 | Compile time | Print kernels at runtime
TORCH_LOGS="+inductor,output_code"              | Runtime      | See PT2 internal logs
TORCH_SHOW_CPP_STACKTRACES=1                    | Runtime      | Show C++ stack traces
