langchain-incident-runbook
Incident response procedures for LangChain production issues. Use when responding to production incidents, diagnosing outages, or implementing emergency procedures for LLM applications. Trigger with phrases like "langchain incident", "langchain outage", "langchain production issue", "langchain emergency", "langchain down".
Install
mkdir -p .claude/skills/langchain-incident-runbook && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4221" && unzip -o skill.zip -d .claude/skills/langchain-incident-runbook && rm skill.zip
Installs to .claude/skills/langchain-incident-runbook
About this skill
LangChain Incident Runbook
Overview
Standard operating procedures for LangChain production incidents: provider outages, error rate spikes, latency degradation, memory issues, and cost overruns.
Severity Classification
| Level | Description | Response Time | Example |
|---|---|---|---|
| SEV1 | Complete outage | 15 min | All LLM calls failing |
| SEV2 | Major degradation | 30 min | >50% error rate, >10s latency |
| SEV3 | Minor degradation | 2 hours | <10% errors, slow responses |
| SEV4 | Low impact | 24 hours | Intermittent issues, warnings |
Runbook 1: LLM Provider Outage
Detect
# Check provider status pages
curl -s https://status.openai.com/api/v2/status.json | jq '.status'
curl -s https://status.anthropic.com/api/v2/status.json | jq '.status'
Diagnose
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// Probe each provider with a tiny request and report OK/FAIL per provider.
async function diagnoseProviders() {
  const results: Record<string, string> = {};
  try {
    const openai = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 10000 });
    await openai.invoke("ping");
    results.openai = "OK";
  } catch (e: any) {
    results.openai = `FAIL: ${e.message.slice(0, 100)}`;
  }
  try {
    const anthropic = new ChatAnthropic({ model: "claude-sonnet-4-20250514" });
    await anthropic.invoke("ping");
    results.anthropic = "OK";
  } catch (e: any) {
    results.anthropic = `FAIL: ${e.message.slice(0, 100)}`;
  }
  console.table(results);
  return results;
}
Mitigate
// Enable fallback — switch to healthy provider
const primary = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxRetries: 1,
  timeout: 5000,
});
const fallback = new ChatAnthropic({
  model: "claude-sonnet-4-20250514",
  maxRetries: 1,
});
const resilientModel = primary.withFallbacks({
  fallbacks: [fallback],
});
// All chains using resilientModel auto-failover
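The wrapped model drops into existing chains without other changes. A minimal usage sketch, assuming the standard @langchain/core prompt and output-parser packages (the prompt text is illustrative):
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// resilientModel is used exactly like a plain model; only the model binding changes.
const prompt = ChatPromptTemplate.fromTemplate("Summarize for the status page: {input}");
const chain = prompt.pipe(resilientModel).pipe(new StringOutputParser());

const update = await chain.invoke({ input: "LLM provider errors since 14:02 UTC" });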
Recover
- Monitor provider status page for resolution
- Verify primary provider works: await diagnoseProviders()
- Remove fallback config (or keep it for resilience)
- Document incident timeline for post-mortem
Runbook 2: High Error Rate
Detect
# Check LangSmith for error spike
# https://smith.langchain.com/o/YOUR_ORG/projects/YOUR_PROJECT/runs?filter=error:true
# Check application logs
grep -ci "error" /var/log/app/langchain.log           # total error count
grep -i "error" /var/log/app/langchain.log | tail -5  # most recent error lines
Diagnose
// Common error patterns
const ERROR_CAUSES: Record<string, string> = {
"RateLimitError": "API quota exceeded -> reduce concurrency",
"AuthenticationError": "API key invalid -> check secrets",
"Timeout": "Provider slow -> increase timeout",
"OutputParserException": "LLM output format changed -> check prompts",
"ValidationError": "Schema mismatch -> update Zod schemas",
"ContextLengthExceeded": "Input too long -> truncate or chunk",
};
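The table can double as a quick triage helper. A minimal sketch (classifyError is illustrative, not part of LangChain) that matches a thrown error against the causes above:
function classifyError(err: unknown): string {
  const text = err instanceof Error ? `${err.name}: ${err.message}` : String(err);
  // Loose substring match against the known patterns above.
  for (const [pattern, cause] of Object.entries(ERROR_CAUSES)) {
    if (text.includes(pattern)) return cause;
  }
  return "Unknown -> inspect the failing run's trace in LangSmith";
}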
Mitigate
// 1. Reduce load
// Lower maxConcurrency on batch operations
// 2. Enable caching for repeated queries
const cache = new Map();
async function withCache(chain: any, input: any) {
  const key = JSON.stringify(input);
  if (cache.has(key)) return cache.get(key);
  const result = await chain.invoke(input);
  cache.set(key, result);
  return result;
}
// 3. Enable fallback model
const model = primary.withFallbacks({ fallbacks: [fallback] });
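Step 1 above is only a comment; as a concrete sketch, assuming the runnable exposes the standard .batch() method and honors maxConcurrency in the call config:
// Cap parallel provider calls while the error rate is elevated.
const inputs = ["summarize doc A", "summarize doc B", "summarize doc C"];
const outputs = await model.batch(inputs, { maxConcurrency: 2 });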
Runbook 3: Latency Spike
Detect
# Prometheus query
histogram_quantile(0.95, rate(langchain_llm_latency_seconds_bucket[5m])) > 5
Diagnose
// Measure per-component latency
const tracer = new MetricsCallback();
await chain.invoke({ input: "test" }, { callbacks: [tracer] });
console.table(tracer.getReport());
// Check: is it the LLM, retriever, or tool that's slow?
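MetricsCallback is not a LangChain export; it stands in for a custom callback handler that times each component. A minimal sketch of such a handler, assuming @langchain/core's BaseCallbackHandler:
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class MetricsCallback extends BaseCallbackHandler {
  name = "metrics_callback";
  private starts = new Map<string, { kind: string; t0: number }>();
  private durations: Array<{ kind: string; ms: number }> = [];

  handleLLMStart(_llm: any, _prompts: string[], runId: string) {
    this.starts.set(runId, { kind: "llm", t0: Date.now() });
  }
  handleLLMEnd(_output: any, runId: string) {
    this.finish(runId);
  }
  handleRetrieverStart(_retriever: any, _query: string, runId: string) {
    this.starts.set(runId, { kind: "retriever", t0: Date.now() });
  }
  handleRetrieverEnd(_docs: any, runId: string) {
    this.finish(runId);
  }
  handleToolStart(_tool: any, _input: string, runId: string) {
    this.starts.set(runId, { kind: "tool", t0: Date.now() });
  }
  handleToolEnd(_output: any, runId: string) {
    this.finish(runId);
  }

  private finish(runId: string) {
    const start = this.starts.get(runId);
    if (start) this.durations.push({ kind: start.kind, ms: Date.now() - start.t0 });
  }

  // Worst observed duration per component type; enough to see which
  // stage (LLM, retriever, tool) is responsible for the spike.
  getReport(): Record<string, number> {
    const worst: Record<string, number> = {};
    for (const d of this.durations) worst[d.kind] = Math.max(worst[d.kind] ?? 0, d.ms);
    return worst;
  }
}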
Mitigate
- Switch to a faster model: gpt-4o-mini (200ms TTFT) vs gpt-4o (400ms)
- Enable streaming to reduce perceived latency (see the sketch after this list)
- Enable caching for repeated queries
- Reduce context length (shorter prompts)
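A minimal streaming sketch, reusing the ChatOpenAI instance from Runbook 1 (the prompt is illustrative). Total generation time is unchanged, but the first tokens reach the user much sooner:
const stream = await primary.stream("Summarize today's incident timeline");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}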
Runbook 4: Cost Overrun
Detect
# Check OpenAI usage dashboard
# https://platform.openai.com/usage
Mitigate
// 1. Emergency model downgrade
// gpt-4o ($2.50/1M) -> gpt-4o-mini ($0.15/1M) = 17x cheaper
// 2. Enable budget enforcement
const budget = new BudgetEnforcer(50.0); // $50 daily limit
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  callbacks: [budget],
});
// 3. Enable aggressive caching
// (see langchain-cost-tuning skill)
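BudgetEnforcer is not a LangChain class; it stands in for a custom callback handler that accumulates estimated spend from reported token usage and fails fast once the daily limit is hit. A minimal sketch, assuming OpenAI-style tokenUsage in llmOutput and gpt-4o-mini prices ($0.15/1M input, $0.60/1M output); whether a throw here aborts the run depends on how callbacks are wired, so treat it as illustrative:
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class BudgetEnforcer extends BaseCallbackHandler {
  name = "budget_enforcer";
  private spentUsd = 0;

  constructor(private dailyLimitUsd: number) {
    super();
  }

  handleLLMEnd(output: any) {
    const usage = output.llmOutput?.tokenUsage ?? {};
    const promptTokens = usage.promptTokens ?? 0;
    const completionTokens = usage.completionTokens ?? 0;
    // gpt-4o-mini list prices, USD per 1M tokens (illustrative).
    this.spentUsd += (promptTokens * 0.15 + completionTokens * 0.6) / 1_000_000;
    if (this.spentUsd > this.dailyLimitUsd) {
      throw new Error(`Daily LLM budget of $${this.dailyLimitUsd} exceeded`);
    }
  }
}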
Runbook 5: Memory/OOM Issues
Detect
# Check process memory
ps aux --sort=-%mem | head -5
# Node.js heap stats
node -e "console.log(process.memoryUsage())"
Mitigate
- Clear caches: reset in-memory caches (a size-cap sketch follows this list)
- Reduce batch sizes: lower maxConcurrency
- Use streaming instead of accumulating full responses
- Restart pods: kubectl rollout restart deployment/langchain-api
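A small sketch tying back to the Runbook 2 cache: cap its size so the incident-time cache cannot itself become the OOM source (the 1000-entry limit is arbitrary):
const MAX_CACHE_ENTRIES = 1000;
function cacheSetWithEviction(key: string, value: unknown) {
  if (cache.size >= MAX_CACHE_ENTRIES) {
    // Map preserves insertion order, so the first key is the oldest entry.
    const oldest = cache.keys().next().value;
    if (oldest !== undefined) cache.delete(oldest);
  }
  cache.set(key, value);
}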
Incident Response Checklist
During Incident
- Acknowledge in incident channel
- Classify severity (SEV1-4)
- Check provider status pages
- Run diagnostic script
- Apply mitigation (fallback/cache/throttle)
- Communicate status to stakeholders
- Document timeline
Post-Incident
- Verify full recovery
- Schedule post-mortem (within 48h)
- Write incident report
- Create follow-up tickets
- Update monitoring/alerting rules
- Update this runbook if needed
Next Steps
Use langchain-debug-bundle for detailed evidence collection during incidents.