databricks-incident-runbook


Execute Databricks incident response procedures with triage, mitigation, and postmortem. Use when responding to Databricks-related outages, investigating job failures, or running post-incident reviews for pipeline failures. Trigger with phrases like "databricks incident", "databricks outage", "databricks down", "databricks on-call", "databricks emergency", "job failed".

Install

mkdir -p .claude/skills/databricks-incident-runbook && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9446" && unzip -o skill.zip -d .claude/skills/databricks-incident-runbook && rm skill.zip

Installs to .claude/skills/databricks-incident-runbook

About this skill

Databricks Incident Runbook

Overview

Rapid incident response for Databricks: triage script, decision tree, immediate actions by error type, communication templates, evidence collection, and postmortem template. Designed for on-call engineers to follow during live incidents.

Severity Levels

| Level | Definition | Response Time | Examples |
|-------|------------|---------------|----------|
| P1 | Production pipeline down | < 15 min | Critical ETL failed, data not updating |
| P2 | Degraded performance | < 1 hour | Slow queries, partial failures, stale data |
| P3 | Non-critical issues | < 4 hours | Dev cluster issues, non-critical job delays |
| P4 | No user impact | Next business day | Monitoring gaps, cleanup needed |
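The response times in the table can be turned into a small helper for alert deadlines. This is a hypothetical sketch (the function name and the 9999 stand-in for "next business day" are assumptions, not part of the runbook):

```shell
#!/bin/bash
# Hypothetical helper: map a severity level to its response-time SLA in
# minutes, per the table above. 9999 stands in for "next business day".
sla_minutes() {
  case "$1" in
    P1) echo 15 ;;
    P2) echo 60 ;;
    P3) echo 240 ;;
    P4) echo 9999 ;;
    *)  echo "unknown severity: $1" >&2; return 1 ;;
  esac
}

sla_minutes P2   # prints 60
```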

Instructions

Step 1: Quick Triage (Run First)

#!/bin/bash
set -euo pipefail
echo "=== DATABRICKS TRIAGE $(date -u +%H:%M:%S\ UTC) ==="

# 1. Is Databricks itself down?
echo "--- Platform Status ---"
curl -s https://status.databricks.com/api/v2/status.json | \
  jq -r '.status.description // "UNKNOWN"'

# 2. Can we reach the workspace?
echo "--- Workspace ---"
if databricks current-user me --output json 2>/dev/null | jq -e -r .userName; then
    echo "API: CONNECTED"
else
    echo "API: UNREACHABLE — check VPN/firewall/token"
fi

# 3. Recent failures
echo "--- Failed Runs (last 1h) ---"
databricks runs list --limit 20 --output json 2>/dev/null | \
  jq -r '.runs[]? | select(.state.result_state == "FAILED") |
    "\(.run_id): \(.run_name // "unnamed") — \(.state.state_message // "no message")"' || \
  echo "Could not fetch runs"

# 4. Cluster health
echo "--- Clusters in ERROR state ---"
databricks clusters list --output json 2>/dev/null | \
  jq -r '.[]? | select(.state == "ERROR") |
    "\(.cluster_id): \(.cluster_name) — \(.termination_reason.code // "unknown")"' || \
  echo "Could not fetch clusters"

Step 2: Decision Tree

Is the issue affecting production data pipelines?
├─ YES: Is it a single job or multiple?
│   ├─ SINGLE JOB
│   │   ├─ Cluster failed to start → Step 3a
│   │   ├─ Code/logic error → Step 3b
│   │   ├─ Data quality issue → Step 3c
│   │   └─ Permission error → Step 3d
│   │
│   └─ MULTIPLE JOBS → Likely infrastructure
│       ├─ Check platform status (status.databricks.com)
│       ├─ Check workspace quotas (Admin Console)
│       └─ Check network/VPN connectivity
│
└─ NO: Is it performance?
    ├─ Slow queries → Check query plan, warehouse sizing
    ├─ Slow cluster startup → Check instance availability
    └─ Data freshness → Check upstream dependencies

Step 3a: Cluster Failed to Start

CLUSTER_ID="your-cluster-id"

# Get termination reason
databricks clusters get --cluster-id "$CLUSTER_ID" | \
  jq '{state, termination_reason}'

# Check recent events
databricks clusters events --cluster-id "$CLUSTER_ID" --limit 10 | \
  jq -r '.events[] | "\(.timestamp): \(.type) — \(.details // "none")"'

# Common fixes:
# QUOTA_EXCEEDED → Terminate idle clusters
# CLOUD_PROVIDER_LAUNCH_FAILURE → Check instance availability in region
# DRIVER_UNREACHABLE → Network/security group issue

# Quick fix: restart
databricks clusters start --cluster-id $CLUSTER_ID
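The common fixes in the comments above can be wrapped in a lookup so the suggestion prints next to the termination code. A minimal sketch, mirroring only the three codes listed above (not an exhaustive list of termination codes):

```shell
#!/bin/bash
# Map a termination_reason.code (from `clusters get`) to the fix noted above.
suggest_fix() {
  case "$1" in
    QUOTA_EXCEEDED)                echo "Terminate idle clusters" ;;
    CLOUD_PROVIDER_LAUNCH_FAILURE) echo "Check instance availability in region" ;;
    DRIVER_UNREACHABLE)            echo "Check network/security group" ;;
    *)                             echo "See cluster events for $1" ;;
  esac
}

suggest_fix QUOTA_EXCEEDED   # prints "Terminate idle clusters"
```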

Step 3b: Code/Logic Error

RUN_ID="your-run-id"

# Get run details and error
databricks runs get --run-id "$RUN_ID" | jq '{
  state: .state,
  tasks: [.tasks[]? | {key: .task_key, result: .state.result_state, error: .state.state_message}]
}'

# Get task output for failed tasks
databricks runs get-output --run-id "$RUN_ID" | jq '{
  error: .error,
  trace: (.error_trace // "" | .[0:1000])
}'

# Repair failed tasks only (skip successful ones)
databricks runs repair --run-id "$RUN_ID" --rerun-tasks FAILED
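To pull out just the failed task keys from the run JSON, the same jq field paths can be used standalone. The sample document below is illustrative; the field names match the query above:

```shell
#!/bin/bash
# Extract failed task keys from a run JSON document (sample is illustrative).
RUN_JSON='{"tasks":[{"task_key":"extract","state":{"result_state":"SUCCESS"}},{"task_key":"load","state":{"result_state":"FAILED"}}]}'
echo "$RUN_JSON" | \
  jq -r '.tasks[] | select(.state.result_state == "FAILED") | .task_key'
```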

Step 3c: Data Quality Issue

-- Quick data sanity check
SELECT COUNT(*) AS total_rows,
       COUNT(DISTINCT id) AS unique_ids,
       SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts,
       MIN(created_at) AS oldest,
       MAX(created_at) AS newest
FROM prod_catalog.silver.orders
WHERE created_at > current_timestamp() - INTERVAL 1 DAY;

-- Check recent table changes
DESCRIBE HISTORY prod_catalog.silver.orders LIMIT 10;

-- Restore to previous version if corrupted
RESTORE TABLE prod_catalog.silver.orders TO VERSION AS OF 5;

Step 3d: Permission Error

# Check current user
databricks current-user me

# Check job permissions
databricks permissions get jobs --job-id "$JOB_ID"

# Fix permissions
databricks permissions update jobs --job-id "$JOB_ID" --json '{
  "access_control_list": [{
    "user_name": "[email protected]",
    "permission_level": "CAN_MANAGE_RUN"
  }]
}'

Step 4: Communication

Internal (Slack)

:red_circle: **P1 INCIDENT: [Brief Description]**

**Status:** INVESTIGATING
**Impact:** [What data/users are affected]
**Started:** [Time UTC]
**Current Action:** [What you're doing now]
**Next Update:** [+30 min]

**IC:** @[your-name]

External (Status Page)

**Data Pipeline Delay**
We are experiencing delays in data processing.
Dashboard data may be up to [X] hours stale.
Started: [Time] UTC
Status: Actively investigating
Next update: [Time] UTC

Step 5: Evidence Collection

#!/bin/bash
INCIDENT_ID=$1
RUN_ID=$2
CLUSTER_ID=${3:-}

mkdir -p "incident-$INCIDENT_ID"

# Collect everything
databricks runs get --run-id "$RUN_ID" --output json > "incident-$INCIDENT_ID/run.json" 2>&1
databricks runs get-output --run-id "$RUN_ID" --output json > "incident-$INCIDENT_ID/output.json" 2>&1

if [ -n "$CLUSTER_ID" ]; then
    databricks clusters get --cluster-id "$CLUSTER_ID" --output json > "incident-$INCIDENT_ID/cluster.json" 2>&1
    databricks clusters events --cluster-id "$CLUSTER_ID" --limit 50 --output json > "incident-$INCIDENT_ID/events.json" 2>&1
fi

tar -czf "incident-$INCIDENT_ID.tar.gz" "incident-$INCIDENT_ID"
echo "Evidence: incident-$INCIDENT_ID.tar.gz"

Step 6: Postmortem Template

## Incident: [Title]

**Date:** YYYY-MM-DD | **Duration:** Xh Ym | **Severity:** P[1-4]
**IC:** [Name]

### Summary
[1-2 sentences: what happened and what was the impact]

### Timeline (UTC)
| Time | Event |
|------|-------|
| HH:MM | Alert fired / issue detected |
| HH:MM | Investigation started |
| HH:MM | Root cause identified |
| HH:MM | Mitigation applied |
| HH:MM | Resolved |

### Root Cause
[Technical explanation]

### Impact
- Tables affected: [list]
- Data staleness: [hours]
- Users affected: [count/teams]

### Action Items
| Priority | Action | Owner | Due |
|----------|--------|-------|-----|
| P1 | [Preventive fix] | [Name] | [Date] |
| P2 | [Monitoring gap] | [Name] | [Date] |

Output

  • Issue triaged and severity assigned
  • Root cause identified via decision tree
  • Immediate remediation applied
  • Stakeholders notified with structured updates
  • Evidence collected for postmortem

Error Handling

| Issue | Cause | Solution |
|-------|-------|----------|
| Can't reach API | Token expired or VPN down | Re-auth: databricks auth login |
| runs repair fails | Run too old for repair | Create new run with same config |
| RESTORE TABLE fails | VACUUM already cleaned old versions | Restore from backup or replay pipeline |
| Cluster restart loops | Init script failing | Check cluster events for init script errors |
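The first row of the table can be triaged mechanically by pattern-matching the CLI's stderr. A hedged sketch: the error-string patterns below are assumptions for illustration, not official Databricks CLI error messages:

```shell
#!/bin/bash
# Hypothetical classifier: guess a likely cause from CLI stderr text.
# The matched substrings are illustrative assumptions, not official strings.
classify_error() {
  case "$1" in
    *401*|*token*)              echo "Token expired: run 'databricks auth login'" ;;
    *"timed out"*|*unreachable*) echo "Network/VPN down: check connectivity" ;;
    *)                          echo "Unclassified: inspect stderr manually" ;;
  esac
}

classify_error "Error: 401 Unauthorized"
```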

Examples

One-Line Health Checks

# Last 5 runs for a job
databricks runs list --job-id "$JID" --limit 5 --output json | \
  jq -r '.runs[] | "\(.state.result_state): \(.run_name)"'

# Quick cluster restart
databricks clusters restart --cluster-id "$CID" && echo "Restart initiated"

# Cancel all active runs for a job
databricks runs list --job-id "$JID" --active-only --output json | jq -r '.runs[].run_id' | \
  xargs -I{} databricks runs cancel --run-id {}

Resources

Next Steps

For data handling and compliance, see databricks-data-handling.
