databricks-debug-bundle

Collect Databricks debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for Databricks problems. Trigger with phrases like "databricks debug", "databricks support bundle", "collect databricks logs", "databricks diagnostic".

Install

mkdir -p .claude/skills/databricks-debug-bundle && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6356" && unzip -o skill.zip -d .claude/skills/databricks-debug-bundle && rm skill.zip

Installs to .claude/skills/databricks-debug-bundle
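After installing, a quick listing confirms the files landed (exact file names depend on what the archive ships):

ls .claude/skills/databricks-debug-bundle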

About this skill

Databricks Debug Bundle

Current State

!databricks --version 2>/dev/null || echo 'CLI not installed'
!python3 -c "import databricks.sdk; print(f'SDK {databricks.sdk.__version__}')" 2>/dev/null || echo 'SDK not installed'

Overview

Collect all diagnostic information needed for Databricks support tickets: environment info, cluster state, cluster events, job run details, Spark driver logs, and Delta table history. Produces a redacted tar.gz bundle safe to share with support.

Prerequisites

  • Databricks CLI installed and configured (a quick pre-flight check follows this list)
  • Access to cluster logs (admin or cluster owner)
  • Permission to access job run details
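
A minimal pre-flight sketch, reusing the same auth probe as Step 2 (assumes the Databricks CLI is on PATH):

# Fail early if the CLI is missing or authentication is broken
command -v databricks >/dev/null || { echo "Databricks CLI not found"; exit 1; }
databricks current-user me --output json >/dev/null 2>&1 \
    || { echo "Auth failed: run 'databricks configure' or set DATABRICKS_HOST/DATABRICKS_TOKEN"; exit 1; }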

Instructions

Step 1: Create Debug Collection Script

#!/bin/bash
set -euo pipefail
# databricks-debug-bundle.sh [cluster_id] [run_id] [table_name]

BUNDLE_DIR="databricks-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BUNDLE_DIR"

CLUSTER_ID="${1:-}"
RUN_ID="${2:-}"
TABLE_NAME="${3:-}"

echo "=== Databricks Debug Bundle ===" | tee "$BUNDLE_DIR/summary.txt"
echo "Generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$BUNDLE_DIR/summary.txt"
echo "Workspace: ${DATABRICKS_HOST:-unset}" >> "$BUNDLE_DIR/summary.txt"

Step 2: Collect Environment Info

{
    echo ""
    echo "--- Environment ---"
    echo "CLI: $(databricks --version 2>&1)"
    echo "SDK: $(pip show databricks-sdk 2>/dev/null | grep Version || echo 'not installed')"
    echo "Python: $(python3 --version 2>&1)"
    echo "OS: $(uname -srm)"
    echo ""
    echo "--- Current User ---"
    databricks current-user me --output json 2>&1 | jq '{userName, active}' || echo "Auth failed"
} >> "$BUNDLE_DIR/summary.txt"

Step 3: Collect Cluster Information

if [ -n "$CLUSTER_ID" ]; then
    echo "" >> "$BUNDLE_DIR/summary.txt"
    echo "--- Cluster: $CLUSTER_ID ---" >> "$BUNDLE_DIR/summary.txt"

    # Full cluster config
    databricks clusters get --cluster-id "$CLUSTER_ID" --output json \
        > "$BUNDLE_DIR/cluster_config.json" 2>&1

    # Key fields summary
    jq '{state, spark_version, node_type_id, num_workers,
         autotermination_minutes, termination_reason}' \
        "$BUNDLE_DIR/cluster_config.json" >> "$BUNDLE_DIR/summary.txt"

    # Recent cluster events (state changes, errors, resizing)
    databricks clusters events --cluster-id "$CLUSTER_ID" --limit 30 --output json \
        > "$BUNDLE_DIR/cluster_events.json" 2>&1

    # Extract event timeline
    jq -r '.events[]? | "\(.timestamp): \(.type) — \(.details // "no details")"' \
        "$BUNDLE_DIR/cluster_events.json" >> "$BUNDLE_DIR/summary.txt" 2>/dev/null
fi
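
If the timeline is noisy, a hedged jq filter can pull out failure-related events; the event type strings below are assumptions to verify against what actually appears in cluster_events.json:

# Optional: isolate error-ish events (type names are guesses; check your output first)
jq -r '.events[]? | select(.type | test("TERMINAT|FAIL|ERROR"; "i"))
       | "\(.timestamp): \(.type)"' \
    "$BUNDLE_DIR/cluster_events.json" >> "$BUNDLE_DIR/summary.txt" 2>/dev/null || true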

Step 4: Collect Job Run Information

if [ -n "$RUN_ID" ]; then
    echo "" >> "$BUNDLE_DIR/summary.txt"
    echo "--- Run: $RUN_ID ---" >> "$BUNDLE_DIR/summary.txt"

    # Full run details
    databricks runs get --run-id "$RUN_ID" --output json \
        > "$BUNDLE_DIR/run_details.json" 2>&1

    # Run state summary
    jq '{state: .state, start_time, end_time, run_duration}' \
        "$BUNDLE_DIR/run_details.json" >> "$BUNDLE_DIR/summary.txt"

    # Task-level breakdown
    jq -r '.tasks[]? | "  Task \(.task_key): \(.state.result_state // "RUNNING") — \(.state.state_message // "ok")"' \
        "$BUNDLE_DIR/run_details.json" >> "$BUNDLE_DIR/summary.txt"

    # Run output (error messages, stdout)
    databricks runs get-output --run-id "$RUN_ID" --output json \
        > "$BUNDLE_DIR/run_output.json" 2>&1

    jq '{error, error_trace: (.error_trace // "" | .[0:2000])}' \
        "$BUNDLE_DIR/run_output.json" >> "$BUNDLE_DIR/summary.txt" 2>/dev/null
fi
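
For multi-task jobs, run output is attached to the individual task runs rather than the parent run; a sketch that loops over the per-task run IDs captured in run_details.json (field name per the Jobs API):

# Optional: fetch output for each task run in a multi-task job
for task_run in $(jq -r '.tasks[]?.run_id // empty' "$BUNDLE_DIR/run_details.json"); do
    databricks runs get-output --run-id "$task_run" --output json \
        > "$BUNDLE_DIR/task_output_${task_run}.json" 2>&1 || true
done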

Step 5: Collect Spark Driver Logs

if [ -n "$CLUSTER_ID" ]; then
    echo "" >> "$BUNDLE_DIR/summary.txt"
    echo "--- Spark Driver Logs (last 500 lines) ---" >> "$BUNDLE_DIR/summary.txt"

    # Unquoted heredoc delimiter so the shell expands ${CLUSTER_ID} below
    python3 << PYEOF > "$BUNDLE_DIR/driver_logs.txt" 2>&1
import base64
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
try:
    # dbfs.read returns base64-encoded data (a single read is capped at 1 MB)
    resp = w.dbfs.read("/cluster-logs/${CLUSTER_ID}/driver/log4j-active.log")
    # Take last 500 lines
    lines = base64.b64decode(resp.data).decode(errors="replace").splitlines()[-500:]
    print("\n".join(lines))
except Exception as e:
    print(f"Could not fetch driver logs: {e}")
    print("Tip: Enable cluster log delivery in cluster config for persistent logs")
PYEOF
fi
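
If that exact file name is missing (rolled logs, stdout/stderr variants), listing the delivery directory first shows what is actually there; a sketch using the SDK's DBFS list call, the same assumed dbfs:/cluster-logs destination, and meant to sit inside the same CLUSTER_ID guard:

    python3 << PYEOF >> "$BUNDLE_DIR/driver_logs.txt" 2>&1
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
try:
    # List whatever log files were delivered for this cluster
    for f in w.dbfs.list("/cluster-logs/${CLUSTER_ID}/driver"):
        print(f"{f.path} ({f.file_size} bytes)")
except Exception as e:
    print(f"Could not list driver log directory: {e}")
PYEOF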

Step 6: Collect Delta Table Diagnostics

if [ -n "$TABLE_NAME" ]; then
    echo "" >> "$BUNDLE_DIR/summary.txt"
    echo "--- Delta Table: $TABLE_NAME ---" >> "$BUNDLE_DIR/summary.txt"

    python3 << PYEOF > "$BUNDLE_DIR/delta_diagnostics.txt" 2>&1
from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()

print("=== Table Details ===")
spark.sql("DESCRIBE DETAIL ${TABLE_NAME}").show(truncate=False)

print("\n=== Recent History (last 20 operations) ===")
spark.sql("DESCRIBE HISTORY ${TABLE_NAME} LIMIT 20").show(truncate=False)

print("\n=== Schema ===")
spark.sql("DESCRIBE ${TABLE_NAME}").show(truncate=False)

print("\n=== File Stats ===")
detail = spark.sql("DESCRIBE DETAIL ${TABLE_NAME}").first()
print(f"Files: {detail.numFiles}, Size: {detail.sizeInBytes / 1024 / 1024:.1f} MB")
PYEOF
fi
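
If databricks-connect is unavailable, a rough fallback is the SQL Statement Execution API through the CLI's generic api command; WAREHOUSE_ID is a placeholder you must supply, and the payload shape is assumed from the public statement-execution endpoint:

# Fallback: run DESCRIBE DETAIL via a SQL warehouse instead of databricks-connect
databricks api post /api/2.0/sql/statements --json '{
  "warehouse_id": "'"${WAREHOUSE_ID:-}"'",
  "statement": "DESCRIBE DETAIL '"${TABLE_NAME}"'",
  "wait_timeout": "30s"
}' > "$BUNDLE_DIR/delta_detail_api.json" 2>&1 || true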

Step 7: Package Bundle (Redacted)

# Redact sensitive data from config snapshot
echo "" >> "$BUNDLE_DIR/summary.txt"
echo "--- Config (redacted) ---" >> "$BUNDLE_DIR/summary.txt"
if [ -f ~/.databrickscfg ]; then
    # One pass with two expressions; avoids sed -i, which is not portable to macOS
    sed -e 's/token = .*/token = ***REDACTED***/' \
        -e 's/client_secret = .*/client_secret = ***REDACTED***/' \
        ~/.databrickscfg > "$BUNDLE_DIR/config-redacted.txt"
fi
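
Before packaging, a quick heuristic scan for tokens the redaction may have missed; classic Databricks personal access tokens start with dapi, but this pattern is not exhaustive:

# Heuristic leak scan: list any files that still look like they contain a PAT
grep -rEl 'dapi[a-f0-9]{16,}' "$BUNDLE_DIR" 2>/dev/null \
    && echo "WARNING: possible tokens in the files above; redact before sharing" || true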

# Network connectivity test
echo "--- Network ---" >> "$BUNDLE_DIR/summary.txt"
echo -n "API reachable: " >> "$BUNDLE_DIR/summary.txt"
# Defaults and || true keep set -eu from aborting when host/token are unset or unreachable
curl -s -o /dev/null -w "%{http_code}" \
    "${DATABRICKS_HOST:-}/api/2.0/clusters/list" \
    -H "Authorization: Bearer ${DATABRICKS_TOKEN:-}" >> "$BUNDLE_DIR/summary.txt" || true
echo "" >> "$BUNDLE_DIR/summary.txt"

# Create archive
tar -czf "$BUNDLE_DIR.tar.gz" "$BUNDLE_DIR"
rm -rf "$BUNDLE_DIR"

echo ""
echo "Bundle created: $BUNDLE_DIR.tar.gz"
echo "Contents: summary.txt, cluster_config.json, cluster_events.json,"
echo "  run_details.json, run_output.json, driver_logs.txt,"
echo "  delta_diagnostics.txt, config-redacted.txt"

Output

  • databricks-debug-YYYYMMDD-HHMMSS.tar.gz containing:
    • summary.txt — Human-readable diagnostic summary
    • cluster_config.json — Full cluster configuration
    • cluster_events.json — State changes, errors, resizing events
    • run_details.json — Job run with task-level breakdown
    • run_output.json — Stdout/stderr and error traces
    • driver_logs.txt — Last 500 lines of Spark driver log
    • delta_diagnostics.txt — Table details, history, schema
    • config-redacted.txt — CLI config with secrets removed
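
To spot-check the archive before attaching it to a ticket:

tar -tzf databricks-debug-*.tar.gz    # list contents
tar -xzf databricks-debug-*.tar.gz && less databricks-debug-*/summary.txt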

Data Sensitivity

Item            Included                Notes
Tokens/secrets  NEVER                   Redacted with ***REDACTED***
PII in logs     Review before sharing   Scan driver_logs.txt manually
Cluster IDs     Yes                     Safe to share with support
Error traces    Yes                     Check for embedded connection strings
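
The "review before sharing" step can be partly automated; a rough grep for email-shaped strings (run it before Step 7 deletes the working directory, and treat it as a hint, not a guarantee):

grep -rEn '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' \
    "$BUNDLE_DIR/driver_logs.txt" 2>/dev/null | head -20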

Examples

Usage

# Environment only
bash databricks-debug-bundle.sh

# With cluster diagnostics
bash databricks-debug-bundle.sh 0123-456789-abcde

# With cluster + job run
bash databricks-debug-bundle.sh 0123-456789-abcde 12345

# Full diagnostics including Delta table
bash databricks-debug-bundle.sh 0123-456789-abcde 12345 catalog.schema.table

Submit to Support

  1. Generate bundle: bash databricks-debug-bundle.sh [args]
  2. Review summary.txt for sensitive data
  3. Open ticket at help.databricks.com
  4. Attach the .tar.gz bundle
  5. Include the workspace ID (found in the workspace URL as adb-<workspace-id>; a hedged extraction one-liner follows)
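
Assuming an Azure-style adb-<id> hostname as in step 5 (other clouds encode the workspace ID differently):

echo "${DATABRICKS_HOST:-}" | grep -oE 'adb-[0-9]+' \
    || echo "Workspace ID not in hostname; check the workspace URL or admin settings"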

Next Steps

For rate limit issues, see databricks-rate-limits.
