databricks-observability
Set up comprehensive observability for Databricks with metrics, traces, and alerts. Use when implementing monitoring for Databricks jobs, setting up dashboards, or configuring alerting for pipeline health. Trigger with phrases like "databricks monitoring", "databricks metrics", "databricks observability", "monitor databricks", "databricks alerts", "databricks logging".
Install
mkdir -p .claude/skills/databricks-observability && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2744" && unzip -o skill.zip -d .claude/skills/databricks-observability && rm skill.zip
Installs to .claude/skills/databricks-observability
About this skill
Databricks Observability
Overview
Monitor Databricks job runs, cluster utilization, query performance, and costs using system tables and the Databricks SDK. Databricks exposes observability data through system tables in the system catalog (audit logs, billing, compute, query history) and real-time cluster metrics (the Ganglia UI on Databricks Runtime 12.2 and below; the built-in cluster metrics UI on newer runtimes).
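Every query in this skill can also be run programmatically. A minimal sketch using the SDK's Statement Execution API, assuming the databricks-sdk package is installed and authenticated; the warehouse ID is a placeholder:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # auth from env vars or ~/.databrickscfg

# Run a system-table query on a SQL warehouse and print the rows
resp = w.statement_execution.execute_statement(
    warehouse_id="<your-sql-warehouse-id>",  # placeholder: replace with a real ID
    statement="SELECT * FROM system.billing.usage LIMIT 5",
    wait_timeout="30s",
)
for row in resp.result.data_array or []:
    print(row)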
Prerequisites
- Databricks Premium or Enterprise with Unity Catalog enabled
- Access to the system.billing, system.compute, and system.access schemas in the system catalog (enablement sketch below)
- SQL warehouse or cluster for running monitoring queries
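Most system schemas ship disabled and must be enabled once per metastore by an account admin. A hedged sketch using the SDK's system schemas API; the schema list is an assumption matching the tables used below:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
metastore_id = w.metastores.current().metastore_id

# Enable the system schemas this skill queries; schemas that are already
# enabled (billing is on by default) raise an error we simply report.
for schema in ("compute", "access", "query", "lakeflow"):
    try:
        w.system_schemas.enable(metastore_id=metastore_id, schema_name=schema)
    except Exception as e:
        print(f"{schema}: {e}")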
Instructions
Step 1: Monitor Job Health via System Tables
-- Failed jobs in the last 24 hours with termination details
SELECT job_id, run_name, result_state,
       period_start_time, period_end_time,
       TIMESTAMPDIFF(MINUTE, period_start_time, period_end_time) AS duration_min,
       termination_code
FROM system.lakeflow.job_run_timeline
WHERE result_state = 'FAILED'
  AND period_start_time > current_timestamp() - INTERVAL 24 HOURS
ORDER BY period_start_time DESC;
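Because system tables can lag (see Error Handling below), a near-real-time check can poll the Jobs API directly. A sketch using the SDK; the 25-run window is an illustrative choice:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import RunResultState

w = WorkspaceClient()

# Scan the most recent completed runs and report failures immediately
for run in w.jobs.list_runs(completed_only=True, limit=25):
    result = run.state.result_state if run.state else None
    if result == RunResultState.FAILED:
        print(f"FAILED run {run.run_id}: {run.run_name} - {run.state.state_message}")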
Step 2: Track Cluster Utilization and Costs
-- DBU consumption by cluster over the last 7 days
SELECT u.usage_metadata.cluster_id AS cluster_id,
       u.sku_name,
       SUM(u.usage_quantity) AS total_dbus,
       SUM(u.usage_quantity * lp.pricing.default) AS estimated_cost_usd
FROM system.billing.usage u
-- usage carries no price column; join list_prices for the current list price
JOIN system.billing.list_prices lp
  ON u.sku_name = lp.sku_name AND lp.price_end_time IS NULL
WHERE u.usage_date >= current_date() - INTERVAL 7 DAYS
  AND u.usage_metadata.cluster_id IS NOT NULL
GROUP BY u.usage_metadata.cluster_id, u.sku_name
ORDER BY estimated_cost_usd DESC
LIMIT 20;
Step 3: Monitor SQL Warehouse Performance
-- Slow queries (>30s) on SQL warehouses
SELECT compute.warehouse_id, statement_id, executed_by,
       total_duration_ms / 1000 AS duration_sec, -- 1000: 1 second in ms
       produced_rows,
       read_bytes / (1024 * 1024) AS read_mb
FROM system.query.history
WHERE total_duration_ms > 30000 -- 30000: 30 seconds in ms
  AND start_time > current_timestamp() - INTERVAL 24 HOURS
ORDER BY total_duration_ms DESC
LIMIT 50;
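The query reports warehouse IDs; to show human-readable names in a report, a small sketch that builds an ID-to-name lookup with the SDK (the lookup key is a placeholder):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Map warehouse IDs (as reported by system.query.history) to display names
warehouse_names = {wh.id: wh.name for wh in w.warehouses.list()}
print(warehouse_names.get("<warehouse-id-from-query>", "unknown"))  # placeholder ID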
Step 4: Set Up Alerts with Databricks SQL Alerts
-- Create alert: notify if any job fails more than 3 times in an hour
-- In Databricks SQL > Alerts > New Alert:
-- Query:
SELECT job_id, COUNT(*) AS failure_count
FROM system.lakeflow.job_run_timeline
WHERE result_state = 'FAILED'
  AND period_start_time > current_timestamp() - INTERVAL 1 HOUR
GROUP BY job_id;
-- Trigger when: failure_count > 3
-- Notification: Slack webhook or email
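If you prefer code over the Alerts UI, a hedged sketch that runs a similar check on a schedule and posts to a Slack incoming webhook; the webhook URL and warehouse ID are placeholders:

import requests
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

resp = w.statement_execution.execute_statement(
    warehouse_id="<your-sql-warehouse-id>",  # placeholder
    statement="""
        SELECT COUNT(*) FROM system.lakeflow.job_run_timeline
        WHERE result_state = 'FAILED'
          AND period_start_time > current_timestamp() - INTERVAL 1 HOUR
    """,
    wait_timeout="30s",
)
failures = int(resp.result.data_array[0][0])
if failures > 3:
    requests.post(
        "https://hooks.slack.com/services/<placeholder>",  # your webhook URL
        json={"text": f"{failures} Databricks job failures in the last hour"},
    )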
Step 5: Export Metrics to External Systems
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Export cluster metrics to Prometheus via pushgateway
for cluster in w.clusters.list():
    # cluster.state is an enum, so compare its string value
    if cluster.state and cluster.state.value == 'RUNNING':
        events = w.clusters.events(cluster.cluster_id, limit=10)  # recent events, e.g. resizes
        # Push utilization metrics to your monitoring stack
        push_metric('databricks_cluster_state', 1,
                    labels={'cluster': cluster.cluster_name, 'state': cluster.state.value})
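push_metric above is not part of the SDK; one way to implement it, assuming the prometheus_client package and a Pushgateway reachable at localhost:9091:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

def push_metric(name: str, value: float, labels: dict) -> None:
    # Build a one-off registry so each push carries only this metric
    registry = CollectorRegistry()
    gauge = Gauge(name, "Databricks exported metric",
                  labelnames=list(labels), registry=registry)
    gauge.labels(**labels).set(value)
    push_to_gateway("localhost:9091", job="databricks_exporter", registry=registry)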
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| System tables empty | Unity Catalog not enabled | Enable Unity Catalog for the workspace |
| Query history missing | Serverless warehouse not tracked | Use classic SQL warehouse or check retention |
| Billing data delayed | System table lag (up to 24h) | Use for trend analysis, not real-time alerting |
| Cluster metrics gaps | Cluster was terminated | Check terminated cluster events in audit log |
Examples
Daily Health Dashboard Query
-- Single-pane health summary for daily standups
SELECT
  COUNT(CASE WHEN result_state = 'SUCCEEDED' THEN 1 END) AS succeeded,
  COUNT(CASE WHEN result_state = 'FAILED' THEN 1 END) AS failed,
  COUNT(CASE WHEN result_state = 'TIMED_OUT' THEN 1 END) AS timed_out,
  ROUND(100.0 * COUNT(CASE WHEN result_state = 'SUCCEEDED' THEN 1 END) / COUNT(*), 1) AS success_rate_pct,
  ROUND(AVG(TIMESTAMPDIFF(MINUTE, period_start_time, period_end_time)), 1) AS avg_duration_min
FROM system.lakeflow.job_run_timeline
WHERE period_start_time > current_timestamp() - INTERVAL 24 HOURS;
Cost-per-Job Breakdown
SELECT j.name AS job_name,
       COUNT(DISTINCT b.usage_metadata.job_run_id) AS run_count,
       ROUND(SUM(b.usage_quantity), 1) AS total_dbus,
       ROUND(SUM(b.usage_quantity * lp.pricing.default), 2) AS total_cost_usd
FROM system.billing.usage b
JOIN system.billing.list_prices lp
  ON b.sku_name = lp.sku_name AND lp.price_end_time IS NULL
JOIN (SELECT DISTINCT workspace_id, job_id, name
      FROM system.lakeflow.jobs) j -- deduplicate: jobs is a slowly changing dimension
  ON b.usage_metadata.job_id = j.job_id AND b.workspace_id = j.workspace_id
WHERE b.usage_date >= current_date() - INTERVAL 7 DAYS
GROUP BY j.name
ORDER BY total_cost_usd DESC
LIMIT 15;
Output
- Job health dashboard showing success/failure rates over time
- Top cost drivers ranked by DBU consumption
- Slow query report identifying warehouses needing right-sizing
- SQL alerts for automated failure notifications
- External metric export pipeline for Prometheus/Grafana integration