databricks-prod-checklist
Execute Databricks production deployment checklist and rollback procedures. Use when deploying Databricks jobs to production, preparing for launch, or implementing go-live procedures. Trigger with phrases like "databricks production", "deploy databricks", "databricks go-live", "databricks launch checklist".
Install
mkdir -p .claude/skills/databricks-prod-checklist && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4771" && unzip -o skill.zip -d .claude/skills/databricks-prod-checklist && rm skill.zip
Installs to .claude/skills/databricks-prod-checklist
About this skill
Databricks Production Checklist
Overview
Complete checklist for deploying Databricks jobs and pipelines to production. Covers security hardening, infrastructure validation, code quality gates, job configuration, deployment commands, monitoring setup, and rollback procedures.
Prerequisites
- Staging environment tested and verified
- Production workspace access with service principal
- Unity Catalog configured with prod catalogs
- Monitoring and alerting ready (see databricks-observability)
Instructions
Step 1: Pre-Deployment Security
- Service principal configured for automated runs (not personal PAT)
- Secrets in Databricks Secret Scopes (not env vars or hardcoded)
- Token expiration set (max 90 days)
- Unity Catalog grants follow least privilege
- Cluster policies enforced for cost/security guardrails
- IP access lists configured in Admin Console
- Audit logging verified via system.access.audit
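To spot-check that audit events are actually landing, here is a minimal sketch using the Python SDK's statement execution API. It assumes the databricks-sdk package is installed and a SQL warehouse is available; the warehouse ID is a placeholder.
import time
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Pull a few recent audit events; an empty result after a day of normal
# activity suggests audit logging is not flowing.
resp = w.statement_execution.execute_statement(
    warehouse_id="<warehouse-id>",  # placeholder; substitute a real warehouse ID
    statement="""
        SELECT event_time, user_identity.email, action_name
        FROM system.access.audit
        WHERE event_time > current_timestamp() - INTERVAL 1 DAY
        LIMIT 10
    """,
)
print(resp.result.data_array if resp.result else "no rows returned")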
Step 2: Infrastructure Validation
- Instance pool created for fast cluster startup
- Node types validated for workload (compute-optimized for streaming, memory-optimized for ML)
- Autoscaling configured with sensible min/max workers
- Spot instances enabled for worker nodes (on-demand for driver)
- Auto-termination disabled for job clusters (they terminate on completion)
# Verify infrastructure
databricks clusters list-node-types --output json | jq '.[0:5] | .[].node_type_id'
databricks instance-pools list --output json | jq '.[] | {id: .instance_pool_id, name: .instance_pool_name}'
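These checks can also be scripted. A sketch with the Python SDK follows; the pool name and node type are placeholders for the values your job config actually uses.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Confirm the expected pool exists ("prod-etl-pool" is a placeholder name).
pools = {p.instance_pool_name: p.instance_pool_id for p in w.instance_pools.list()}
assert "prod-etl-pool" in pools, "expected instance pool is missing"

# Confirm the node type from the job config is offered in this workspace.
node_types = {nt.node_type_id for nt in w.clusters.list_node_types().node_types}
assert "i3.xlarge" in node_types, "node type not available in this workspace"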
Step 3: Code Quality Gates
- Unit tests passing locally (pytest tests/unit/)
- Integration tests passing on staging data
- No .collect() on large datasets
- No hardcoded credentials or paths
- Error handling covers all failure modes
- Delta Lake best practices: MERGE for upserts, OPTIMIZE scheduled
- Logging is production-appropriate (structured, no PII)
# Run tests and validate bundle
pytest tests/ -v --tb=short
databricks bundle validate -t prod
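The .collect() rule above is easy to enforce mechanically. A rough sketch of a textual scan follows; it will also flag comments and legitimate small-dataset uses, so treat hits as review prompts rather than hard failures.
import pathlib
import sys

# Flag .collect() calls anywhere under src/ (crude textual scan).
hits = [
    f"{path}:{lineno}: {line.strip()}"
    for path in pathlib.Path("src").rglob("*.py")
    for lineno, line in enumerate(path.read_text().splitlines(), start=1)
    if ".collect()" in line
]
if hits:
    print("\n".join(hits))
    sys.exit(1)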
Step 4: Job Configuration
# resources/prod_etl.yml
resources:
  jobs:
    prod_etl_pipeline:
      name: "prod-etl-pipeline"
      tags:
        environment: production
        team: data-engineering
        cost_center: analytics
      schedule:
        quartz_cron_expression: "0 0 6 * * ?"
        timezone_id: "America/New_York"
      email_notifications:
        on_failure: ["oncall@company.com"]
        on_success: ["data-team@company.com"]
      webhook_notifications:
        on_failure:
          - id: "slack-notification-destination-id"
      max_concurrent_runs: 1
      timeout_seconds: 14400  # 4 hours
      tasks:
        - task_key: bronze_ingest
          job_cluster_key: etl_cluster
          notebook_task:
            notebook_path: src/pipelines/bronze.py
          timeout_seconds: 3600
        - task_key: silver_transform
          depends_on: [{task_key: bronze_ingest}]
          job_cluster_key: etl_cluster
          notebook_task:
            notebook_path: src/pipelines/silver.py
        - task_key: gold_aggregate
          depends_on: [{task_key: silver_transform}]
          job_cluster_key: etl_cluster
          notebook_task:
            notebook_path: src/pipelines/gold.py
      job_clusters:
        - job_cluster_key: etl_cluster
          new_cluster:
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            autoscale:
              min_workers: 2
              max_workers: 8
            spark_conf:
              spark.sql.shuffle.partitions: "200"
              spark.databricks.delta.optimizeWrite.enabled: "true"
              spark.databricks.delta.autoCompact.enabled: "true"
            aws_attributes:
              availability: SPOT_WITH_FALLBACK
              first_on_demand: 1
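After deploying, it is worth asserting that the live job settings match what the bundle declares. A minimal sketch using the Python SDK; the job ID is a placeholder, which you can look up with databricks bundle summary -t prod.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

job = w.jobs.get(job_id=123456789)  # placeholder ID
settings = job.settings

# Assert the guardrails from the bundle config survived deployment.
assert settings.max_concurrent_runs == 1
assert settings.timeout_seconds == 14400
assert settings.tags.get("environment") == "production"
print(f"{settings.name}: configuration checks passed")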
Step 5: Deploy
# Pre-flight checks
echo "=== Pre-flight ==="
databricks bundle validate -t prod
databricks workspace list /Shared/.bundle/ 2>/dev/null || echo "First deploy"
databricks secrets list-scopes | grep prod
# Deploy
echo "=== Deploying ==="
databricks bundle deploy -t prod
# Verify deployment
databricks bundle summary -t prod
# Trigger verification run
echo "=== Verification ==="
RUN_ID=$(databricks bundle run prod_etl_pipeline -t prod --output json | jq -r '.run_id')
echo "Verification run: $RUN_ID"
# Wait and check result
databricks jobs get-run $RUN_ID --output json | jq '.state'
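A single get-run call only shows a snapshot. If the deploy script should block until the verification run finishes, a small polling loop with the Python SDK works; this is a sketch, and the poll interval is a judgment call.
import time
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

def wait_for_run(run_id: int, poll_seconds: int = 30) -> str:
    """Poll until the run reaches a terminal state, then return its result."""
    while True:
        run = w.jobs.get_run(run_id=run_id)
        if run.state.life_cycle_state.value in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return run.state.result_state.value if run.state.result_state else "UNKNOWN"
        time.sleep(poll_seconds)
The SDK also exposes built-in waiters on run operations, which may be preferable to hand-rolled polling.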
Step 6: Post-Deploy Monitoring
from databricks.sdk import WorkspaceClient
from datetime import datetime

w = WorkspaceClient()

def check_job_health(job_id: int) -> dict:
    """Post-deploy health check."""
    runs = list(w.jobs.list_runs(job_id=job_id, completed_only=True, limit=10))
    if not runs:
        return {"status": "NO_RUNS", "healthy": False}
    successful = sum(1 for r in runs if r.state.result_state.value == "SUCCESS")
    success_rate = successful / len(runs)
    durations = [
        (r.end_time - r.start_time) / 60000
        for r in runs if r.end_time and r.start_time
    ]
    avg_duration = sum(durations) / len(durations) if durations else 0
    return {
        "healthy": success_rate > 0.9 and runs[0].state.result_state.value == "SUCCESS",
        "success_rate": f"{success_rate:.0%}",
        "avg_duration_min": f"{avg_duration:.1f}",
        "last_run": runs[0].state.result_state.value,
        "last_run_time": datetime.fromtimestamp(runs[0].start_time / 1000).isoformat(),
    }
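To wire the health check into the deploy pipeline, call it right after the verification run and fail loudly; the job ID below is a placeholder.
health = check_job_health(job_id=123456789)  # placeholder ID
if not health["healthy"]:
    raise SystemExit(f"Post-deploy health check failed: {health}")
print(health)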
Step 7: Rollback Procedure
#!/bin/bash
set -euo pipefail
# rollback.sh <job_id>
JOB_ID=$1
echo "=== ROLLBACK: Job $JOB_ID ==="
# 1. Pause the schedule
echo "Pausing schedule..."
databricks jobs update --job-id $JOB_ID --json '{"settings": {"schedule": null}}'
# 2. Cancel any active runs
echo "Cancelling active runs..."
databricks runs list --job-id $JOB_ID --active-only --output json | \
jq -r '.runs[]?.run_id' | \
xargs -I {} databricks runs cancel --run-id {}
# 3. Redeploy previous bundle version
echo "Redeploying previous version..."
git checkout HEAD~1 -- resources/ src/
databricks bundle deploy -t prod
# 4. Restore schedule
echo "Re-enabling schedule..."
databricks jobs reset --job-id $JOB_ID --json-file resources/prod_etl.json
# 5. Trigger verification
echo "Running verification..."
databricks jobs run-now --job-id $JOB_ID
echo "=== ROLLBACK COMPLETE ==="
Output
- Pre-deployment checklist verified
- Production job deployed via Asset Bundles
- Verification run completed successfully
- Monitoring health check operational
- Rollback procedure documented and tested
Error Handling
| Alert | Condition | Severity | Action |
|---|---|---|---|
| Job Failed | result_state = FAILED | P1 | Page on-call, check get_run_output |
| Long Running | Duration > 2x rolling average | P2 | Investigate cluster sizing |
| Consecutive Failures | 3+ FAILED runs in a row (success rate below 70%) | P1 | Trigger rollback |
| Data Quality Failed | DLT expectations failed | P2 | Check source data quality |
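The consecutive-failures alert can be automated with the same SDK pattern as Step 6. A sketch follows; the subprocess call assumes the rollback.sh script from Step 7 is in the working directory.
import subprocess
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

def maybe_rollback(job_id: int, threshold: int = 3) -> None:
    """Invoke rollback.sh when the newest completed runs are all failures."""
    runs = list(w.jobs.list_runs(job_id=job_id, completed_only=True, limit=threshold))
    failed = [
        r for r in runs
        if r.state.result_state and r.state.result_state.value == "FAILED"
    ]
    if len(runs) == threshold and len(failed) == threshold:
        subprocess.run(["bash", "rollback.sh", str(job_id)], check=True)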
Examples
Production Health Dashboard
SELECT job_id,
       MAX(run_name) AS job_name,
       COUNT(DISTINCT run_id) AS total_runs,
       COUNT(DISTINCT CASE WHEN result_state = 'SUCCEEDED' THEN run_id END) AS successes,
       ROUND(SUM(TIMESTAMPDIFF(SECOND, period_start_time, period_end_time)) / 60.0
             / COUNT(DISTINCT run_id), 1) AS avg_minutes,
       MAX(period_start_time) AS last_run
FROM system.lakeflow.job_run_timeline
WHERE period_start_time > current_timestamp() - INTERVAL 7 DAYS
GROUP BY job_id
ORDER BY total_runs DESC;
Next Steps
For version upgrades, see databricks-upgrade-migration.