databricks-ci-integration
Configure Databricks CI/CD integration with GitHub Actions and Asset Bundles. Use when setting up automated testing, configuring CI pipelines, or integrating Databricks deployments into your build process. Trigger with phrases like "databricks CI", "databricks GitHub Actions", "databricks automated tests", "CI databricks", "databricks pipeline".
Install
```bash
mkdir -p .claude/skills/databricks-ci-integration && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8973" && unzip -o skill.zip -d .claude/skills/databricks-ci-integration && rm skill.zip
```
Installs to .claude/skills/databricks-ci-integration
About this skill
Databricks CI Integration
Overview
Automate Databricks deployments with Databricks Asset Bundles (DABs) and GitHub Actions. Covers bundle validation, unit testing PySpark transforms locally, deploying to staging on PRs and to production on merge, and integration testing against live workspaces. Uses the databricks/setup-cli action and OAuth M2M for secure CI authentication.
Prerequisites
- Databricks workspace with a service principal (OAuth M2M)
- Asset Bundle (databricks.yml) configured
- GitHub repo with Actions enabled
- GitHub environment secrets: DATABRICKS_HOST, DATABRICKS_CLIENT_ID, DATABRICKS_CLIENT_SECRET
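Before wiring these secrets into CI, it can help to confirm the service principal's OAuth M2M credentials actually authenticate. A minimal local sketch using the databricks-sdk, reading the same env var names as the GitHub secrets above:

```python
# verify_auth.py — quick sanity check for the service principal credentials (sketch)
import os

from databricks.sdk import WorkspaceClient

w = WorkspaceClient(
    host=os.environ["DATABRICKS_HOST"],
    client_id=os.environ["DATABRICKS_CLIENT_ID"],
    client_secret=os.environ["DATABRICKS_CLIENT_SECRET"],
)

# For a service principal, user_name is its application ID
print(w.current_user.me().user_name)
```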
Instructions
Step 1: GitHub Actions — Validate and Test on PR
```yaml
# .github/workflows/databricks-ci.yml
name: Databricks CI

on:
  pull_request:
    paths: ['src/**', 'resources/**', 'databricks.yml', 'tests/**']

jobs:
  validate-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          pip install pytest pyspark delta-spark databricks-sdk
          pip install -e .  # If using pyproject.toml
      - name: Run unit tests (local Spark, no cluster needed)
        run: pytest tests/unit/ -v --tb=short
      - name: Install Databricks CLI
        uses: databricks/setup-cli@main
      - name: Validate bundle
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET }}
        run: databricks bundle validate -t staging

  deploy-staging:
    needs: validate-and-test
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Deploy to staging
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET }}
        run: databricks bundle deploy -t staging
      - name: Run integration tests on staging
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET }}
          WAREHOUSE_ID: ${{ vars.WAREHOUSE_ID }}  # SQL warehouse for verification (set as a repo variable)
        run: |
          databricks bundle run integration_tests -t staging
          # Verify output tables via the SQL Statement Execution API
          databricks api post /api/2.0/sql/statements --json "{
            \"warehouse_id\": \"$WAREHOUSE_ID\",
            \"statement\": \"SELECT COUNT(*) AS row_count FROM staging_catalog.silver.orders WHERE date >= current_date() - 1\"
          }"
```
Step 2: Unit Tests for PySpark Transforms
```python
# tests/unit/test_transformations.py
import pytest
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[*]").appName("tests").getOrCreate()


def test_silver_dedup(spark):
    """Test deduplication logic in silver layer."""
    from src.pipelines.silver import dedup_orders

    data = [
        ("order-1", "user-a", 10.0),
        ("order-1", "user-a", 10.0),  # duplicate
        ("order-2", "user-b", 20.0),
    ]
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("user_id", StringType()),
        StructField("amount", DoubleType()),
    ])
    df = spark.createDataFrame(data, schema)
    result = dedup_orders(df)
    assert result.count() == 2
    assert {r.order_id for r in result.collect()} == {"order-1", "order-2"}


def test_gold_aggregation(spark):
    """Test daily aggregation in gold layer."""
    from src.pipelines.gold import aggregate_daily_revenue
    # ... test with sample data
```
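For reference, a minimal sketch of the transforms these tests assume; the real `src/pipelines` modules may look different:

```python
# src/pipelines/silver.py (sketch)
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def dedup_orders(df: DataFrame) -> DataFrame:
    """Keep one row per order_id."""
    return df.dropDuplicates(["order_id"])


# src/pipelines/gold.py (sketch)
def aggregate_daily_revenue(df: DataFrame) -> DataFrame:
    """Sum order amounts per day."""
    return df.groupBy("date").agg(F.sum("amount").alias("daily_revenue"))
```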
Step 3: Deploy to Production on Merge
```yaml
# .github/workflows/databricks-deploy.yml
name: Databricks Deploy

on:
  push:
    branches: [main]
    paths: ['src/**', 'resources/**', 'databricks.yml']

jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment: production  # Requires approval if configured
    concurrency:
      group: databricks-prod-deploy
      cancel-in-progress: false
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Validate production bundle
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST_PROD }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID_PROD }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET_PROD }}
        run: databricks bundle validate -t prod
      - name: Deploy to production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST_PROD }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID_PROD }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET_PROD }}
        run: |
          databricks bundle deploy -t prod
          echo "## Deployment Summary" >> "$GITHUB_STEP_SUMMARY"
          databricks bundle summary -t prod >> "$GITHUB_STEP_SUMMARY"
      - name: Trigger smoke test
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST_PROD }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID_PROD }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET_PROD }}
        run: databricks bundle run prod_etl_pipeline -t prod --no-wait
```
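Because the smoke test is triggered with `--no-wait`, the workflow succeeds even if the run later fails. If you want visibility, a follow-up step or scheduled check can poll the run state with the SDK. A sketch, assuming the deployed job is still named `prod_etl_pipeline` (bundles can prefix deployed resource names, so adjust the lookup):

```python
# scripts/check_smoke_run.py — hypothetical helper to inspect the latest run
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import RunResultState

w = WorkspaceClient()  # auth from DATABRICKS_* env vars

job = next(w.jobs.list(name="prod_etl_pipeline"), None)
assert job is not None, "job not found — check the deployed name"

run = next(w.jobs.list_runs(job_id=job.job_id, limit=1), None)
assert run is not None, "no runs recorded yet"

print(run.state.life_cycle_state, run.state.result_state)
if run.state.result_state == RunResultState.FAILED:
    raise SystemExit(1)
```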
Step 4: OIDC Authentication (Keyless CI)
Eliminate long-lived secrets by using GitHub OIDC federation with Databricks.
```yaml
# In GitHub Actions — no client_secret needed
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # Required for OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Deploy with OIDC
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
          # No DATABRICKS_CLIENT_SECRET — the GitHub OIDC token is exchanged instead
          ARM_USE_OIDC: true
        run: databricks bundle deploy -t prod
```
Output
- CI workflow validating bundles and running unit tests on every PR
- Staging deployment with integration tests before merge
- Production deployment on merge to main with approval gate
- Concurrency control preventing parallel deployments
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Bundle validation fails | Invalid YAML or missing variables | Run databricks bundle validate locally first |
| Auth error in CI | Client secret expired | Regenerate OAuth secret or switch to OIDC |
| Integration test timeout | Cluster cold start | Use instance pools or increase timeout |
| Deploy conflict | Concurrent CI runs | Use concurrency group in GitHub Actions |
| PySpark import error | Missing pyspark in CI | Add to pip install step |
Examples
Local Validation Before Push
```bash
# Validate and preview before committing
databricks bundle validate -t staging
databricks bundle summary -t staging   # preview what would be deployed
pytest tests/unit/ -v
```
Branch-Based Development Targets
```yaml
# databricks.yml — auto-name resources per developer
targets:
  dev:
    default: true
    mode: development
    # In development mode, resource names are auto-prefixed with [dev <username>]
    workspace:
      root_path: /Users/${workspace.current_user.userName}/.bundle/${bundle.name}/dev
```
Next Steps
For Asset Bundle deployment details, see databricks-deploy-integration.