databricks-local-dev-loop

Configure Databricks local development with dbx, Databricks Connect, and IDE. Use when setting up a local dev environment, configuring test workflows, or establishing a fast iteration cycle with Databricks. Trigger with phrases like "databricks dev setup", "databricks local", "databricks IDE", "develop with databricks", "databricks connect".

Install

mkdir -p .claude/skills/databricks-local-dev-loop && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6311" && unzip -o skill.zip -d .claude/skills/databricks-local-dev-loop && rm skill.zip

Installs to .claude/skills/databricks-local-dev-loop

About this skill

Databricks Local Dev Loop

Overview

Set up a fast local development workflow using Databricks Connect v2, Asset Bundles, and VS Code. Databricks Connect lets you run PySpark code locally while executing on a remote Databricks cluster, giving you IDE debugging, fast iteration, and proper test isolation.

Prerequisites

  • Completed databricks-install-auth setup
  • Python 3.10+ (must match cluster's Python version)
  • A running Databricks cluster (DBR 13.3 LTS+)
  • VS Code or PyCharm
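The Python-version requirement matters because Connect pickles UDFs with your local interpreter and unpickles them on the cluster. Once Connect is configured (Step 3), a quick way to check both sides — a sketch assuming default SDK auth; the zero-argument UDF runs remotely, so it reports the cluster's interpreter:

import sys
from databricks.connect import DatabricksSession
from pyspark.sql.functions import udf

spark = DatabricksSession.builder.getOrCreate()

@udf("string")
def remote_python_version():
    # Executes on the cluster, not locally
    import sys
    return sys.version.split()[0]

print("local :", sys.version.split()[0])
print("remote:", spark.range(1).select(remote_python_version()).first()[0])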

Instructions

Step 1: Project Structure

my-databricks-project/
├── src/
│   ├── __init__.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   ├── bronze.py          # Raw ingestion
│   │   ├── silver.py          # Cleansing transforms
│   │   └── gold.py            # Business aggregations
│   └── utils/
│       ├── __init__.py
│       └── helpers.py
├── tests/
│   ├── conftest.py            # Spark fixtures
│   ├── unit/
│   │   └── test_transforms.py # Local Spark tests
│   └── integration/
│       └── test_pipeline.py   # Databricks Connect tests
├── notebooks/
│   └── exploration.py
├── resources/
│   └── daily_etl.yml          # Job resource definitions
├── databricks.yml             # Asset Bundle root config
├── pyproject.toml
└── requirements.txt

Step 2: Install Development Tools

set -euo pipefail

# Create virtual environment
python -m venv .venv && source .venv/bin/activate

# Databricks Connect v2 — version MUST match cluster DBR
pip install "databricks-connect==14.3.*"

# Databricks SDK (the CLI itself comes from the databricks-install-auth step)
pip install databricks-sdk

# Testing
pip install pytest pytest-cov

# Verify the installation — Connect v2 does not ship the old `databricks-connect test`
# CLI, so run a trivial remote query instead (requires the auth from Step 3)
python -c "from databricks.connect import DatabricksSession; DatabricksSession.builder.getOrCreate().range(1).show()"
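To find the right databricks-connect line to pin, you can read the cluster's DBR version with the SDK — a sketch assuming default auth and DATABRICKS_CLUSTER_ID set:

# check_dbr.py — print the cluster's DBR so you can pin databricks-connect to it
import os
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
cluster = w.clusters.get(cluster_id=os.environ["DATABRICKS_CLUSTER_ID"])
print(cluster.spark_version)  # e.g. "14.3.x-scala2.12" -> pip install "databricks-connect==14.3.*"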

Step 3: Configure Databricks Connect

Databricks Connect v2 picks up the standard SDK authentication chain (environment variables or a ~/.databrickscfg profile) and uses DATABRICKS_CLUSTER_ID to select the cluster.

# Set cluster for Connect to use
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"
export DATABRICKS_TOKEN="dapi..."
export DATABRICKS_CLUSTER_ID="0123-456789-abcde123"

# src/utils/spark_session.py
from databricks.connect import DatabricksSession

def get_spark():
    """Get a DatabricksSession — runs Spark on the remote cluster."""
    return DatabricksSession.builder.getOrCreate()

# Usage: df operations execute on the remote cluster
spark = get_spark()
df = spark.sql("SELECT current_timestamp() AS now")
df.show()  # Results streamed back locally
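If you prefer not to rely on environment variables, the builder also accepts the connection details directly (the values below are the same placeholders used above):

from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder
    .remote(
        host="https://adb-1234567890123456.7.azuredatabricks.net",
        token="dapi...",
        cluster_id="0123-456789-abcde123",
    )
    .getOrCreate()
)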

Step 4: Asset Bundle Configuration

# databricks.yml
bundle:
  name: my-databricks-project

workspace:
  host: ${DATABRICKS_HOST}

include:
  - resources/*.yml

variables:
  catalog:
    description: Unity Catalog name
    default: dev_catalog

targets:
  dev:
    default: true
    mode: development
    workspace:
      root_path: /Users/${workspace.current_user.userName}/.bundle/${bundle.name}/dev

  staging:
    workspace:
      root_path: /Shared/.bundle/${bundle.name}/staging
    variables:
      catalog: staging_catalog

  prod:
    mode: production
    workspace:
      root_path: /Shared/.bundle/${bundle.name}/prod
    variables:
      catalog: prod_catalog

# resources/daily_etl.yml
resources:
  jobs:
    daily_etl:
      name: "daily-etl-${bundle.target}"
      tasks:
        - task_key: bronze
          notebook_task:
            # Paths in resource files resolve relative to the YAML file, hence ../
            notebook_path: ../src/pipelines/bronze.py
          new_cluster:
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
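After a `databricks bundle deploy -t dev`, the deployed job can also be triggered from Python with the SDK — a sketch assuming default auth and the dev target's job from above:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
# Bundle development mode prefixes job names, so match on the suffix from daily_etl.yml
job = next(j for j in w.jobs.list() if j.settings.name.endswith("daily-etl-dev"))
run = w.jobs.run_now(job_id=job.job_id).result()  # blocks until the run finishes
print(run.state.result_state)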

Step 5: Test Setup

Note: databricks-connect conflicts with a standalone pyspark install, so keep the pyspark and delta-spark packages that back the local_spark fixture in a separate virtual environment (or CI job) from the Connect environment.

# tests/conftest.py
import pytest
from delta import configure_spark_with_delta_pip  # pip install delta-spark
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def local_spark():
    """Local SparkSession for fast unit tests (no cluster needed)."""
    builder = (
        SparkSession.builder
        .master("local[*]")
        .appName("unit-tests")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )
    # configure_spark_with_delta_pip puts the matching Delta jars on the classpath;
    # the two configs alone fail at session start without them
    return configure_spark_with_delta_pip(builder).getOrCreate()

@pytest.fixture(scope="session")
def remote_spark():
    """DatabricksSession for integration tests (requires running cluster)."""
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.getOrCreate()

# tests/unit/test_transforms.py
def test_dedup_by_primary_key(local_spark):
    from src.pipelines.silver import dedup_by_key

    data = [("a", 1), ("a", 2), ("b", 3)]
    df = local_spark.createDataFrame(data, ["id", "value"])

    result = dedup_by_key(df, key_col="id", order_col="value")
    assert result.count() == 2
    # Keeps latest value per key
    assert result.filter("id = 'a'").first()["value"] == 2
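The test imports dedup_by_key, which the project structure places in src/pipelines/silver.py but the skill never shows — a minimal sketch consistent with the test's expectations:

# src/pipelines/silver.py (sketch: keep the highest order_col row per key_col)
from pyspark.sql import DataFrame, Window
from pyspark.sql import functions as F

def dedup_by_key(df: DataFrame, key_col: str, order_col: str) -> DataFrame:
    w = Window.partitionBy(key_col).orderBy(F.col(order_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )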

Step 6: Dev Workflow Commands

# Validate bundle configuration
databricks bundle validate

# Deploy dev resources to workspace
databricks bundle deploy -t dev

# Run a job
databricks bundle run daily_etl -t dev

# Sync local files to workspace (live reload)
databricks bundle sync -t dev --watch

# Run local unit tests (fast, no cluster)
pytest tests/unit/ -v

# Run integration tests (needs cluster)
pytest tests/integration/ -v --tb=short

# Full test with coverage
pytest tests/ --cov=src --cov-report=html

Step 7: VS Code Configuration

// .vscode/settings.json
{
  "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": ["tests"],
  "python.envFile": "${workspaceFolder}/.env",
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter"
  }
}
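To get the breakpoint debugging promised under Output, a companion launch configuration helps — a sketch assuming your auth variables live in the .env file that settings.json references:

// .vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug current file",
      "type": "debugpy",
      "request": "launch",
      "program": "${file}",
      "envFile": "${workspaceFolder}/.env",
      "justMyCode": false
    }
  ]
}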

Output

  • Local Python environment with Databricks Connect
  • Unit tests running with local Spark (no cluster required)
  • Integration tests running against remote cluster
  • Asset Bundle configured for dev/staging/prod deployment
  • VS Code debugging with breakpoints in PySpark code

Error Handling

| Error | Cause | Solution |
|---|---|---|
| Cluster not running | Auto-terminated | Set DATABRICKS_CLUSTER_ID and start it: `databricks clusters start <cluster-id>` |
| Version mismatch | databricks-connect version differs from cluster DBR | Install the matching version, e.g. `pip install "databricks-connect==14.3.*"` for DBR 14.3 |
| SPARK_CONNECT_GRPC error | gRPC connection blocked | Check that the firewall allows outbound traffic to the workspace on port 443 |
| ModuleNotFoundError | Missing local package install | Run `pip install -e .` for an editable install |
| Multiple SparkSessions | Conflicting Spark instances | Always use the `getOrCreate()` pattern |

Examples

Interactive Development Script

# src/pipelines/bronze.py
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import current_timestamp, input_file_name

def ingest_raw(spark: SparkSession, source_path: str, target_table: str) -> DataFrame:
    """Bronze ingestion with metadata columns, appended to target_table."""
    df = (
        spark.read.format("json").load(source_path)
        .withColumn("_ingested_at", current_timestamp())
        # Note: input_file_name() is unsupported on Unity Catalog shared clusters;
        # use col("_metadata.file_path") there instead
        .withColumn("_source_file", input_file_name())
    )
    df.write.mode("append").saveAsTable(target_table)
    return df

if __name__ == "__main__":
    # Works locally via Databricks Connect
    from databricks.connect import DatabricksSession
    spark = DatabricksSession.builder.getOrCreate()
    df = ingest_raw(spark, "/mnt/raw/events/", "dev_catalog.bronze.events")
    df.show(5)

Next Steps

See databricks-sdk-patterns for production-ready code patterns.
