databricks-hello-world

Create a minimal working Databricks example with cluster and notebook. Use when starting a new Databricks project, testing your setup, or learning basic Databricks patterns. Trigger with phrases like "databricks hello world", "databricks example", "databricks quick start", "first databricks notebook", "create cluster".

Install

mkdir -p .claude/skills/databricks-hello-world && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7735" && unzip -o skill.zip -d .claude/skills/databricks-hello-world && rm skill.zip

Installs to .claude/skills/databricks-hello-world
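
To confirm the files landed where Claude looks for skills, list the directory:

ls .claude/skills/databricks-hello-world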

About this skill

Databricks Hello World

Overview

Create your first Databricks cluster and notebook via the REST API and Python SDK. Covers single-node dev clusters, SQL warehouses, notebook upload, one-time job runs, and Delta Lake smoke tests.

Prerequisites

  • Completed databricks-install-auth setup
  • Workspace access with cluster creation permissions
  • Valid API credentials in env vars or ~/.databrickscfg
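
If you are unsure the credentials are wired up, a quick preflight check confirms authentication before you create any compute. The host and token below are placeholders; substitute your own values (or rely on an existing ~/.databrickscfg profile and skip the exports):

export DATABRICKS_HOST="https://<your-workspace>.cloud.databricks.com"  # placeholder
export DATABRICKS_TOKEN="<your-personal-access-token>"                  # placeholder
databricks current-user me  # should print your user record, not an auth error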

Instructions

Step 1: Create a Single-Node Dev Cluster

# POST /api/2.0/clusters/create
databricks clusters create --json '{
  "cluster_name": "hello-world-dev",
  "spark_version": "14.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "autotermination_minutes": 30,
  "num_workers": 0,
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "local[*]"
  },
  "custom_tags": {
    "ResourceClass": "SingleNode"
  }
}'
# Returns: {"cluster_id": "0123-456789-abcde123"}

Or via Python SDK:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# create_and_wait blocks until cluster reaches RUNNING state
cluster = w.clusters.create_and_wait(
    cluster_name="hello-world-dev",
    spark_version="14.3.x-scala2.12",
    node_type_id="i3.xlarge",
    num_workers=0,
    autotermination_minutes=30,
    spark_conf={
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
)
print(f"Cluster ready: {cluster.cluster_id} ({cluster.state})")

Step 2: Create and Upload a Notebook

import base64
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import ImportFormat, Language

w = WorkspaceClient()

notebook_source = """
# Databricks notebook source
# COMMAND ----------

# Simple DataFrame
data = [("Alice", 28), ("Bob", 35), ("Charlie", 42)]
df = spark.createDataFrame(data, ["name", "age"])
display(df)

# COMMAND ----------

# Write as Delta table
df.write.format("delta").mode("overwrite").saveAsTable("default.hello_world")

# COMMAND ----------

# Read it back and verify
result = spark.table("default.hello_world")
display(result)
assert result.count() == 3, "Expected 3 rows"
print("Hello from Databricks!")
"""

me = w.current_user.me()
notebook_path = f"/Users/{me.user_name}/hello_world"

w.workspace.import_(
    path=notebook_path,
    format=ImportFormat.SOURCE,
    language=Language.PYTHON,
    content=base64.b64encode(notebook_source.encode()).decode(),
    overwrite=True,
)
print(f"Notebook created at: {notebook_path}")

Step 3: Run the Notebook as a One-Time Job

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import SubmitTask, NotebookTask

w = WorkspaceClient()

# POST /api/2.1/jobs/runs/submit — no persistent job definition needed
run = w.jobs.submit(
    run_name="hello-world-run",
    tasks=[
        SubmitTask(
            task_key="hello",
            existing_cluster_id="0123-456789-abcde123",  # from Step 1
            notebook_task=NotebookTask(
                notebook_path=f"/Users/{w.current_user.me().user_name}/hello_world",
            ),
        )
    ],
).result()  # .result() blocks until run completes

print(f"Run {run.run_id}: {run.state.result_state}")
# Expect: "Run 12345: SUCCESS"
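
If the run fails, the run page in the Databricks UI shows cell-by-cell output. One way to grab the link from the run object above:

# run_page_url opens the run in the workspace UI with full cell output
details = w.jobs.get_run(run_id=run.run_id)
print(details.run_page_url)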

Step 4: Create a Serverless SQL Warehouse

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.sql import CreateWarehouseRequestWarehouseType

w = WorkspaceClient()

# Serverless warehouses start in seconds; billing is per DBU (see your cloud's Databricks pricing page)
warehouse = w.warehouses.create_and_wait(
    name="hello-warehouse",
    cluster_size="2X-Small",
    auto_stop_mins=10,
    warehouse_type="PRO",
    enable_serverless_compute=True,
)
print(f"Warehouse ready: {warehouse.id}")

# Run SQL against it
result = w.statement_execution.execute_statement(
    warehouse_id=warehouse.id,
    statement="SELECT current_timestamp() AS now, current_user() AS who",
)
print(result.result.data_array)
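
data_array holds raw rows without headers; to pair values with column names, you can read the schema from the result manifest. A sketch based on the current Python SDK's response shape:

# Map positional row values to column names from the statement manifest
cols = [c.name for c in result.manifest.schema.columns]
for row in result.result.data_array:
    print(dict(zip(cols, row)))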

Step 5: Verify Everything via CLI

# List clusters
databricks clusters list --output json | jq '.[] | {id: .cluster_id, name: .cluster_name, state: .state}'

# List workspace contents
databricks workspace list /Users/$(databricks current-user me --output json | jq -r .userName)/

# List recent job runs (the legacy CLI called this `databricks runs list`)
databricks jobs list-runs --limit 5 --output json | jq '.[] | {run_id: .run_id, name: .run_name, state: .state.result_state}'

# Clean up: terminate the dev cluster (saves money)
databricks clusters delete 0123-456789-abcde123
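
The warehouse from Step 4 auto-stops after 10 idle minutes, but you can remove it entirely. Assuming the new-generation CLI's warehouses command group, substitute the id printed in Step 4:

# Delete the demo SQL warehouse (id from Step 4)
databricks warehouses delete <warehouse-id>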

Output

  • Single-node development cluster created and running
  • Hello world notebook uploaded to workspace
  • Successful notebook execution via runs/submit API
  • Serverless SQL warehouse operational
  • Delta table default.hello_world created

Error Handling

Common errors, causes, and fixes:

  • QUOTA_EXCEEDED (workspace cluster limit reached): terminate unused clusters or request a quota increase.
  • INVALID_PARAMETER_VALUE: Invalid node type (instance type unavailable in the region): run databricks clusters list-node-types for valid types.
  • RESOURCE_ALREADY_EXISTS (notebook path already occupied): pass overwrite=True to workspace.import_().
  • INVALID_STATE: Cluster is not running (cluster still starting or already terminated): call w.clusters.ensure_cluster_is_running(cluster_id).
  • PERMISSION_DENIED (missing cluster-create entitlement): an admin must grant "Allow cluster creation" in workspace settings.
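
When scripting with the Python SDK, these API errors surface as typed exceptions. A hedged sketch, with exception classes as exported by databricks.sdk.errors in recent SDK versions:

from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import DatabricksError, PermissionDenied

w = WorkspaceClient()
try:
    # Raises if the cluster cannot be started (deleted, errored, etc.)
    w.clusters.ensure_cluster_is_running("0123-456789-abcde123")
except PermissionDenied:
    print("Ask an admin for the 'Allow cluster creation' entitlement")
except DatabricksError as e:
    # e.error_code matches the codes listed above (QUOTA_EXCEEDED, INVALID_STATE, ...)
    print(e.error_code, e)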

Examples

Quick Node Type Discovery

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
# Find cheapest general-purpose instance types
node_types = w.clusters.list_node_types()
for nt in sorted(node_types.node_types, key=lambda x: x.memory_mb)[:5]:
    print(f"{nt.node_type_id}: {nt.memory_mb}MB RAM, {nt.num_cores} cores")

List Available Spark Versions

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
for v in w.clusters.spark_versions().versions:
    if "LTS" in v.name:
        print(f"{v.key}: {v.name}")

Next Steps

Proceed to databricks-local-dev-loop for local development setup.
