databricks-migration-deep-dive


Execute comprehensive platform migrations to Databricks from legacy systems. Use when migrating from on-premises Hadoop, other cloud platforms, or legacy data warehouses to Databricks. Trigger with phrases like "migrate to databricks", "hadoop migration", "snowflake to databricks", "legacy migration", "data warehouse migration".

Install

mkdir -p .claude/skills/databricks-migration-deep-dive && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3522" && unzip -o skill.zip -d .claude/skills/databricks-migration-deep-dive && rm skill.zip

Installs to .claude/skills/databricks-migration-deep-dive

About this skill

Databricks Migration Deep Dive

Overview

Comprehensive migration strategies for moving to Databricks from Hadoop, Snowflake, Redshift, Synapse, or legacy data warehouses. Covers discovery and assessment, schema conversion, data migration with batching and validation, ETL/pipeline conversion, and cutover planning with rollback procedures.

Prerequisites

  • Access to source and target systems
  • Databricks workspace with Unity Catalog enabled (a quick check is sketched below)
  • Understanding of current data architecture and dependencies
  • Stakeholder alignment on migration timeline
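
A quick sanity check before starting, as a minimal sketch (it assumes an interactive Databricks notebook or cluster session):

# Confirm the workspace is attached to a Unity Catalog metastore and list catalogs.
# current_metastore() raises an error on workspaces without Unity Catalog enabled.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

print(spark.sql("SELECT current_metastore()").first()[0])
spark.sql("SHOW CATALOGS").show(truncate=False)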

Migration Patterns

Source                          | Pattern                              | Complexity | Timeline
Hive Metastore (same workspace) | SYNC / CTAS / DEEP CLONE             | Low        | Days
On-prem Hadoop/HDFS             | Lift-and-shift to cloud storage + UC | High       | 6-12 months
Snowflake                       | Parallel run + cutover               | Medium     | 3-6 months
AWS Redshift                    | Unload to S3 + Auto Loader           | Medium     | 3-6 months
Legacy DW (Oracle/Teradata)     | Full rebuild with JDBC extraction    | High       | 12-18 months

Instructions

Step 1: Discovery and Assessment

Inventory all source tables with metadata for migration planning.

from pyspark.sql import SparkSession
from dataclasses import dataclass

spark = SparkSession.builder.getOrCreate()

@dataclass
class TableInventory:
    database: str
    table: str
    table_type: str
    format: str
    row_count: int
    size_mb: float
    columns: int
    partitions: list[str]

def assess_hive_metastore() -> list[TableInventory]:
    """Inventory all Hive Metastore tables for migration planning."""
    inventory = []
    databases = [r.databaseName for r in spark.sql("SHOW DATABASES").collect()]

    for db in databases:
        tables = spark.sql(f"SHOW TABLES IN hive_metastore.{db}").collect()
        for t in tables:
            table_name = f"hive_metastore.{db}.{t.tableName}"
            try:
                # DESCRIBE DETAIL reports storage format, size, and partition columns
                detail = spark.sql(f"DESCRIBE DETAIL {table_name}").first()
                schema = spark.table(table_name).schema
                # DESCRIBE EXTENDED exposes whether the table is MANAGED or EXTERNAL
                ext = spark.sql(f"DESCRIBE EXTENDED {table_name}").collect()
                table_type = next(
                    (r.data_type for r in ext if r.col_name == "Type"), "unknown"
                )

                inventory.append(TableInventory(
                    database=db,
                    table=t.tableName,
                    table_type=table_type,
                    format=detail.format or "unknown",
                    row_count=spark.table(table_name).count(),  # full scan; slow on very large tables
                    size_mb=detail.sizeInBytes / 1048576 if detail.sizeInBytes else 0,
                    columns=len(schema),
                    partitions=detail.partitionColumns or [],
                ))
            except Exception as e:
                # Views and tables DESCRIBE DETAIL cannot handle are skipped and logged
                print(f"  Skipping {table_name}: {e}")

    return inventory

# Generate migration plan
tables = assess_hive_metastore()
tables.sort(key=lambda t: t.size_mb, reverse=True)

print(f"\nTotal tables: {len(tables)}")
print(f"Total size: {sum(t.size_mb for t in tables):.0f} MB")
print(f"\nTop 10 by size:")
for t in tables[:10]:
    print(f"  {t.database}.{t.table}: {t.size_mb:.0f}MB, {t.row_count:,} rows, {t.format}")

Step 2: Schema Migration

Map source data types to their Delta Lake equivalents and generate target DDL.

# Schema conversion for common type mismatches
TYPE_MAP = {
    # Hadoop/Hive types → Delta Lake/Spark types
    "CHAR": "STRING",
    "VARCHAR": "STRING",
    "TINYINT": "INT",
    "SMALLINT": "INT",
    "BINARY": "BINARY",
    # Snowflake types
    "NUMBER": "DECIMAL",
    "VARIANT": "STRING",  # Store as JSON string, parse in Silver
    "TIMESTAMP_NTZ": "TIMESTAMP",
    "TIMESTAMP_TZ": "TIMESTAMP",
    # Redshift types
    "SUPER": "STRING",
    "TIMETZ": "TIMESTAMP",
}

def generate_create_table(source_table: str, target_table: str) -> str:
    """Generate CREATE TABLE DDL with type conversions."""
    schema = spark.table(source_table).schema
    cols = []
    for field in schema:
        # Match on the base type name so parameterized types (e.g. VARCHAR(20)) map too
        source_type = field.dataType.simpleString().upper()
        dtype = TYPE_MAP.get(source_type.split("(")[0], source_type)
        cols.append(f"  {field.name} {dtype}")

    cols_sql = ",\n".join(cols)  # join outside the f-string for pre-3.12 Python compatibility
    return f"""CREATE TABLE IF NOT EXISTS {target_table} (
{cols_sql}
) USING DELTA
TBLPROPERTIES (
    'delta.autoOptimize.optimizeWrite' = 'true',
    'delta.autoOptimize.autoCompact' = 'true'
);"""

Step 3: Data Migration with Validation

Copy each table with the method that fits its source format, then validate row counts.

def migrate_table(
    source_table: str,
    target_table: str,
    method: str = "ctas",
    batch_size_mb: int = 500,
) -> dict:
    """Migrate a table with validation."""
    result = {"source": source_table, "target": target_table, "method": method}

    if method == "sync":
        # In-place metadata migration (fastest, no data copy)
        spark.sql(f"SYNC TABLE {target_table} FROM {source_table}")

    elif method == "deep_clone":
        # Delta-to-Delta with history preservation
        spark.sql(f"CREATE TABLE {target_table} DEEP CLONE {source_table}")

    elif method == "ctas":
        # Full data copy (works with any source format)
        source_size_mb = spark.sql(
            f"DESCRIBE DETAIL {source_table}"
        ).first().sizeInBytes / 1048576

        if source_size_mb > batch_size_mb:
            # Batch large tables by partition or row number
            spark.sql(f"""
                CREATE TABLE {target_table}
                USING DELTA
                AS SELECT * FROM {source_table}
            """)
        else:
            spark.sql(f"CREATE TABLE {target_table} AS SELECT * FROM {source_table}")

    elif method == "jdbc":
        # External database migration
        df = (spark.read
            .format("jdbc")
            .option("url", f"jdbc:postgresql://host:5432/db")
            .option("dbtable", source_table)
            .option("fetchsize", "10000")
            .load())
        df.write.format("delta").saveAsTable(target_table)

    # Validate
    src_count = spark.table(source_table).count()
    tgt_count = spark.table(target_table).count()
    result["source_rows"] = src_count
    result["target_rows"] = tgt_count
    result["match"] = src_count == tgt_count
    result["status"] = "OK" if result["match"] else "MISMATCH"

    return result

# Migrate with validation
result = migrate_table(
    "hive_metastore.legacy.customers",
    "analytics.migrated.customers",
    method="ctas",
)
print(f"{result['source']} -> {result['target']}: "
      f"{result['source_rows']:,} rows [{result['status']}]")

Step 4: Snowflake / Redshift Migration

Query the source in place with Lakehouse Federation, or unload to cloud storage and ingest.

# Snowflake: Use Lakehouse Federation or Unload + Auto Loader
# Option A: Lakehouse Federation (query in place, no copy)
spark.sql("""
    CREATE FOREIGN CATALOG snowflake_catalog
    USING CONNECTION snowflake_conn
    OPTIONS (database 'PROD_DB')
""")
# Query directly: SELECT * FROM snowflake_catalog.schema.table

# Option B: Unload to S3 + ingest
# In Snowflake:
# COPY INTO @my_s3_stage/export/customers/
# FROM PROD_DB.PUBLIC.CUSTOMERS
# FILE_FORMAT = (TYPE = PARQUET);

# In Databricks:
df = spark.read.parquet("s3://migration-bucket/export/customers/")
df.write.format("delta").saveAsTable("analytics.migrated.customers")
# Redshift: Unload to S3 + Auto Loader
# In Redshift:
# UNLOAD ('SELECT * FROM prod.customers')
# TO 's3://migration-bucket/redshift/customers/'
# FORMAT PARQUET;

# In Databricks:
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "/checkpoints/migration/schema")
    .load("s3://migration-bucket/redshift/customers/")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/migration/data")
    .toTable("analytics.migrated.customers"))

Step 5: ETL Pipeline Conversion

Re-platform Oozie or Airflow orchestration onto Databricks Jobs defined in Asset Bundles, and convert HiveQL to Spark SQL.

# Convert Oozie/Airflow jobs to Databricks Asset Bundles
# Before (Oozie/spark-submit):
#   spark-submit --class com.company.ETL --master yarn app.jar
#   hive -e "INSERT OVERWRITE TABLE target SELECT * FROM staging"

# After (Asset Bundle):
# databricks.yml resources:
"""
resources:
  jobs:
    migrated_etl:
      name: migrated-etl
      tasks:
        - task_key: extract
          notebook_task:
            notebook_path: src/extract.py
        - task_key: transform
          depends_on: [{task_key: extract}]
          notebook_task:
            notebook_path: src/transform.py
"""

# Convert HiveQL to Spark SQL
# Before: INSERT OVERWRITE TABLE target SELECT ...
# After:  (Use MERGE for upserts or write.mode("overwrite").saveAsTable)
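
A conversion sketch with illustrative table and key names: the HiveQL overwrite becomes either a Delta MERGE (for upserts) or an explicit overwrite write.

from delta.tables import DeltaTable

staging_df = spark.table("analytics.staging.orders")

# Upsert path: MERGE on the business key instead of INSERT OVERWRITE
target = DeltaTable.forName(spark, "analytics.migrated.orders")
(target.alias("t")
    .merge(staging_df.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Full-refresh path: the direct equivalent of INSERT OVERWRITE TABLE
staging_df.write.format("delta").mode("overwrite").saveAsTable("analytics.migrated.orders")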

Step 6: Cutover Planning

Sequence the cutover so every step has an explicit rollback action.

cutover_steps = [
    {"step": 1, "action": "Final validation", "rollback": "No action needed"},
    {"step": 2, "action": "Disable source pipelines", "rollback": "Re-enable source"},
    {"step": 3, "action": "Final data sync", "rollback": "Data already in place"},
    {"step": 4, "action": "Switch apps to Databricks endpoints", "rollback": "Revert app config"},
    {"step": 5, "action": "Enable Databricks pipelines", "rollback": "Disable and restore source"},
    {"step": 6, "action": "Monitor for 24 hours", "rollback": "Full rollback if issues"},
]

# Validation query to run at each step
validation_query = """
SELECT 'source' AS system, COUNT(*) AS rows FROM source_table
UNION ALL
SELECT 'target', COUNT(*) FROM target_table
"""

Output

  • Migration assessment with table inventory (sizes, formats, dependencies)
  • Schema conversion with type mapping and DDL generation
  • Data migration with row-count validation per table
  • ETL pipeline conversion from Oozie/Airflow to Asset Bundles
  • Cutover plan with step-by-step rollback procedures

Error Handling

Error                       | Cause                                  | Solution
Schema incompatibility      | Unsupported types (VARIANT, SUPER)     | Convert to STRING, parse in the Silver layer
Row count mismatch          | Truncation or filtering during copy    | Check for NULLs, encoding issues, or WHERE clauses
JDBC timeout                | Large table extraction                 | Use fetchsize, partitioned reads (sketched below), or incremental export
SYNC fails                  | External table storage inaccessible    | Verify cloud storage credentials and network access
Pipeline dependency failure | Wrong migration order                  | Build a dependency graph; migrate leaf tables first
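
For the JDBC timeout row above, partitioned reads spread the extraction across executors. A sketch with illustrative connection details and bounds:

# Partition the JDBC read on a numeric key so each executor pulls a bounded slice.
df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://host:5432/db")  # illustrative connection string
    .option("dbtable", "prod.large_table")
    .option("partitionColumn", "id")                  # numeric, ideally indexed column
    .option("lowerBound", "1")
    .option("upperBound", "100000000")                # min/max of partitionColumn
    .option("numPartitions", "32")
    .option("fetchsize", "10000")
    .load())
df.write.format("delta").saveAsTable("analytics.migrated.large_table")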

