supabase-migration-deep-dive
Execute Supabase major re-architecture and migration strategies with strangler fig pattern. Use when migrating to or from Supabase, performing major version upgrades, or re-platforming existing integrations to Supabase. Trigger with phrases like "migrate supabase", "supabase migration", "switch to supabase", "supabase replatform", "supabase upgrade major".
Install
mkdir -p .claude/skills/supabase-migration-deep-dive && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5494" && unzip -o skill.zip -d .claude/skills/supabase-migration-deep-dive && rm skill.zip
Installs to .claude/skills/supabase-migration-deep-dive
About this skill
Supabase Migration Deep Dive
Overview
Supabase migrations are SQL files managed by the CLI that track schema changes across environments. This skill covers the complete migration lifecycle: creating migrations with npx supabase migration new, writing zero-downtime schema changes that avoid table locks, backfilling data in batches, managing schema versioning across environments, planning rollback strategies, and regenerating TypeScript types after schema changes. Every pattern uses real Supabase CLI commands and createClient from @supabase/supabase-js.
When to use: Creating new database migrations, modifying production schemas without downtime, backfilling existing data after adding columns, managing migration history across dev/staging/production, rolling back failed migrations, or regenerating TypeScript types.
Prerequisites
- Supabase CLI installed: npm install -g supabase (or invoke it via npx supabase)
- @supabase/supabase-js v2+ installed in your project
- Local Supabase stack running: npx supabase start
- Understanding of PostgreSQL DDL and transaction behavior
Instructions
Step 1: Create and Manage Migrations
Use the Supabase CLI to create, test, and apply migrations. Each migration is a timestamped SQL file that runs in order.
Create a new migration:
# Create a migration file with a descriptive name
npx supabase migration new add_profiles_table
# Creates: supabase/migrations/20260322120000_add_profiles_table.sql
# List all migrations and their status
npx supabase migration list
# Preview which migrations a push would apply, without running them (requires a linked project)
npx supabase db push --dry-run
Write the migration SQL:
-- supabase/migrations/20260322120000_add_profiles_table.sql
-- Create the profiles table
CREATE TABLE public.profiles (
id uuid REFERENCES auth.users(id) ON DELETE CASCADE PRIMARY KEY,
email text UNIQUE NOT NULL,
full_name text,
avatar_url text,
bio text,
created_at timestamptz DEFAULT now(),
updated_at timestamptz DEFAULT now()
);
-- Enable RLS
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;
-- Create policies
CREATE POLICY "users_read_own_profile" ON public.profiles
FOR SELECT USING (auth.uid() = id);
CREATE POLICY "users_update_own_profile" ON public.profiles
FOR UPDATE USING (auth.uid() = id)
WITH CHECK (auth.uid() = id);
-- No separate index is needed for email lookups: the UNIQUE constraint
-- above already creates a unique index on email.
-- Auto-create profile on user signup (trigger)
CREATE OR REPLACE FUNCTION public.handle_new_user()
RETURNS trigger AS $$
BEGIN
INSERT INTO public.profiles (id, email, full_name, avatar_url)
VALUES (
new.id,
new.email,
new.raw_user_meta_data ->> 'full_name',
new.raw_user_meta_data ->> 'avatar_url'
);
RETURN new;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER SET search_path = '';
CREATE TRIGGER on_auth_user_created
AFTER INSERT ON auth.users
FOR EACH ROW EXECUTE FUNCTION public.handle_new_user();
-- Updated_at trigger
CREATE OR REPLACE FUNCTION public.update_updated_at()
RETURNS trigger AS $$
BEGIN
new.updated_at = now();
RETURN new;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_updated_at
BEFORE UPDATE ON public.profiles
FOR EACH ROW EXECUTE FUNCTION public.update_updated_at();
Test the migration locally:
# Apply all migrations and seed data (destructive — resets local DB)
npx supabase db reset
# Run pgTAP tests if configured
npx supabase test db
# Verify the schema
npx supabase db lint
# Generate updated TypeScript types
npx supabase gen types typescript --local > lib/database.types.ts
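Use the generated types with the client:
The generated file exports a Database type you can pass to createClient so every query is checked against the current schema; after a migration, regenerating the file surfaces breaking changes as compile errors. A minimal sketch, assuming the output path used above:
import { createClient } from '@supabase/supabase-js';
// Generated by: npx supabase gen types typescript --local > lib/database.types.ts
import type { Database } from './lib/database.types';
const supabase = createClient<Database>(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
// Row types are derived from the generated schema
type Profile = Database['public']['Tables']['profiles']['Row'];
async function getProfile(id: string): Promise<Profile | null> {
const { data, error } = await supabase
.from('profiles')
.select('*')
.eq('id', id)
.maybeSingle();
if (error) throw error;
return data;
}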
Apply migrations to remote environments:
# Push to staging
npx supabase link --project-ref <staging-ref>
npx supabase db push
# Verify: npx supabase migration list --linked
# Push to production (same migration files)
npx supabase link --project-ref <prod-ref>
npx supabase db push
Step 2: Zero-Downtime Migration Patterns
Production schema changes must avoid locking tables. These patterns ensure migrations complete without blocking reads or writes.
Add a column (safe — no lock):
-- supabase/migrations/20260323000000_add_status_column.sql
-- Adding a column with a constant default does NOT rewrite the table in
-- Postgres 11+; it takes only a brief metadata lock
ALTER TABLE public.orders ADD COLUMN status text DEFAULT 'pending';
-- Create an index CONCURRENTLY (does not block writes)
-- NOTE: CONCURRENTLY cannot run inside a transaction block.
-- The Supabase CLI wraps each migration file in a transaction, so put the
-- index in its own migration. If your CLI version cannot disable that
-- transaction wrapper, run the statement directly against the database
-- with psql instead.
-- supabase/migrations/20260323000001_add_status_index.sql
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_status
ON public.orders(status);
Rename a column (two-phase approach):
-- Phase 1: Add new column, backfill, update application code
-- supabase/migrations/20260324000000_add_display_name.sql
-- Add the new column
ALTER TABLE public.profiles ADD COLUMN display_name text;
-- Copy data from old column (for a large table, backfill in batches
-- instead; see Step 3)
UPDATE public.profiles SET display_name = full_name WHERE display_name IS NULL;
-- Create a trigger to keep both columns in sync during transition
CREATE OR REPLACE FUNCTION sync_name_columns()
RETURNS trigger AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
-- OLD is not defined on INSERT, so just fill whichever column is missing
NEW.display_name = COALESCE(NEW.display_name, NEW.full_name);
NEW.full_name = COALESCE(NEW.full_name, NEW.display_name);
ELSIF NEW.full_name IS DISTINCT FROM OLD.full_name THEN
NEW.display_name = NEW.full_name;
ELSIF NEW.display_name IS DISTINCT FROM OLD.display_name THEN
NEW.full_name = NEW.display_name;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER sync_names
BEFORE INSERT OR UPDATE ON public.profiles
FOR EACH ROW EXECUTE FUNCTION sync_name_columns();
-- Phase 2: After all application code uses display_name (deploy + verify)
-- supabase/migrations/20260325000000_drop_full_name.sql
-- Remove the sync trigger
DROP TRIGGER IF EXISTS sync_names ON public.profiles;
DROP FUNCTION IF EXISTS sync_name_columns();
-- Drop the old column
ALTER TABLE public.profiles DROP COLUMN full_name;
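Between the two phases, deploy application code that prefers the new column but tolerates rows written before the backfill. A sketch of the read side (getDisplayName is a hypothetical helper, not part of the migration):
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
// During the transition window, read both columns and prefer the new one.
async function getDisplayName(userId: string): Promise<string | null> {
const { data, error } = await supabase
.from('profiles')
.select('display_name, full_name')
.eq('id', userId)
.single();
if (error) throw error;
return data.display_name ?? data.full_name ?? null;
}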
Change column type (safe approach):
-- supabase/migrations/20260326000000_change_price_to_numeric.sql
-- DON'T DO THIS (locks table for the entire rewrite):
-- ALTER TABLE orders ALTER COLUMN price TYPE numeric(10,2);
-- SAFE: Add new column, backfill, swap
ALTER TABLE public.orders ADD COLUMN price_numeric numeric(10,2);
-- Backfill in a separate migration or via application code
UPDATE public.orders SET price_numeric = price::numeric(10,2)
WHERE price_numeric IS NULL;
-- After verifying all data is backfilled:
-- ALTER TABLE public.orders DROP COLUMN price;
-- ALTER TABLE public.orders RENAME COLUMN price_numeric TO price;
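Before running the commented-out drop-and-rename step, confirm no rows are left unbackfilled. A quick check from the SDK, assuming a client created with the service-role key as in the examples below:
// Count rows where the new column is still NULL; head: true fetches no rows.
const { count, error } = await supabase
.from('orders')
.select('id', { count: 'exact', head: true })
.is('price_numeric', null);
if (error) throw error;
if ((count ?? 0) > 0) {
throw new Error(`${count} rows still need backfilling; do not drop price yet`);
}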
Verify zero-downtime from the SDK:
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!,
{ auth: { autoRefreshToken: false, persistSession: false } }
);
// Run during migration to verify no downtime
async function migrationHealthCheck(tableName: string) {
const checks = [];
for (let i = 0; i < 10; i++) {
const start = performance.now();
const { error } = await supabase
.from(tableName)
.select('id')
.limit(1);
checks.push({
attempt: i + 1,
latencyMs: Math.round(performance.now() - start),
success: !error,
error: error?.message,
});
await new Promise((r) => setTimeout(r, 1000));
}
const failures = checks.filter((c) => !c.success);
console.log(`Health check: ${checks.length - failures.length}/${checks.length} passed`);
if (failures.length > 0) {
console.warn('Failures:', failures);
}
}
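Run the check from a second terminal while db push is executing, for example:
// Watch the orders table while the migration applies
await migrationHealthCheck('orders');
A clean 10/10 result with stable latency is good evidence the migration did not block reads; any failed attempt warrants inspecting pg_locks before proceeding.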
Step 3: Data Backfill, Versioning, and Rollback
Backfill data in batches to avoid overwhelming the database, track schema versions, and plan rollback strategies for failed migrations.
Batch data backfill from the SDK:
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!,
{ auth: { autoRefreshToken: false, persistSession: false } }
);
// Backfill a new column in batches (avoids long-running transactions)
async function backfillColumn(
table: string,
column: string,
computeValue: (row: any) => any,
batchSize = 500
) {
let processed = 0;
let lastId: string | null = null;
while (true) {
// Fetch a batch of rows that need backfilling
let query = supabase
.from(table)
.select('*')
.is(column, null)
.order('id', { ascending: true })
.limit(batchSize);
if (lastId) {
query = query.gt('id', lastId);
}
const { data: rows, error } = await query;
if (error) throw error;
if (!rows || rows.length === 0) break;
// Update each row with the computed value
for (const row of rows) {
const newValue = computeValue(row);
const { error: updateError } = await supabase
.from(table)
.update({ [column]: newValue })
.eq('id', row.id);
if (updateError) {
console.error(`Failed to update ${row.id}:`, updateError.message);
}
}
lastId = rows[rows.length - 1].id;
processed += rows.length;
console.log(`Backfilled ${processed} rows...`);
// Brief pause to avoid overwhelming the database
await new Promise((r) => setTimeout(r, 100));
}
console.log(`Backfill complete: ${processed} rows updated`);
}
// Example: backfill a slug column from name
await backfillColumn('projects', 'slug', (row) =>
row.name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '')
);
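The per-row update above is simple but costs one request per row. If the table's primary key is id, the same batch can be written in a single round trip with upsert. A sketch of a drop-in replacement for the for loop inside backfillColumn; note that any id not already present would be inserted as a new row:
// Replace the per-row updates with one upsert call per batch
const updates = rows.map((row) => ({ id: row.id, [column]: computeValue(row) }));
const { error: upsertError } = await supabase.from(table).upsert(updates);
if (upsertError) throw upsertError;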
Schema versioning — track what's deployed where:
# Check migration status on each environment
npx supabase link --project-ref <staging-ref>
npx supabase migration list
# Compare local migrations with remote
npx supabase db diff --linked
---
*Content truncated.*