supabase-load-scale
Implement Supabase load testing, auto-scaling, and capacity planning strategies. Use when running performance tests, configuring horizontal scaling, or planning capacity for Supabase integrations. Trigger with phrases like "supabase load test", "supabase scale", "supabase performance test", "supabase capacity", "supabase k6", "supabase benchmark".
Install
mkdir -p .claude/skills/supabase-load-scale && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4220" && unzip -o skill.zip -d .claude/skills/supabase-load-scale && rm skill.zip
Installs to .claude/skills/supabase-load-scale
About this skill
Supabase Load & Scale
Overview
Supabase scaling operates at six layers: read replicas (offload analytics and reporting queries), connection pooling (Supavisor, Supabase's PgBouncer replacement, with transaction and session modes), compute upgrades (vCPU/RAM tiers), CDN for Storage (cache public bucket assets at the edge), Edge Function regions (deploy functions closer to users), and table partitioning (split billion-row tables for query performance). This skill covers each layer with working createClient configuration, SQL, and CLI commands.
Prerequisites
- Supabase project on a Pro plan or higher (read replicas require Pro+)
- @supabase/supabase-js v2+ installed
- supabase CLI installed and linked to your project
- Database access via psql or Supabase SQL Editor
- TypeScript project with generated database types
Step 1 — Read Replicas and Connection Pooling
Read replicas let you route read-heavy queries (dashboards, reports, search) to replica databases while keeping writes on the primary. Supabase uses Supavisor (their pgBouncer replacement) for connection pooling with two modes: transaction (default, shares connections between requests) and session (holds a connection per client session, needed for prepared statements).
Configure the Read Replica Client
// lib/supabase.ts
import { createClient } from '@supabase/supabase-js'
import type { Database } from './database.types'
// Primary client — handles all writes and real-time subscriptions
export const supabase = createClient<Database>(
process.env.SUPABASE_URL!,
process.env.SUPABASE_ANON_KEY!
)
// Read replica client — use for analytics, dashboards, search
// The read replica URL is available in Dashboard > Settings > Database
export const supabaseReadOnly = createClient<Database>(
process.env.SUPABASE_READ_REPLICA_URL!, // e.g., https://<project-ref>-ro.supabase.co
process.env.SUPABASE_ANON_KEY!, // same anon key works for replicas
{
db: { schema: 'public' },
// Replicas may have slight lag (typically <100ms)
// Do NOT use for reads-after-writes in the same request
}
)
// Server-side admin client with connection pooling via Supavisor
// Use the pooled connection string (port 6543) instead of direct (port 5432)
export const supabaseAdmin = createClient<Database>(
process.env.SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!,
{
auth: { autoRefreshToken: false, persistSession: false },
db: { schema: 'public' },
}
)
Direct Postgres Connections via Supavisor Pooling
# Transaction mode (default, port 6543) — best for serverless/short-lived connections
# Shares connections across clients. Cannot use prepared statements.
psql "postgresql://postgres.[project-ref]:[password]@aws-0-us-east-1.pooler.supabase.com:6543/postgres"
# Session mode (port 5432 via pooler) — needed for prepared statements, LISTEN/NOTIFY
# One-to-one connection mapping per client.
psql "postgresql://postgres.[project-ref]:[password]@aws-0-us-east-1.pooler.supabase.com:5432/postgres"
# Direct connection (no pooling) — for migrations and admin tasks only
psql "postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres"
Route Queries to the Right Target
// services/analytics.ts
import { supabaseReadOnly } from '../lib/supabase'
// Heavy analytics queries go to the read replica
export async function getDashboardMetrics(orgId: string) {
const { data, error } = await supabaseReadOnly
.from('events')
.select('event_type, count:id.count()')
.eq('org_id', orgId)
.gte('created_at', new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString())
if (error) throw new Error(`Dashboard query failed: ${error.message}`)
return data
}
// services/orders.ts
import { supabase } from '../lib/supabase'
// Writes always go to the primary
export async function createOrder(order: OrderInsert) {
const { data, error } = await supabase
.from('orders')
.insert(order)
.select('id, status, total, created_at')
.single()
if (error) throw new Error(`Order creation failed: ${error.message}`)
return data
}
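Because replicas lag slightly behind the primary, a simple routing guard can keep reads on the primary for a short window after a write. This is a sketch of one approach — the window value and helper names are assumptions for illustration, not Supabase APIs:

```typescript
// Route reads to the replica unless a recent write may not have replicated yet.
const REPLICA_LAG_WINDOW_MS = 500 // assumed safety margin; tune for your workload

let lastWriteAt = 0

// Call after any successful write (e.g. at the end of createOrder).
function recordWrite(now: number = Date.now()): void {
  lastWriteAt = now
}

// Returns which client a read should use: 'replica' when it is safe,
// 'primary' within the lag window after a write.
function readTarget(now: number = Date.now()): 'primary' | 'replica' {
  return now - lastWriteAt < REPLICA_LAG_WINDOW_MS ? 'primary' : 'replica'
}
```

A service could consult readTarget() to pick between the supabase and supabaseReadOnly clients defined earlier, preserving read-after-write consistency for the caller that just wrote.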
Monitor Connection Pool Usage
-- Check active connections by source (run in SQL Editor)
SELECT
usename,
application_name,
client_addr,
state,
count(*) AS connections
FROM pg_stat_activity
WHERE datname = 'postgres'
GROUP BY usename, application_name, client_addr, state
ORDER BY connections DESC;
-- Check connection limits for your compute tier
SHOW max_connections;
-- Micro: 60, Small: 90, Medium: 120, Large: 160, XL: 240, 2XL: 380, 4XL: 480
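The monitoring query above returns raw counts; to decide when to act, compare them against the tier's max_connections. A small sketch — the thresholds are illustrative, not Supabase recommendations:

```typescript
// Classify connection-pool pressure given the tier's max_connections.
// The 70%/90% thresholds here are illustrative starting points.
function poolPressure(
  active: number,
  maxConnections: number
): 'ok' | 'warn' | 'critical' {
  const ratio = active / maxConnections
  if (ratio >= 0.9) return 'critical'
  if (ratio >= 0.7) return 'warn'
  return 'ok'
}

// Example: 100 active connections on a Medium tier (120 max)
poolPressure(100, 120) // ≈ 83% → 'warn'
```

Sustained 'warn' readings usually mean moving more traffic through Supavisor transaction mode or upgrading compute.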
Step 2 — Compute Upgrades, CDN for Storage, and Edge Function Regions
Compute Size Selection Guide
| Tier | vCPU | RAM | Max Connections | Best For |
|---|---|---|---|---|
| Micro (Free) | 2 shared | 1 GB | 60 | Development, prototypes |
| Small (Pro) | 2 dedicated | 2 GB | 90 | Low-traffic production |
| Medium | 2 dedicated | 4 GB | 120 | Growing apps, moderate traffic |
| Large | 4 dedicated | 8 GB | 160 | High-traffic, complex queries |
| XL | 8 dedicated | 16 GB | 240 | Large datasets, concurrent users |
| 2XL | 16 dedicated | 32 GB | 380 | Enterprise, heavy analytics |
| 4XL | 32 dedicated | 64 GB | 480 | Mission-critical, max throughput |
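For capacity planning, the table above can be turned into a lookup: given an expected concurrent-connection count, pick the smallest sufficient tier. A sketch using only the connection limits listed here — it deliberately ignores vCPU/RAM needs, which also matter:

```typescript
// Smallest compute tier whose max_connections covers the expected load.
// Limits mirror the tier table above; CPU/RAM requirements are not considered.
const TIERS: Array<{ name: string; maxConnections: number }> = [
  { name: 'micro', maxConnections: 60 },
  { name: 'small', maxConnections: 90 },
  { name: 'medium', maxConnections: 120 },
  { name: 'large', maxConnections: 160 },
  { name: 'xl', maxConnections: 240 },
  { name: '2xl', maxConnections: 380 },
  { name: '4xl', maxConnections: 480 },
]

function smallestTierFor(expectedConnections: number): string | null {
  const tier = TIERS.find((t) => t.maxConnections >= expectedConnections)
  return tier ? tier.name : null // null → pool harder, shard, or add replicas
}

smallestTierFor(150) // 'large'
```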
# Upgrade compute via CLI (requires Pro plan)
supabase projects update --experimental --compute-size small # or medium, large, xl, 2xl, 4xl
# Check current compute size
supabase projects list
CDN Caching for Storage Buckets
Public buckets are automatically served through Supabase's CDN. Optimize cache behavior with proper headers and transforms.
import { createClient } from '@supabase/supabase-js'
const supabase = createClient(
process.env.SUPABASE_URL!,
process.env.SUPABASE_ANON_KEY!
)
// Upload with cache-control headers for CDN optimization
async function uploadPublicAsset(
bucket: string,
path: string,
file: File
) {
const { data, error } = await supabase.storage
.from(bucket)
.upload(path, file, {
cacheControl: '31536000', // 1 year cache for immutable assets
upsert: false, // prevent accidental overwrites
contentType: file.type,
})
if (error) throw new Error(`Upload failed: ${error.message}`)
// Get the CDN-cached public URL
const { data: { publicUrl } } = supabase.storage
.from(bucket)
.getPublicUrl(path, {
transform: {
width: 800, // Image transforms are cached at the CDN edge
quality: 80,
format: 'webp',
},
})
return { path: data.path, publicUrl }
}
// Bust cache by uploading to a new path (content-addressed)
import { createHash } from 'crypto'
async function uploadVersionedAsset(bucket: string, file: Buffer, ext: string) {
const hash = createHash('sha256').update(file).digest('hex').slice(0, 12)
const path = `assets/${hash}.${ext}`
const { error } = await supabase.storage
.from(bucket)
.upload(path, file, {
cacheControl: '31536000',
upsert: false,
})
if (error && error.message !== 'The resource already exists') {
throw new Error(`Versioned upload failed: ${error.message}`)
}
return supabase.storage.from(bucket).getPublicUrl(path).data.publicUrl
}
Edge Function Regional Deployment
# Deploy to a specific region (closer to your users)
supabase functions deploy my-function --region us-east-1
supabase functions deploy my-function --region eu-west-1
supabase functions deploy my-function --region ap-southeast-1
# List deployed functions
supabase functions list
# Deploy all functions to a region
supabase functions deploy --region us-east-1
// supabase/functions/geo-router/index.ts
// Edge Function that runs in the region closest to the user
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'
Deno.serve(async (req) => {
const supabase = createClient(
Deno.env.get('SUPABASE_URL')!,
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
)
// Edge Functions automatically run in the nearest region
// Use this for latency-sensitive operations
const { data, error } = await supabase
.from('products')
.select('id, name, price')
.eq('region', req.headers.get('x-region') ?? 'us')
.limit(20)
if (error) {
return new Response(JSON.stringify({ error: error.message }), { status: 500 })
}
return new Response(JSON.stringify(data), {
headers: {
'Content-Type': 'application/json',
'Cache-Control': 'public, max-age=60', // CDN caches the response
},
})
})
Step 3 — Database Table Partitioning
For tables with millions/billions of rows, partitioning splits data into smaller physical chunks. Supabase supports PostgreSQL native partitioning (range, list, hash). Queries that include the partition key only scan relevant partitions.
Range Partitioning by Date (Most Common)
-- Create the partitioned parent table
CREATE TABLE public.events (
id bigint GENERATED ALWAYS AS IDENTITY,
org_id uuid NOT NULL REFERENCES public.organizations(id),
event_type text NOT NULL,
payload jsonb,
created_at timestamptz NOT NULL DEFAULT now(),
PRIMARY KEY (id, created_at) -- partition key must be in PK
) PARTITION BY RANGE (created_at);
-- Create monthly partitions
CREATE TABLE public.events_2025_01 PARTITION OF public.events
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE public.events_2025_02 PARTITION OF public.events
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
CREATE TABLE public.events_2025_03 PARTITION OF public.events
FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');
-- ... create partitions for each month
-- Default partition catches anything that doesn't match
CREATE TABLE public.events_default PARTITION OF public.events DEFAULT;
-- Index each partition (PostgreSQL auto-creates on child tables from parent inde
---
*Content truncated.*
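The monthly CREATE TABLE ... PARTITION OF statements in the partitioning example are repetitive and are usually generated rather than hand-written. A sketch that mirrors the events layout above (nothing here is a Supabase API; adapt names to your schema):

```typescript
// Generate monthly range-partition DDL mirroring the events example above.
// Bounds follow PostgreSQL semantics: FROM is inclusive, TO is exclusive.
function monthlyPartitionDDL(table: string, year: number, month: number): string {
  const pad = (n: number) => String(n).padStart(2, '0')
  const from = `${year}-${pad(month)}-01`
  // December rolls over to January of the next year.
  const next = month === 12 ? { y: year + 1, m: 1 } : { y: year, m: month + 1 }
  const to = `${next.y}-${pad(next.m)}-01`
  const name = `${table}_${year}_${pad(month)}`
  return (
    `CREATE TABLE public.${name} PARTITION OF public.${table}\n` +
    `  FOR VALUES FROM ('${from}') TO ('${to}');`
  )
}

monthlyPartitionDDL('events', 2025, 12)
// covers 2025-12-01 up to (but not including) 2026-01-01
```

Running this in a scheduled job (e.g. via pg_cron or an Edge Function) keeps partitions provisioned ahead of the data that will land in them.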
More by jeremylongshore