supabase-advanced-troubleshooting
Apply advanced Supabase debugging techniques for hard-to-diagnose issues. Use when standard troubleshooting fails, when investigating complex race conditions, or when preparing evidence bundles for Supabase support escalation. Trigger with phrases like "supabase hard bug", "supabase mystery error", "supabase impossible to debug", "difficult supabase issue", "supabase deep debug".
Install
mkdir -p .claude/skills/supabase-advanced-troubleshooting && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7283" && unzip -o skill.zip -d .claude/skills/supabase-advanced-troubleshooting && rm skill.zip
Installs to .claude/skills/supabase-advanced-troubleshooting
About this skill
Supabase Advanced Troubleshooting
Overview
When basic debugging does not reveal the root cause, you need deep PostgreSQL diagnostics: pg_stat_statements to find the slowest queries by cumulative execution time, pg_locks to detect lock contention and deadlocks, pg_stat_activity to find connection leaks, RLS policy conflict analysis to diagnose silent data filtering, Edge Function cold start profiling, and Realtime channel drop investigation. This skill covers each of these techniques with real SQL queries and matching @supabase/supabase-js (createClient) examples.
When to use: Slow query investigation, lock contention causing timeouts, connection pool exhaustion from leaks, RLS policies that silently filter or conflict, Edge Functions with unpredictable latency, or Realtime subscriptions that disconnect intermittently.
Prerequisites
- Supabase project with the pg_stat_statements extension enabled
- Direct database access via the SQL Editor or psql
- @supabase/supabase-js v2+ installed in your project
- Supabase CLI for Edge Function logs
- Familiarity with PostgreSQL system catalogs
Instructions
Step 1: pg_stat_statements and Slow Query Analysis
Enable and query pg_stat_statements to find the most expensive queries by total execution time, calls, and rows processed.
Enable the extension and query slow queries:
-- Enable pg_stat_statements (run once)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 10 slowest queries by total execution time
SELECT
queryid,
calls,
round(total_exec_time::numeric, 2) AS total_ms,
round(mean_exec_time::numeric, 2) AS avg_ms,
round(max_exec_time::numeric, 2) AS max_ms,
rows AS total_rows,
round(100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct,
left(query, 150) AS query_preview
FROM pg_stat_statements
WHERE userid = (SELECT usesysid FROM pg_user WHERE usename = current_user)
ORDER BY total_exec_time DESC
LIMIT 10;
-- Top queries by frequency (most called)
SELECT
queryid,
calls,
round(mean_exec_time::numeric, 2) AS avg_ms,
rows / nullif(calls, 0) AS rows_per_call,
left(query, 150) AS query_preview
FROM pg_stat_statements
WHERE calls > 100
ORDER BY calls DESC
LIMIT 10;
-- Queries with poor cache hit ratio (reading from disk)
SELECT
queryid,
calls,
shared_blks_hit,
shared_blks_read,
round(100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct,
left(query, 150) AS query_preview
FROM pg_stat_statements
WHERE shared_blks_read > 100
ORDER BY shared_blks_read DESC
LIMIT 10;
-- Reset statistics after optimization (to measure improvement)
-- SELECT pg_stat_statements_reset();
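If you post-process these rows in application code (for example, rows returned by an RPC wrapping the query above), the cache-hit math can be mirrored client-side. A minimal sketch; the StatRow shape and the thresholds are assumptions for illustration, not part of the pg_stat_statements API:

```typescript
// Shape of a row returned by the pg_stat_statements queries above (assumed).
interface StatRow {
  queryid: string;
  calls: number;
  shared_blks_hit: number;
  shared_blks_read: number;
}

// Cache hit percentage, mirroring the SQL expression
// 100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0).
function cacheHitPct(hit: number, read: number): number | null {
  const total = hit + read;
  if (total === 0) return null; // matches SQL nullif -> NULL
  return Math.round((100 * hit) / total * 100) / 100;
}

// Flag rows that are both frequently called and mostly reading from disk.
function flagDiskHeavy(rows: StatRow[], minCalls = 100, maxHitPct = 90): StatRow[] {
  return rows.filter((r) => {
    const pct = cacheHitPct(r.shared_blks_hit, r.shared_blks_read);
    return r.calls >= minCalls && pct !== null && pct < maxHitPct;
  });
}
```

Rows flagged by flagDiskHeavy are the same ones the "poor cache hit ratio" query surfaces, which makes it easy to alert on them from a scheduled job.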
EXPLAIN ANALYZE for specific slow queries:
-- Run EXPLAIN ANALYZE on the suspicious query
EXPLAIN (ANALYZE, BUFFERS, TIMING, FORMAT TEXT)
SELECT p.*, count(o.id) AS order_count
FROM profiles p
LEFT JOIN orders o ON o.user_id = p.id
WHERE p.created_at > now() - interval '30 days'
GROUP BY p.id
ORDER BY order_count DESC
LIMIT 50;
-- What to look for in the output:
-- 1. Seq Scan on large table → needs an index
-- 2. Nested Loop with high actual rows → consider Hash Join
-- 3. Sort with "Sort Method: external merge" → increase work_mem or add index
-- 4. Buffers read >> shared hit → data not cached, optimize query or increase shared_buffers
-- Create a targeted index based on EXPLAIN output
-- (note: CONCURRENTLY cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY idx_profiles_created_at
ON profiles(created_at DESC);
CREATE INDEX CONCURRENTLY idx_orders_user_id
ON orders(user_id);
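To triage plans programmatically, EXPLAIN (FORMAT JSON) returns a nested plan tree you can walk to surface the first red flag in the checklist above. A sketch assuming you already have the parsed JSON plan node; the PlanNode interface models only the keys used here:

```typescript
// Minimal shape of a node in EXPLAIN (ANALYZE, FORMAT JSON) output.
interface PlanNode {
  "Node Type": string;
  "Actual Rows"?: number;
  "Relation Name"?: string;
  Plans?: PlanNode[];
}

// Walk the plan tree and collect sequential scans over large row counts,
// i.e. candidates for a targeted index like the ones created above.
function findLargeSeqScans(node: PlanNode, minRows = 10000): string[] {
  const hits: string[] = [];
  if (node["Node Type"] === "Seq Scan" && (node["Actual Rows"] ?? 0) >= minRows) {
    hits.push(node["Relation Name"] ?? "unknown");
  }
  for (const child of node.Plans ?? []) {
    hits.push(...findLargeSeqScans(child, minRows));
  }
  return hits;
}
```

The same traversal extends naturally to the other checklist items (external merge sorts, nested loops with high actual rows) by matching on additional node keys.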
Monitor query performance from the SDK:
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!,
{ auth: { autoRefreshToken: false, persistSession: false } }
);
// Wrapper that measures and logs query performance
async function timedQuery<T>(
label: string,
// PromiseLike, because Supabase query builders are thenable rather than true Promises
queryFn: () => PromiseLike<{ data: T | null; error: any }>
): Promise<T | null> {
const start = performance.now();
const { data, error } = await queryFn();
const duration = Math.round(performance.now() - start);
if (duration > 500) {
console.warn(`[SLOW QUERY] ${label}: ${duration}ms`);
}
if (error) {
console.error(`[QUERY ERROR] ${label}:`, error.message);
return null;
}
return data;
}
// Usage
const profiles = await timedQuery('recent-profiles', () =>
supabase
.from('profiles')
.select('*, orders(count)')
.gte('created_at', new Date(Date.now() - 30 * 86400000).toISOString())
.order('created_at', { ascending: false })
.limit(50)
);
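The 500 ms threshold flags individual outliers; to spot gradual drift you can also aggregate durations per label and summarize them as percentiles. A hedged sketch (the in-memory Map and nearest-rank percentile method are illustrative choices, not part of the SDK):

```typescript
// Durations recorded per query label, e.g. from a timing wrapper.
const samples = new Map<string, number[]>();

function record(label: string, ms: number): void {
  const arr = samples.get(label) ?? [];
  arr.push(ms);
  samples.set(label, arr);
}

// Nearest-rank percentile over recorded durations for a label.
function percentile(label: string, p: number): number | null {
  const arr = samples.get(label);
  if (!arr || arr.length === 0) return null;
  const sorted = [...arr].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}
```

Calling record(label, duration) inside the timing wrapper and logging percentile(label, 95) periodically turns one-off slow-query warnings into a trend you can compare against pg_stat_statements.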
Step 2: Lock Debugging and Connection Leak Detection
Find blocked queries, detect lock contention, and identify connection leaks that exhaust the pool.
Lock contention detection:
-- Find blocked queries and what's blocking them
SELECT
blocked.pid AS blocked_pid,
blocked.usename AS blocked_user,
age(now(), blocked.query_start)::text AS blocked_duration,
left(blocked.query, 100) AS blocked_query,
blocking.pid AS blocking_pid,
blocking.usename AS blocking_user,
left(blocking.query, 100) AS blocking_query,
bl.mode AS lock_mode
FROM pg_stat_activity blocked
JOIN pg_locks bl ON bl.pid = blocked.pid AND NOT bl.granted
JOIN pg_locks kl ON kl.locktype = bl.locktype
AND kl.database IS NOT DISTINCT FROM bl.database
AND kl.relation IS NOT DISTINCT FROM bl.relation
AND kl.page IS NOT DISTINCT FROM bl.page
AND kl.tuple IS NOT DISTINCT FROM bl.tuple
AND kl.virtualxid IS NOT DISTINCT FROM bl.virtualxid
AND kl.transactionid IS NOT DISTINCT FROM bl.transactionid
AND kl.pid != bl.pid
AND kl.granted
JOIN pg_stat_activity blocking ON blocking.pid = kl.pid
WHERE blocked.state = 'active';
-- Check all locks on a specific table
SELECT
l.locktype, l.mode, l.granted, l.pid,
a.usename, a.state,
age(now(), a.query_start)::text AS duration,
left(a.query, 80) AS query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'orders'::regclass
ORDER BY l.granted, a.query_start;
-- Detect potential deadlocks
SELECT
l1.pid AS pid1, l2.pid AS pid2,
l1.mode AS lock1, l2.mode AS lock2,
l1.relation::regclass AS table1,
l2.relation::regclass AS table2
FROM pg_locks l1
JOIN pg_locks l2 ON l1.pid != l2.pid
AND l1.relation = l2.relation
AND NOT l1.granted AND l2.granted
WHERE l1.locktype = 'relation';
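The pairwise query above can report conflicts that are not true deadlocks; an actual deadlock is a cycle in the wait-for graph. Given (waiter, holder) pid pairs extracted from pg_locks, cycle detection can be sketched like this (the edge format is an assumption about how you export the query results):

```typescript
// Edge: a waiter pid pointing at the pid holding the lock it wants.
type Edge = [waiter: number, holder: number];

// Detect a cycle in the wait-for graph via depth-first search;
// a back edge to a pid still on the current path means a deadlock.
function hasDeadlock(edges: Edge[]): boolean {
  const graph = new Map<number, number[]>();
  for (const [w, h] of edges) {
    const out = graph.get(w) ?? [];
    out.push(h);
    graph.set(w, out);
  }
  const visiting = new Set<number>(); // pids on the current DFS path
  const done = new Set<number>();     // pids fully explored, no cycle found
  const visit = (pid: number): boolean => {
    if (visiting.has(pid)) return true; // back edge -> cycle
    if (done.has(pid)) return false;
    visiting.add(pid);
    for (const next of graph.get(pid) ?? []) {
      if (visit(next)) return true;
    }
    visiting.delete(pid);
    done.add(pid);
    return false;
  };
  for (const pid of graph.keys()) {
    if (visit(pid)) return true;
  }
  return false;
}
```

Note that PostgreSQL's own deadlock detector will break genuine cycles after deadlock_timeout; this kind of offline check is mainly useful when analyzing captured lock snapshots in an evidence bundle.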
Connection leak detection:
-- Connections that have been idle for too long (likely leaks)
SELECT
pid, usename, client_addr, state,
age(now(), state_change)::text AS idle_time,
age(now(), backend_start)::text AS connection_age,
left(query, 100) AS last_query
FROM pg_stat_activity
WHERE state = 'idle'
AND age(now(), state_change) > interval '5 minutes'
AND datname = current_database()
ORDER BY state_change;
-- Connections stuck in "idle in transaction" (the worst kind of leak)
SELECT
pid, usename, client_addr,
age(now(), xact_start)::text AS transaction_duration,
age(now(), state_change)::text AS idle_in_tx_time,
left(query, 100) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;
-- Connection usage by application/user
SELECT
usename,
client_addr,
state,
count(*) AS connections
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY usename, client_addr, state
ORDER BY connections DESC;
-- Kill leaked connections (batch)
-- SELECT pg_terminate_backend(pid)
-- FROM pg_stat_activity
-- WHERE state = 'idle in transaction'
-- AND age(now(), state_change) > interval '10 minutes';
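The same cutoffs used in these queries (5 minutes idle, 10 minutes idle in transaction) can be applied when classifying exported pg_stat_activity rows in application code before deciding what to terminate. A sketch; the ActivityRow shape and the cutoffs are illustrative:

```typescript
// Assumed shape of an exported pg_stat_activity row.
interface ActivityRow {
  pid: number;
  state: string;        // 'active' | 'idle' | 'idle in transaction' | ...
  idle_seconds: number; // age(now(), state_change) in seconds
}

// Mirror the SQL thresholds above: idle-in-transaction past 10 minutes is
// the worst kind of leak, plain idle past 5 minutes is merely suspicious.
function classifyConnection(row: ActivityRow): "leak" | "suspect" | "ok" {
  if (row.state === "idle in transaction" && row.idle_seconds > 600) return "leak";
  if (row.state === "idle" && row.idle_seconds > 300) return "suspect";
  return "ok";
}
```

Filtering for "leak" gives you the pid list to feed into pg_terminate_backend, while "suspect" rows are better logged first so you can trace them back to the client that opened them.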
Connection pool monitoring from the SDK:
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!,
{ auth: { autoRefreshToken: false, persistSession: false } }
);
// Monitor connection pool health
async function checkConnectionPool() {
const { data, error } = await supabase.rpc('get_connection_health');
if (error) {
console.error('Connection health check failed:', error.message);
return;
}
const health = data as {
active: number;
idle: number;
idle_in_transaction: number;
total: number;
max_connections: number;
};
const utilization = (health.total / health.max_connections) * 100;
console.log('Connection pool:', {
...health,
utilization: `${utilization.toFixed(1)}%`,
});
if (health.idle_in_transaction > 0) {
console.warn(`WARNING: ${health.idle_in_transaction} idle-in-transaction connections (likely leaks)`);
}
if (utilization > 80) {
console.warn(`WARNING: Connection pool at ${utilization.toFixed(1)}% capacity`);
}
}
// Database function for the RPC call:
// CREATE OR REPLACE FUNCTION get_connection_health()
// RETURNS json AS $$
// SELECT json_build_object(
// 'active', (SELECT count(*) FROM pg_stat_activity WHERE state = 'active' AND datname = current_database()),
// 'idle', (SELECT count(*) FROM pg_stat_activity WHERE state = 'idle' AND datname = current_database()),
// 'idle_in_transaction', (SELECT count(*) FROM pg_stat_activity WHERE state = 'idle in transaction' AND datname = current_database()),
// 'total', (SELECT count(*) FROM pg_stat_activity WHERE datname = current_database()),
// 'max_connections', (SELECT setting::int FROM pg_settings WHERE name = 'max_connections')
// );
// $$ LANGUAGE sql SECURITY DEFINER;
Step 3: RLS Conflicts, Edge Function Cold Starts, and Realtime Drops
Diagnose RLS policy conflicts that cause unexpected access patterns, profile Edge Function cold starts, and investigate Realtime connection drops.
RLS policy conflict analysis:
-- List ALL policies on a table to find conflicts
SELECT
pol.polname AS policy_name,
CASE pol.polcmd
WHEN 'r' THEN 'SELECT'
WHEN 'a' THEN 'INSERT'
WHEN 'w' THEN 'UPDATE'
WHEN 'd' THEN 'DELETE'
WHEN '*' THEN 'ALL'
END AS command,
CASE pol.polpermissive
WHEN true THEN 'PERMISSIV
---
*Content truncated.*