supabase-rate-limits


Implement Supabase rate limiting, backoff, and idempotency patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Supabase. Trigger with phrases like "supabase rate limit", "supabase throttling", "supabase 429", "supabase retry", "supabase backoff".

Install

mkdir -p .claude/skills/supabase-rate-limits && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5400" && unzip -o skill.zip -d .claude/skills/supabase-rate-limits && rm skill.zip

Installs to .claude/skills/supabase-rate-limits

About this skill

Supabase Rate Limits

Overview

Supabase enforces rate limits and quotas across every API surface — PostgREST, Auth, Storage, Realtime, and Edge Functions. Limits scale by plan tier. This skill covers the exact numbers per tier, connection pooling via Supavisor, retry/backoff patterns, pagination to reduce payload, and dashboard monitoring so you can stay within quotas and handle 429 errors gracefully.

Prerequisites

  • Active Supabase project (any tier)
  • @supabase/supabase-js v2+ installed
  • Project URL and anon/service-role key available
  • Node.js 18+ or equivalent runtime

Instructions

Step 1 — Understand Rate Limits by Tier and Surface

Every Supabase project has per-surface limits that differ by plan. Know these numbers before you architect:

API Request Limits

| Metric | Free | Pro | Enterprise |
| --- | --- | --- | --- |
| Requests per minute (RPM) | 500 | 5,000 | Unlimited (custom) |
| Requests per day (RPD) | 50,000 | 1,000,000 | Unlimited (custom) |
Auth Rate Limits

| Endpoint | Free | Pro |
| --- | --- | --- |
| Signup | 30/hour per IP | Higher (configurable) |
| Sign-in (password) | 30/hour per IP | Higher (configurable) |
| Magic link / OTP | 4/hour per user | Configurable |
| Token refresh | 360/hour | 360/hour |

Auth limits are per-IP and per-user. Configure custom limits in Dashboard > Authentication > Rate Limits.
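
Before retrying, it helps to distinguish an auth rate-limit hit from an ordinary auth failure (which should never be retried). The sketch below assumes the error shape exposed by supabase-js v2 (`status`, `code`, `message` on `AuthError`); the specific `code` string values are assumptions based on Supabase's documented error names, so verify them against your project's responses.

```typescript
// Hypothetical helper: classify a supabase-js auth error as a rate-limit hit.
// The `code` values checked here are assumptions; confirm against real errors.
interface AuthErrorLike {
  status?: number
  code?: string
  message: string
}

function isAuthRateLimited(error: AuthErrorLike): boolean {
  return (
    error.status === 429 ||
    error.code === 'over_request_rate_limit' ||
    error.code === 'over_email_send_rate_limit' ||
    /rate limit/i.test(error.message)
  )
}

// Usage sketch (assumes a configured client):
// const { error } = await supabase.auth.signInWithOtp({ email })
// if (error && isAuthRateLimited(error)) {
//   // Surface a "try again later" message instead of retrying immediately:
//   // OTP limits are per-user, so an instant retry will also be rejected.
// }
```

Because OTP and magic-link limits are per-user and per-hour, backing off for seconds does not help; treat these as a signal to the user rather than a transient error.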

Storage Bandwidth

| Metric | Free | Pro |
| --- | --- | --- |
| Storage size | 1 GB | 100 GB |
| Bandwidth | 2 GB/month | 250 GB/month |
| Max file size | 50 MB | 5 GB |
| Upload rate | Shared with API RPM | Shared with API RPM |

Realtime Connections

| Metric | Free | Pro |
| --- | --- | --- |
| Concurrent connections | 200 | 500 |
| Messages per second | 100 | 500 |
| Channel joins | Shared with connection limit | Shared |

Edge Functions

| Metric | Free | Pro |
| --- | --- | --- |
| Invocations/month | 500,000 | 2,000,000 |
| Execution time | 150s wall / 50ms CPU | 150s wall / 2s CPU |
| Memory | 256 MB | 256 MB |

Database Connections

| Mode | Free | Pro |
| --- | --- | --- |
| Direct connections | 60 | 100+ |
| Pooled connections (Supavisor) | 200 | 1,500+ |

Step 2 — Configure Connection Pooling with Supavisor

Supavisor is Supabase's built-in connection pooler, which replaced PgBouncer. It supports two modes:

Transaction mode (port 6543) — recommended for serverless:

import { createClient } from '@supabase/supabase-js'

// supabase-js talks to PostgREST over HTTP, so the client itself does not hold
// Postgres connections — Supavisor applies when you connect to Postgres
// directly (ORMs, workers, migrations).
const supabase = createClient(
  'https://your-project.supabase.co',
  process.env.SUPABASE_ANON_KEY!
)

// Transaction mode: connections are returned to the pool after each transaction.
// Best for: serverless functions, Edge Functions, high-concurrency apps.
// Pooler connection string (port 6543):
// postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres

// For direct Postgres clients (e.g., Prisma, Drizzle), append pgbouncer=true:
// postgresql://...@pooler.supabase.com:6543/postgres?pgbouncer=true

Session mode (port 5432) — for LISTEN/NOTIFY and prepared statements:

// Session mode: dedicated connection per client session
// Best for: long-lived connections, LISTEN/NOTIFY, prepared statements
// Connection string: postgresql://...@pooler.supabase.com:5432/postgres

When to use which mode:

| Use case | Mode | Port |
| --- | --- | --- |
| Serverless / Edge Functions | Transaction | 6543 |
| Next.js API routes | Transaction | 6543 |
| Long-running workers | Session | 5432 |
| Realtime subscriptions | Direct (no pooler) | 5432 |
| Prisma / Drizzle ORM | Transaction + ?pgbouncer=true | 6543 |
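
The pooler connection strings shown above follow a fixed pattern, so they can be assembled with a small helper. This is a sketch: `projectRef`, `region`, and `password` are placeholders you must substitute, and the `aws-0-[region]` host format is taken from the string shown earlier in this step.

```typescript
// Hypothetical helper assembling a Supavisor connection string from its parts.
// All inputs are placeholders; the host pattern matches the format shown above.
type PoolMode = 'transaction' | 'session'

function poolerUrl(opts: {
  projectRef: string
  region: string
  password: string
  mode: PoolMode
  pgbouncerFlag?: boolean // append ?pgbouncer=true for Prisma/Drizzle
}): string {
  // Transaction mode listens on 6543, session mode on 5432
  const port = opts.mode === 'transaction' ? 6543 : 5432
  const base =
    `postgresql://postgres.${opts.projectRef}:${opts.password}` +
    `@aws-0-${opts.region}.pooler.supabase.com:${port}/postgres`
  return opts.pgbouncerFlag ? `${base}?pgbouncer=true` : base
}

// Usage sketch:
// DATABASE_URL = poolerUrl({ projectRef: 'abcd1234', region: 'us-east-1',
//                            password: process.env.DB_PASSWORD!,
//                            mode: 'transaction', pgbouncerFlag: true })
```

Keeping the mode-to-port mapping in one function avoids the common mistake of pointing a serverless deployment at port 5432 and exhausting the session pool.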

Step 3 — Implement Retry, Pagination, and Monitoring

Retry with exponential backoff for 429 errors:

import { createClient, SupabaseClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
)

interface RetryConfig {
  maxRetries: number
  baseDelayMs: number
  maxDelayMs: number
}

async function withRetry<T>(
  operation: () => Promise<{ data: T | null; error: any }>,
  config: RetryConfig = { maxRetries: 3, baseDelayMs: 500, maxDelayMs: 10_000 }
): Promise<T> {
  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    const { data, error } = await operation()

    if (!error) return data as T

    const isRetryable =
      error.message?.includes('rate limit') ||
      error.message?.includes('too many requests') ||
      error.code === '429' ||
      error.code === 'PGRST000'  // connection pool exhausted

    if (!isRetryable || attempt === config.maxRetries) {
      throw new Error(`Supabase error after ${attempt + 1} attempts: ${error.message}`)
    }

    // Check Retry-After header if available
    const retryAfter = error.details?.retryAfter
    const delay = retryAfter
      ? retryAfter * 1000
      : Math.min(
          config.baseDelayMs * Math.pow(2, attempt) + Math.random() * 200,
          config.maxDelayMs
        )

    console.warn(`[supabase-retry] Attempt ${attempt + 1}/${config.maxRetries + 1}, waiting ${Math.round(delay)}ms`)
    await new Promise((resolve) => setTimeout(resolve, delay))
  }

  throw new Error('Unreachable')
}

// Usage — wraps any Supabase query
const users = await withRetry(() =>
  supabase.from('users').select('id, email, created_at').eq('active', true)
)

Pagination to reduce payload and stay within limits:

// Use .range() to paginate — reduces response size and avoids timeouts
async function fetchPaginated<T>(
  table: string,
  pageSize = 100,
  filters?: (query: any) => any
): Promise<T[]> {
  const allRows: T[] = []
  let from = 0

  while (true) {
    let query = supabase.from(table).select('*', { count: 'exact' })
    if (filters) query = filters(query)

    const { data, error, count } = await query.range(from, from + pageSize - 1)

    if (error) throw error
    if (!data || data.length === 0) break

    allRows.push(...(data as T[]))
    from += pageSize

    // Stop if we've fetched everything
    if (count !== null && from >= count) break
  }

  return allRows
}

// Usage
const allProducts = await fetchPaginated('products', 100, (q) =>
  q.eq('status', 'active').order('created_at', { ascending: false })
)

// Simple single-page fetch with .range()
const { data } = await supabase
  .from('orders')
  .select('id, total, status')
  .range(0, 99)  // First 100 rows (0-indexed)
  .order('created_at', { ascending: false })

Monitor usage via the Dashboard:

  1. Navigate to Dashboard > Reports > API Usage
  2. Check the "API Requests" chart for RPM/RPD trends
  3. Review "Database" section for connection count and pool utilization
  4. Set up alerts in Dashboard > Settings > Notifications for:
    • API request threshold (e.g., 80% of RPM limit)
    • Database connection saturation
    • Storage bandwidth approaching limit
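
Dashboard alerts tell you after the fact; you can also pre-throttle on the client so you never reach the RPM ceiling. Below is a minimal token-bucket sketch (not a Supabase API, just local pacing logic); the 450-per-minute figure in the usage note is an assumed safety margin under the free tier's 500 RPM from Step 1.

```typescript
// Minimal client-side token bucket: caps outbound requests at `ratePerMinute`
// so requests are paced below the tier limit before Supabase returns a 429.
// Sketch only — a multi-process deployment would need shared coordination.
class TokenBucket {
  private tokens: number
  private lastRefill: number

  constructor(private ratePerMinute: number, now = Date.now()) {
    this.tokens = ratePerMinute
    this.lastRefill = now
  }

  /** Returns true if a request may proceed at time `now` (ms). */
  tryAcquire(now = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at the bucket size
    const elapsedMin = (now - this.lastRefill) / 60_000
    this.tokens = Math.min(this.ratePerMinute, this.tokens + elapsedMin * this.ratePerMinute)
    this.lastRefill = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}

// Usage sketch: gate each Supabase call on the bucket
// const bucket = new TokenBucket(450) // headroom under the 500 RPM free tier
// if (bucket.tryAcquire()) await supabase.from('items').select('id')
```

Pre-throttling complements (but does not replace) the `withRetry` wrapper above: the bucket keeps steady-state traffic under the limit, while the retry handles the bursts that slip through.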

Batch operations to reduce request count:

// BAD: N individual inserts = N requests against your RPM
// for (const item of items) await supabase.from('items').insert(item)

// GOOD: single batch insert (max ~1000 rows per request)
const { data, error } = await supabase
  .from('items')
  .upsert(batchOfItems, { onConflict: 'external_id' })
  .select()

// For larger batches, chunk into groups
function chunk<T>(arr: T[], size: number): T[][] {
  return Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  )
}

for (const batch of chunk(largeDataset, 500)) {
  await withRetry(() =>
    supabase.from('items').upsert(batch, { onConflict: 'external_id' }).select()
  )
}

Output

After applying this skill you will have:

  • Clear understanding of rate limits per tier (Free: 500 RPM / 50K RPD, Pro: 5K RPM / 1M RPD)
  • Connection pooling configured via Supavisor (port 6543 transaction mode for serverless)
  • Retry wrapper with exponential backoff handling 429 errors
  • Paginated queries using .range(0, 99) to reduce payload size
  • Batch upsert pattern reducing N requests to 1
  • Dashboard monitoring configured for API usage alerts

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| 429 Too Many Requests | Exceeded RPM or RPD limit | Apply withRetry backoff; reduce concurrency; upgrade tier |
| PGRST000: could not connect | Connection pool exhausted | Switch to Supavisor transaction mode (port 6543); reduce concurrent queries |
| Auth over_request_rate_limit | Too many signups/logins from one IP | Add CAPTCHA; configure custom auth rate limits in Dashboard |
| Storage 413 Payload Too Large | File exceeds tier limit | Use TUS resumable upload; check tier file size limit |
| Realtime too_many_connections | Concurrent connection limit reached | Unsubscribe unused channels; upgrade to Pro for 500 connections |
| Edge Function BOOT_ERROR | Cold start timeout or memory exceeded | Reduce bundle size; avoid large imports at top level |
| pgbouncer=true errors with Prisma | Missing connection string parameter | Append ?pgbouncer=true to pooler connection string on port 6543 |

Examples

Example 1 — Serverless Edge Function with rate-limit-safe client:

// supabase/functions/process-webhook/index.ts
import { serve } from 'https://deno.land/std@0.177.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'

serve(async (req) => {
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  )

  const payload = await req.json()

  // Batch insert webhook events (single request vs N)
  const { error } = await supabase
    .from('webhook_events')
    .insert(payload.events.map((e: any) => ({
      type: e.type,
      data: e.data,
      received_at: new Date().toISOString(),
    })))

  if (error) {
    console.error('Insert failed:', error.message)
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    })
  }

  return new Response(JSON.stringify({ received: payload.events.length }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  })
})
