aws-aurora
AWS Aurora Serverless v2, RDS Proxy, Data API, connection pooling
Install
mkdir -p .claude/skills/aws-aurora && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5669" && unzip -o skill.zip -d .claude/skills/aws-aurora && rm skill.zip

Installs to .claude/skills/aws-aurora
About this skill
AWS Aurora Skill
Load with: base.md + [typescript.md | python.md]
Amazon Aurora is a MySQL/PostgreSQL-compatible relational database with serverless scaling, high availability, and enterprise features.
Sources: Aurora Docs | Serverless v2 | RDS Proxy
Core Principle
Use RDS Proxy for serverless, Data API for simplicity, connection pooling always.
Aurora excels at ACID-compliant workloads. For serverless architectures (Lambda), always use RDS Proxy or Data API to handle connection management. Never open raw connections from Lambda functions.
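Both strategies lean on the fact that Lambda freezes and reuses execution environments between invocations, so anything cached at module scope survives warm starts. A minimal, generic sketch of that reuse pattern (the `create` factory is a stand-in for whatever client setup you use):

```typescript
// Cache the client at module scope: Lambda reuses the execution
// environment on warm invocations, so `cached` survives between calls
// and connection setup runs only once per cold start.
let cached: unknown = null;

export async function getClient<T>(create: () => Promise<T>): Promise<T> {
  if (cached === null) {
    cached = await create(); // runs once per cold start
  }
  return cached as T;
}
```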
Aurora Options
| Option | Best For |
|---|---|
| Aurora Serverless v2 | Variable workloads, auto-scaling (0.5-128 ACUs) |
| Aurora Provisioned | Predictable workloads, maximum performance |
| Aurora Global | Multi-region, disaster recovery |
| Data API | Serverless without VPC, simple HTTP access |
| RDS Proxy | Connection pooling for Lambda, high concurrency |
Connection Strategies
Strategy 1: RDS Proxy (Recommended for Lambda)
Lambda → RDS Proxy (connection pool) → Aurora
- Connection pooling and reuse
- Automatic failover handling
- IAM authentication support
- Works with existing SQL clients
Strategy 2: Data API (Simplest for Serverless)
Lambda → Data API (HTTP) → Aurora
- No VPC required
- No connection management
- Higher latency per query
- Requires a cluster type that supports the Data API (Serverless v1, or Serverless v2/provisioned on recent engine versions)
Strategy 3: Direct Connection (Not for Lambda)
App Server → Aurora (persistent connections)
- Only for long-running servers (ECS, EC2)
- Manage connection pool yourself
- Not suitable for serverless
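Each strategy differs mainly in who owns the connection pool. For the direct-connection case, where you own it yourself, a toy pool shows the two behaviors that matter: capping concurrent connections and queueing callers past the cap. This is a sketch for intuition only; in practice use a battle-tested pool such as pg's.

```typescript
// A toy connection pool: hands out up to `max` connections and queues
// callers beyond that until a connection is released.
class SimplePool<T> {
  private idle: T[] = [];
  private waiting: ((conn: T) => void)[] = [];
  private open = 0;

  constructor(private factory: () => T, private max: number) {}

  acquire(): Promise<T> {
    if (this.idle.length > 0) return Promise.resolve(this.idle.pop()!);
    if (this.open < this.max) {
      this.open += 1;
      return Promise.resolve(this.factory());
    }
    // At capacity: wait until another caller releases a connection.
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(conn: T): void {
    const next = this.waiting.shift();
    if (next) next(conn);      // hand straight to a waiting caller
    else this.idle.push(conn); // or park it as idle
  }
}
```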
RDS Proxy Setup
Create Proxy (AWS Console/CDK)
// CDK example
import * as rds from 'aws-cdk-lib/aws-rds';

const proxy = new rds.DatabaseProxy(this, 'Proxy', {
  proxyTarget: rds.ProxyTarget.fromCluster(cluster),
  secrets: [cluster.secret!],
  vpc,
  securityGroups: [proxySecurityGroup],
  requireTLS: true,
  idleClientTimeout: cdk.Duration.minutes(30),
  maxConnectionsPercent: 90,
  maxIdleConnectionsPercent: 10,
  borrowTimeout: cdk.Duration.seconds(30)
});
Connect via Proxy (TypeScript/Node.js)
// lib/db.ts
import { Pool } from 'pg';
import { Signer } from '@aws-sdk/rds-signer';

const signer = new Signer({
  hostname: process.env.RDS_PROXY_ENDPOINT!,
  port: 5432,
  username: process.env.DB_USER!,
  region: process.env.AWS_REGION!
});

// IAM authentication
async function getPool(): Promise<Pool> {
  const token = await signer.getAuthToken();
  return new Pool({
    host: process.env.RDS_PROXY_ENDPOINT,
    port: 5432,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: token,
    ssl: { rejectUnauthorized: true },
    max: 1, // Single connection per Lambda execution environment
    idleTimeoutMillis: 120000,
    connectionTimeoutMillis: 10000
  });
}

// Usage in Lambda
let pool: Pool | null = null;

export async function handler(event: any) {
  if (!pool) {
    pool = await getPool();
  }
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  return result.rows[0];
}
Proxy Configuration Best Practices
# Key settings for Lambda workloads
MaxConnectionsPercent: 90 # Use most of DB connections
MaxIdleConnectionsPercent: 10 # Keep some idle for bursts
ConnectionBorrowTimeout: 30s # Wait for available connection
IdleClientTimeout: 30min # Close idle proxy connections
# Monitor these CloudWatch metrics:
# - DatabaseConnectionsCurrentlyBorrowed
# - DatabaseConnectionsCurrentlySessionPinned
# - QueryDatabaseResponseLatency
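Session pinning happens when a client does something the proxy cannot safely multiplex (session-level SET statements, temporary tables, and the like), forcing a dedicated database connection and defeating pooling. A hypothetical helper for reading the two connection metrics together:

```typescript
// Fraction of borrowed proxy connections currently pinned to a session.
// A persistently high ratio means queries are defeating multiplexing.
export function pinnedRatio(borrowed: number, pinned: number): number {
  if (borrowed <= 0) return 0;
  return pinned / borrowed;
}

// Flag when more than `threshold` of borrowed connections are pinned.
export function pinningAlert(borrowed: number, pinned: number, threshold = 0.5): boolean {
  return pinnedRatio(borrowed, pinned) > threshold;
}
```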
Data API (HTTP-based)
Enable Data API
# Requires a cluster type that supports the Data API
aws rds modify-db-cluster \
  --db-cluster-identifier my-cluster \
  --enable-http-endpoint
TypeScript with Data API Client v2
npm install data-api-client
// lib/db.ts
import DataAPIClient from 'data-api-client';

const db = DataAPIClient({
  secretArn: process.env.DB_SECRET_ARN!,
  resourceArn: process.env.DB_CLUSTER_ARN!,
  database: process.env.DB_NAME!,
  region: process.env.AWS_REGION!
});

// Simple query
const users = await db.query('SELECT * FROM users WHERE active = :active', {
  active: true
});

// Insert with RETURNING
const result = await db.query(
  'INSERT INTO users (email, name) VALUES (:email, :name) RETURNING *',
  { email: 'user@test.com', name: 'Test User' }
);
// Transaction: data-api-client uses a chained builder. Queries are
// queued and executed when commit() is called; on failure the library
// rolls back and invokes the rollback() callback.
await db
  .transaction()
  .query('UPDATE accounts SET balance = balance - :amount WHERE id = :from', {
    amount: 100, from: 1
  })
  .query('UPDATE accounts SET balance = balance + :amount WHERE id = :to', {
    amount: 100, to: 2
  })
  .rollback((e, status) => {
    console.error('transaction rolled back', e);
  })
  .commit();
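Under the hood, data-api-client translates the named parameters above into the Data API's typed `SqlParameter` format, the same shape the boto3 examples below build by hand. A hedged sketch of that marshalling (a simplified approximation, not the library's actual code):

```typescript
type SqlValue =
  | { stringValue: string }
  | { longValue: number }
  | { doubleValue: number }
  | { booleanValue: boolean }
  | { isNull: boolean };

// Convert a plain object into the Data API's typed parameter list,
// approximating what data-api-client does before ExecuteStatement.
export function toSqlParameters(
  params: Record<string, unknown>
): { name: string; value: SqlValue }[] {
  return Object.entries(params).map(([name, v]) => {
    let value: SqlValue;
    if (v === null || v === undefined) value = { isNull: true };
    else if (typeof v === 'boolean') value = { booleanValue: v };
    else if (typeof v === 'number')
      value = Number.isInteger(v) ? { longValue: v } : { doubleValue: v };
    else value = { stringValue: String(v) };
    return { name, value };
  });
}
```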
Python with boto3
# requirements.txt
boto3>=1.34.0
# db.py
import boto3
import os

rds_data = boto3.client('rds-data')

CLUSTER_ARN = os.environ['DB_CLUSTER_ARN']
SECRET_ARN = os.environ['DB_SECRET_ARN']
DATABASE = os.environ['DB_NAME']

def execute_sql(sql: str, parameters: list | None = None):
    """Execute SQL via Data API."""
    params = {
        'resourceArn': CLUSTER_ARN,
        'secretArn': SECRET_ARN,
        'database': DATABASE,
        'sql': sql
    }
    if parameters:
        params['parameters'] = parameters
    return rds_data.execute_statement(**params)

def get_user(user_id: int):
    result = execute_sql(
        'SELECT * FROM users WHERE id = :id',
        [{'name': 'id', 'value': {'longValue': user_id}}]
    )
    return result.get('records', [])

def create_user(email: str, name: str):
    result = execute_sql(
        'INSERT INTO users (email, name) VALUES (:email, :name) RETURNING *',
        [
            {'name': 'email', 'value': {'stringValue': email}},
            {'name': 'name', 'value': {'stringValue': name}}
        ]
    )
    # With PostgreSQL, RETURNING rows come back in 'records';
    # 'generatedFields' is only populated for MySQL
    return result.get('records', [])
# Transaction: each statement must carry the transactionId, otherwise it
# auto-commits outside the transaction
def transfer_funds(from_id: int, to_id: int, amount: float):
    transaction = rds_data.begin_transaction(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database=DATABASE
    )
    transaction_id = transaction['transactionId']
    try:
        rds_data.execute_statement(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            database=DATABASE,
            transactionId=transaction_id,
            sql='UPDATE accounts SET balance = balance - :amount WHERE id = :id',
            parameters=[
                {'name': 'amount', 'value': {'doubleValue': amount}},
                {'name': 'id', 'value': {'longValue': from_id}}
            ]
        )
        rds_data.execute_statement(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            database=DATABASE,
            transactionId=transaction_id,
            sql='UPDATE accounts SET balance = balance + :amount WHERE id = :id',
            parameters=[
                {'name': 'amount', 'value': {'doubleValue': amount}},
                {'name': 'id', 'value': {'longValue': to_id}}
            ]
        )
        rds_data.commit_transaction(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            transactionId=transaction_id
        )
    except Exception:
        rds_data.rollback_transaction(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            transactionId=transaction_id
        )
        raise
Prisma with Aurora
Setup (VPC Connection via RDS Proxy)
npm install prisma @prisma/client
npx prisma init
// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String
  posts     Post[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

model Post {
  id        Int      @id @default(autoincrement())
  title     String
  content   String?
  published Boolean  @default(false)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
  createdAt DateTime @default(now())
}
Environment
# Use RDS Proxy endpoint
DATABASE_URL="postgresql://user:password@proxy-endpoint.proxy-xxx.region.rds.amazonaws.com:5432/mydb?schema=public&connection_limit=1"
Lambda Handler with Prisma
// handlers/users.ts
import { PrismaClient } from '@prisma/client';

// Reuse client across invocations
let prisma: PrismaClient | null = null;

function getPrisma(): PrismaClient {
  if (!prisma) {
    prisma = new PrismaClient({
      datasources: {
        db: { url: process.env.DATABASE_URL }
      }
    });
  }
  return prisma;
}

export async function handler(event: any) {
  const db = getPrisma();
  const users = await db.user.findMany({
    include: { posts: true },
    take: 10
  });
  return {
    statusCode: 200,
    body: JSON.stringify(users)
  };
}
Aurora Serverless v2
Capacity Configuration
// CDK
const cluster = new rds.DatabaseCluster(this, 'Cluster', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_15_4
  }),
  serverlessV2MinCapacity: 0.5, // Minimum ACUs
  serverlessV2MaxCapacity: 16,  // Maximum ACUs
  writer: rds.ClusterInstance.serverlessV2('writer'),
  readers: [
    rds.ClusterInstance.serverlessV2('reader', { scaleWithWriter: true })
  ],
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }
});
Capacity Guidelines
| Workload | Min ACUs | Max ACUs |
|---|---|---|
| Dev/Test | 0.5 | 2 |
| Small Production | 2 | 8 |
| Medium Production | 4 | 32 |
| Large Production | 8 | 128 |
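One ACU corresponds to roughly 2 GiB of memory with matching CPU and network. A rough, assumption-laden heuristic for turning a memory target into a max-capacity setting (not official AWS sizing guidance):

```typescript
// Rough heuristic: 1 ACU ≈ 2 GiB of memory. Round up to the 0.5-ACU
// granularity Aurora Serverless v2 scales in, and clamp to its limits.
export function acusForMemoryGiB(gib: number, minAcu = 0.5, maxAcu = 128): number {
  const raw = Math.ceil((gib / 2) * 2) / 2; // round up to the next 0.5 ACU
  return Math.min(maxAcu, Math.max(minAcu, raw));
}
```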
Handle Scale-to-Zero Wake-up
// Data API Client v2 handles this automatically
// For direct connections, implement retry logic (a minimal sketch):
import { Pool } from 'pg';

async function queryWithRetry(pool: Pool, sql: string, params: any[] = [], retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await pool.query(sql, params);
    } catch (err) {
      if (attempt >= retries) throw err;
      // Back off while the cluster resumes capacity
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
    }
  }
}