local-cluster-manager
Manage local multigres cluster components (multipooler, pgctld, multiorch, multigateway) - start/stop services, view logs, connect with psql, test S3 backups locally
Install
mkdir -p .claude/skills/local-cluster-manager && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2700" && unzip -o skill.zip -d .claude/skills/local-cluster-manager && rm skill.zip
Installs to .claude/skills/local-cluster-manager
About this skill
Local Cluster Manager
Manage local multigres cluster - both cluster-wide operations and individual components.
When to Use This Skill
Invoke this skill when the user asks to:
- Start/stop/restart the entire cluster or individual components
- Start cluster with observability (OTel, Grafana, Prometheus)
- Teardown and restart the full stack (cluster + observability)
- View logs for any component
- Connect to multipooler or multigateway with psql
- Check status of cluster components
- Check multipooler topology status (PRIMARY/REPLICA roles)
- Check if PostgreSQL instances are in recovery mode
- Test S3 backups (initialize cluster with S3, create/list/restore backups)
- Configure or troubleshoot S3 backup settings
Performance Optimization
Parse ./multigres_local/multigres.yaml once when this skill is first invoked and cache the cluster configuration in memory for the duration of the conversation. Use the cached data for all subsequent commands. Only re-parse if the user explicitly asks to "reload config" or if a command fails due to stale config.
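The parse-once-and-cache approach can be sketched in shell. This is a minimal sketch against a hard-coded sample of the YAML layout described in the Configuration section; a real YAML parser (e.g. yq) is safer than awk for nested keys.

```shell
# Sketch: parse multigres.yaml once and cache "zone service-id" pairs.
# The sample file below mimics the assumed layout
# .provisioner-config.cells.<zone>.multipooler.service-id.
CONFIG="$(mktemp)"
cat > "$CONFIG" <<'EOF'
provisioner-config:
  cells:
    zone1:
      multipooler:
        service-id: xf42rpl6
    zone2:
      multipooler:
        service-id: hm9hmxzm
EOF

# Cache built in a single pass; reuse $POOLER_CACHE for later commands.
POOLER_CACHE="$(awk '
  /^    [a-z0-9]+:$/ { zone=$1; sub(":", "", zone) }   # remember current zone
  /service-id:/      { print zone, $2 }                # emit "zone id"
' "$CONFIG")"
echo "$POOLER_CACHE"
```

Subsequent commands read `$POOLER_CACHE` instead of re-reading the file.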
Cluster-Wide Operations
Start entire cluster:
./bin/multigres cluster start
Stop entire cluster:
./bin/multigres cluster stop
Stop entire cluster and delete all cluster data:
./bin/multigres cluster stop --clean
Check cluster status:
./bin/multigres cluster status
Initialize new cluster:
./bin/multigres cluster init
Get all multipoolers from topology:
./bin/multigres getpoolers
Returns JSON with all multipoolers, their cells, service IDs, ports, and pooler directories.
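The JSON from getpoolers can be fed into per-pooler commands. The field names below ("cell", "service_id") are assumptions about the schema, and the demo runs against an inline sample; in real use substitute `$(./bin/multigres getpoolers)` and prefer jq if it is available.

```shell
# Sketch: pull cell names and service IDs out of getpoolers-style JSON.
# Field names are assumed; verify against the actual getpoolers output.
POOLERS_JSON='[{"cell":"zone1","service_id":"xf42rpl6","port":25432},
               {"cell":"zone2","service_id":"hm9hmxzm","port":25433}]'
# In real use: POOLERS_JSON="$(./bin/multigres getpoolers)"
CELLS="$(printf '%s' "$POOLERS_JSON" | grep -o '"cell":"[^"]*"' | cut -d'"' -f4)"
IDS="$(printf '%s' "$POOLERS_JSON" | grep -o '"service_id":"[^"]*"' | cut -d'"' -f4)"
echo "$CELLS"
echo "$IDS"
```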
Get detailed status for a specific multipooler:
./bin/multigres getpoolerstatus --cell <cell-name> --service-id <service-id>
Returns detailed status including:
- pooler_type: 1 = PRIMARY, 2 = REPLICA
- postgres_role: "primary" or "standby"
- postgres_running: whether PostgreSQL is running
- wal_position: current WAL position
- consensus_term: current consensus term
- primary_status: (for PRIMARY) connected followers and sync replication config
- replication_status: (for REPLICA) replication lag and primary connection info
Example:
./bin/multigres getpoolerstatus --cell zone1 --service-id thhcdhbp
Check PostgreSQL recovery mode directly:
psql -h <pooler-dir>/pg_sockets -p <pg-port> -U postgres -d postgres -c "SELECT pg_is_in_recovery();"
Returns t (true) if in recovery/standby mode, f (false) if primary.
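Mapping that t/f flag to a role name is a small helper worth keeping around. A minimal sketch; in real use the value comes from `psql -tAc "SELECT pg_is_in_recovery();"` (the `-tAc` flags give the bare t/f without headers).

```shell
# Sketch: translate pg_is_in_recovery() output into a role label.
role_for() {
  case "$1" in
    t) echo "standby" ;;   # in recovery => standby/replica
    f) echo "primary" ;;   # not in recovery => primary
    *) echo "unknown" ;;   # anything else (error, empty output)
  esac
}
role_for t   # → standby
role_for f   # → primary
```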
S3 Backup Testing
Backups are tested against a real AWS S3 bucket. When the user wants to test S3 backups:
Configuration Caching: When S3 configuration values are first provided, cache them in memory for the duration of the conversation. Reuse these cached values for all subsequent S3 operations. Only re-prompt if:
- The user explicitly asks to change the configuration
- A command fails due to invalid/expired credentials
- The values have never been provided in this conversation
Prompt for S3 configuration using AskUserQuestion (only if not already cached):
- Path to AWS credentials file (e.g., ./.staging-aws or ~/.aws/credentials)
- S3 backup URL (e.g., s3://bucket-name/backups/)
- AWS region (e.g., us-east-1)
Check/source credentials:
# Check if AWS credentials are already set
env | grep AWS_
# If not, source the credentials file (path from user)
source <credentials-file-path>
# Verify credentials are now set
env | grep AWS_
IMPORTANT:
- NEVER commit AWS credentials files to git
- Avoid printing credentials to the terminal
- Credentials file should contain: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN (if using temporary credentials)
Initialize cluster with S3:
./bin/multigres cluster stop --clean
rm -rf multigres_local
./bin/multigres cluster init \
--backup-url=<s3-url-from-user> \
--region=<region-from-user>
Start cluster (use the standard cluster start command).
Verify S3 configuration:
grep -r "aws_access_key_id\|aws_secret_access_key\|region\|repo1-s3" ./multigres_local/data/pooler_*/pgbackrest.conf
Should see AWS credentials and S3 configuration in all pgbackrest.conf files.
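The grep check above can be tightened into a per-file validation. A sketch using pgBackRest's repo1-s3-* option naming; the demo validates a sample file, and in real use you would loop over ./multigres_local/data/pooler_*/pgbackrest.conf instead.

```shell
# Sketch: verify that a pgbackrest.conf carries the expected S3 keys.
check_conf() {
  for key in repo1-s3-bucket repo1-s3-region; do
    grep -q "$key" "$1" || { echo "missing $key in $1"; return 1; }
  done
  echo "ok: $1"
}

# Demo against a generated sample file.
SAMPLE="$(mktemp)"
printf '%s\n' "[global]" "repo1-s3-bucket=my-bucket" "repo1-s3-region=us-east-1" > "$SAMPLE"
check_conf "$SAMPLE"
```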
Backup Commands
Create backup:
./bin/multigres cluster backup
List all backups:
./bin/multigres cluster list-backups
Restore from backup:
./bin/multigres cluster restore --backup-label <label>
Troubleshooting S3 Issues
Missing/expired credentials:
# Re-source credentials file
source <credentials-file-path>
# Verify they're set
env | grep AWS_ | wc -l # Should show 3+ environment variables
# Reinitialize cluster to pick up new credentials
./bin/multigres cluster stop --clean
rm -rf multigres_local
./bin/multigres cluster init --backup-url=<s3-url> --region=<region>
Check pgbackrest logs for errors:
# View recent errors
tail -100 ./multigres_local/data/pooler_*/pg_data/log/pgbackrest-*.log
# Follow logs in real-time
tail -f ./multigres_local/data/pooler_*/pg_data/log/pgbackrest-*.log
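Beyond tailing, it helps to pull only the ERROR lines out of those logs. A sketch over a generated sample log (the error text is illustrative, not a real pgBackRest message); point the glob at the real log path above when running against a live cluster.

```shell
# Sketch: collect ERROR lines across pgbackrest logs.
LOG_DIR="$(mktemp -d)"
printf '%s\n' \
  "INFO: backup command begin" \
  "ERROR: unable to reach S3 repository" \
  "INFO: backup command end" > "$LOG_DIR/pgbackrest-backup.log"

# -h suppresses filename prefixes; || true keeps a clean exit when no errors.
ERRORS="$(grep -h "ERROR" "$LOG_DIR"/pgbackrest-*.log || true)"
echo "$ERRORS"
```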
Verify S3 bucket access:
# Use AWS CLI to test bucket access (if installed)
aws s3 ls <s3-bucket-path> --region <region>
Observability Stack
Start the observability stack (Grafana + Prometheus + Loki + Tempo) for metrics, traces, and logs visualization.
Start cluster with observability:
# 1. Start observability stack (separate terminal, runs in foreground)
demo/local/run-observability.sh
# 2. Start cluster with OTel export (separate terminal)
demo/local/multigres-with-otel.sh cluster start --config-path <config-path>
Generate traffic with pgbench:
PGPASSWORD=postgres pgbench -h localhost -p 15432 -U postgres -i postgres
PGPASSWORD=postgres pgbench -h localhost -p 15432 -U postgres -c 4 -j 2 -T 300 -P 5 postgres
View telemetry:
- Grafana Dashboard: http://localhost:3000/d/multigres-overview
- Grafana Explore (ad-hoc PromQL): http://localhost:3000/explore
- Prometheus UI: http://localhost:9090
Teardown (stop in this order to avoid OTel export errors):
# 1. Stop the cluster first
./bin/multigres cluster stop --config-path <config-path>
# 2. Stop the observability stack
docker rm -f multigres-observability
Full restart:
# Teardown
./bin/multigres cluster stop --config-path <config-path>
docker rm -f multigres-observability
# Start
demo/local/run-observability.sh # terminal 1
demo/local/multigres-with-otel.sh cluster start --config-path <config-path> # terminal 2
Observability ports:
| Service | Port |
|---|---|
| Grafana | 3000 |
| OTLP (HTTP) | 4318 |
| Prometheus | 9090 |
| Loki | 3100 |
| Tempo | 3200 |
Individual Component Operations
Configuration
Parse the config: Read ./multigres_local/multigres.yaml to discover available components and their IDs.
Component ID mapping:
- multipooler IDs: extracted from .provisioner-config.cells.<zone>.multipooler.service-id
- pgctld uses the same IDs as multipooler
- multiorch has separate IDs for each zone
- multigateway has separate IDs for each zone
If no ID provided: Use AskUserQuestion to let the user select which instance to operate on.
- Show available IDs with their zone names
- Example: "xf42rpl6 (zone1)", "hm9hmxzm (zone2)", "n6t8hvgl (zone3)"
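Turning cached zone/ID pairs into those "id (zone)" labels is a one-liner. The pairs here are hard-coded for the demo; in practice they come from the parsed multigres.yaml.

```shell
# Sketch: format "zone id" pairs as the "id (zone)" choices shown to the user.
PAIRS="zone1 xf42rpl6
zone2 hm9hmxzm
zone3 n6t8hvgl"
CHOICES="$(echo "$PAIRS" | awk '{ printf "%s (%s)\n", $2, $1 }')"
echo "$CHOICES"
```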
Commands
Stop pgctld:
./bin/pgctld stop --pooler-dir <pooler-dir-from-config>
Start pgctld:
./bin/pgctld start --pooler-dir <pooler-dir-from-config>
Restart pgctld (as standby):
./bin/pgctld restart --pooler-dir <pooler-dir-from-config> --as-standby
Check pgctld status:
./bin/pgctld status --pooler-dir <pooler-dir-from-config>
View logs:
- multipooler: ./multigres_local/logs/dbs/postgres/multipooler/[id].log
- pgctld: ./multigres_local/logs/dbs/postgres/pgctld/[id].log
- multiorch: ./multigres_local/logs/dbs/postgres/multiorch/[id].log
- multigateway: ./multigres_local/logs/dbs/postgres/multigateway/[id].log
- PostgreSQL: ./multigres_local/data/pooler_[id]/pg_data/postgresql.log
Tail logs:
tail -f <log-path>
Connect to multipooler (via Unix socket):
psql -h <pooler-dir>/pg_sockets -p <pg-port> -U postgres -d postgres
Where:
- pooler-dir is from .provisioner-config.cells.<zone>.multipooler.pooler-dir
- pg-port is from .provisioner-config.cells.<zone>.pgctld.pg-port
- PostgreSQL socket is at <pooler-dir>/pg_sockets/.s.PGSQL.<pg-port>
Example:
psql -h ./multigres_local/data/pooler_xf42rpl6/pg_sockets -p 25432 -U postgres -d postgres
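The socket path psql resolves in the example above can be assembled from the two config values. A minimal sketch, assuming the <pooler-dir>/pg_sockets/.s.PGSQL.<pg-port> layout described above.

```shell
# Sketch: build the Unix socket path from pooler-dir and pg-port.
socket_path() { echo "$1/pg_sockets/.s.PGSQL.$2"; }
socket_path ./multigres_local/data/pooler_xf42rpl6 25432
```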
Connect to multigateway (via TCP):
psql -h localhost -p <pg-port> -U postgres -d postgres
Where:
- pg-port is from .provisioner-config.cells.<zone>.multigateway.pg-port
Example:
psql -h localhost -p 15432 -U postgres -d postgres
Config Paths
Extract from YAML config at .provisioner-config.cells.<zone>.pgctld.pooler-dir
Examples
Cluster-wide:
User: "start the cluster"
- Execute:
./bin/multigres cluster start
User: "stop cluster"
- Execute:
./bin/multigres cluster stop
User: "cluster status"
- Execute:
./bin/multigres cluster status
User: "show me all multipoolers" or "get poolers"
- Execute:
./bin/multigres getpoolers
User: "check if multipoolers are in recovery" or "check multipooler status"
- Parse config to get all zones and service IDs
- Execute: ./bin/multigres getpoolerstatus --cell <zone> --service-id <id> for each
- Display pooler_type (PRIMARY/REPLICA) and postgres_role (primary/standby)
User: "check zone1 multipooler status"
- Look up service ID for zone1
- Execute:
./bin/multigres getpoolerstatus --cell zone1 --service-id <id>
Observability:
User: "start cluster with otel" or "start
Content truncated.