schema-optimization-orchestrator
Multi-phase schema optimization workflow orchestrator. Creates session directories, spawns phase agents sequentially, validates outputs, aggregates results. Trigger: "run schema optimization", "optimize schema workflow", "execute schema phases"
Install
mkdir -p .claude/skills/schema-optimization-orchestrator && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7602" && unzip -o skill.zip -d .claude/skills/schema-optimization-orchestrator && rm skill.zip
Installs to .claude/skills/schema-optimization-orchestrator
About this skill
Schema Optimization Orchestrator
Runs a multi-phase schema optimization workflow with strict validation and evidence collection.
Workflow Pattern
This is a test harness pattern:
- Creates isolated session directory per run
- Spawns 5 phase agents sequentially
- Each phase reads reference docs, runs scripts, writes reports
- Validates JSON outputs and file artifacts
- Aggregates final summary
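The sequential harness described above can be sketched as a simple loop: run each phase, parse its JSON, and stop at the first non-complete result. This is a minimal illustration, not the skill's actual implementation — `spawn_phase_agent` stands in for the Task tool call, and the `PHASES` table simply mirrors the reference docs listed later in this document.

```python
import json

# Phase names paired with their reference docs (from the steps below).
PHASES = [
    ("phase1", "references/01-phase-1.md"),
    ("phase2", "references/02-phase-2.md"),
    ("phase3", "references/03-phase-3.md"),
    ("phase4", "references/04-phase-4-verify-with-script.md"),
    ("phase5", "references/05-phase-5.md"),
]

def run_workflow(spawn_phase_agent, session_dir):
    """Run each phase in order; stop at the first failure."""
    completed = []
    for name, reference in PHASES:
        # Each agent is expected to return a JSON string.
        result = json.loads(spawn_phase_agent(name, reference, session_dir))
        if result.get("status") != "complete":
            return {"status": "error", "failed_phase": name,
                    "completed_phases": completed}
        completed.append(name)
    return {"status": "complete", "completed_phases": completed}
```

The key design point is that later phases consume earlier phases' reports, so the loop must be strictly sequential — no parallel fan-out.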
Inputs (JSON)
{
"skill_dir": "/absolute/path/to/.claude/skills/schema-optimization",
"input_folder": "/path/to/bigquery/export",
"extraction_type": "bigquery_json",
"session_dir_base": ".claude/skills/schema-optimization/reports/runs"
}
Required:
- skill_dir: Absolute path to this skill directory
- input_folder: Path to data to analyze
- extraction_type: Type of data extraction (e.g., "bigquery_json")
Optional:
- session_dir_base: Where to create run directories (default: reports/runs)
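A hedged sketch of checking these inputs before starting a run. The field names match the spec above; the error wording and the assumption that the default `session_dir_base` resolves under `skill_dir` (as in the example JSON) are illustrative.

```python
import os

REQUIRED = ("skill_dir", "input_folder", "extraction_type")

def check_inputs(params: dict) -> dict:
    """Return params with defaults applied, or raise on a bad input."""
    missing = [k for k in REQUIRED if not params.get(k)]
    if missing:
        raise ValueError(f"missing required inputs: {missing}")
    # skill_dir must be absolute per the spec above.
    if not os.path.isabs(params["skill_dir"]):
        raise ValueError("skill_dir must be an absolute path")
    # Assumed default: reports/runs under the skill directory.
    params.setdefault("session_dir_base",
                      os.path.join(params["skill_dir"], "reports", "runs"))
    return params
```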
Orchestration Steps
1. Create Session Directory
TIMESTAMP=$(date +%Y-%m-%d_%H%M%S)
SESSION_DIR="${session_dir_base}/${TIMESTAMP}"
mkdir -p "${SESSION_DIR}"
2. Run Phase 1: Initial Schema Analysis
Spawn Phase 1 agent with:
{
"skill_dir": "<skill_dir>",
"session_dir": "<SESSION_DIR>",
"reference_path": "<skill_dir>/references/01-phase-1.md",
"input_folder": "<input_folder>",
"extraction_type": "<extraction_type>"
}
Expected output:
{
"status": "complete",
"report_path": "<SESSION_DIR>/01-initial-schema-analysis.md",
"schema_summary": {
"total_tables": 0,
"total_fields": 0,
"key_findings": []
}
}
Validation:
- JSON parse succeeds
- status is "complete"
- report_path file exists on disk
- schema_summary has required keys
3. Run Phase 2: Field Utilization Analysis
Spawn Phase 2 agent with:
{
"skill_dir": "<skill_dir>",
"session_dir": "<SESSION_DIR>",
"reference_path": "<skill_dir>/references/02-phase-2.md",
"phase1_report_path": "<phase1_report_path>",
"input_folder": "<input_folder>"
}
Expected output:
{
"status": "complete",
"report_path": "<SESSION_DIR>/02-field-utilization-analysis.md",
"utilization_summary": {
"unused_fields": [],
"low_utilization_fields": [],
"recommendations": []
}
}
4. Run Phase 3: Impact Assessment
Spawn Phase 3 agent with:
{
"skill_dir": "<skill_dir>",
"session_dir": "<SESSION_DIR>",
"reference_path": "<skill_dir>/references/03-phase-3.md",
"phase1_report_path": "<phase1_report_path>",
"phase2_report_path": "<phase2_report_path>",
"input_folder": "<input_folder>"
}
Expected output:
{
"status": "complete",
"report_path": "<SESSION_DIR>/03-impact-assessment.md",
"impact_summary": {
"high_risk_changes": [],
"medium_risk_changes": [],
"low_risk_changes": [],
"estimated_savings": {}
}
}
5. Run Phase 4: Verification with Script
Spawn Phase 4 agent with:
{
"skill_dir": "<skill_dir>",
"session_dir": "<SESSION_DIR>",
"reference_path": "<skill_dir>/references/04-phase-4-verify-with-script.md",
"phase2_report_path": "<phase2_report_path>",
"phase3_report_path": "<phase3_report_path>",
"input_folder": "<input_folder>",
"script_path": "<skill_dir>/scripts/analyze_field_utilization.sh",
"output_folder_path": "<input_folder>"
}
Expected output:
{
"status": "complete",
"report_path": "<SESSION_DIR>/04-field-utilization-verification.md",
"verification_summary": {
"files_analyzed": 0,
"conclusions_confirmed": [],
"conclusions_revised": [],
"unexpected_findings": [],
"revised_action_items": []
}
}
6. Run Phase 5: Final Recommendations
Spawn Phase 5 agent with:
{
"skill_dir": "<skill_dir>",
"session_dir": "<SESSION_DIR>",
"reference_path": "<skill_dir>/references/05-phase-5.md",
"phase1_report_path": "<phase1_report_path>",
"phase2_report_path": "<phase2_report_path>",
"phase3_report_path": "<phase3_report_path>",
"phase4_report_path": "<phase4_report_path>"
}
Expected output:
{
"status": "complete",
"report_path": "<SESSION_DIR>/05-final-recommendations.md",
"recommendations_summary": {
"priority_actions": [],
"implementation_plan": [],
"success_metrics": []
}
}
Output (JSON Only)
{
"status": "complete",
"session_dir": "<SESSION_DIR>",
"timestamp": "YYYY-MM-DD_HHMMSS",
"phase_reports": {
"phase1": "<SESSION_DIR>/01-initial-schema-analysis.md",
"phase2": "<SESSION_DIR>/02-field-utilization-analysis.md",
"phase3": "<SESSION_DIR>/03-impact-assessment.md",
"phase4": "<SESSION_DIR>/04-field-utilization-verification.md",
"phase5": "<SESSION_DIR>/05-final-recommendations.md"
},
"final_summary": {
"total_tables": 0,
"total_fields": 0,
"unused_fields": 0,
"optimization_opportunities": 0,
"estimated_savings_pct": 0,
"verification_status": "confirmed"
}
}
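Assembling the final JSON is mostly a matter of lifting fields out of the per-phase summaries. The mapping below is an assumption based on the shapes shown above: `estimated_savings_pct` is left as a placeholder (derived from Phase 3 in practice), and the `"revised"` verification status is a hypothetical fallback not shown in the spec.

```python
def build_final_output(session_dir, timestamp, phase_reports,
                       schema_summary, utilization_summary,
                       verification_summary):
    """Aggregate per-phase summaries into the final output shape."""
    return {
        "status": "complete",
        "session_dir": session_dir,
        "timestamp": timestamp,
        "phase_reports": phase_reports,
        "final_summary": {
            "total_tables": schema_summary["total_tables"],
            "total_fields": schema_summary["total_fields"],
            "unused_fields": len(utilization_summary["unused_fields"]),
            "optimization_opportunities": len(utilization_summary["recommendations"]),
            "estimated_savings_pct": 0,  # placeholder; computed from Phase 3 data
            "verification_status": ("confirmed"
                                    if not verification_summary["conclusions_revised"]
                                    else "revised"),
        },
    }
```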
Error Handling
If any phase fails:
- Stop execution
- Return error status with phase details
- Preserve partial reports for debugging
{
"status": "error",
"failed_phase": 3,
"error_message": "Phase 3 agent failed validation",
"session_dir": "<SESSION_DIR>",
"completed_phases": ["phase1", "phase2"]
}
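The stop-on-first-failure behavior above can be sketched like this. It is illustrative only: each `phase` callable stands in for spawning one agent, and partial reports are preserved simply because the orchestrator never deletes the session directory on error.

```python
import json

def run_with_error_handling(phases, session_dir):
    """Run phases in order; on failure, return the error payload above."""
    completed = []
    for n, phase in enumerate(phases, start=1):
        try:
            result = json.loads(phase())  # JSONDecodeError is a ValueError
            if result.get("status") != "complete":
                raise RuntimeError(f"Phase {n} agent failed validation")
        except (RuntimeError, ValueError) as exc:
            # Stop immediately; session_dir keeps any partial reports.
            return {"status": "error", "failed_phase": n,
                    "error_message": str(exc), "session_dir": session_dir,
                    "completed_phases": completed}
        completed.append(f"phase{n}")
    return {"status": "complete", "session_dir": session_dir,
            "completed_phases": completed}
```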
Validation Rules
After each phase:
- Parse returned JSON (fail if invalid)
- Check status is "complete" (fail if not)
- Verify report_path exists on disk (fail if not)
- Validate phase-specific summary keys (fail if missing)
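One way to express these four rules, parameterized per phase, is sketched below. The required-key table is transcribed from the "Expected output" shapes earlier in this document; the function name and error messages are illustrative.

```python
import json
import os

# phase -> (summary field name, required keys), from the phase specs above.
SUMMARY_KEYS = {
    1: ("schema_summary", {"total_tables", "total_fields", "key_findings"}),
    2: ("utilization_summary",
        {"unused_fields", "low_utilization_fields", "recommendations"}),
    3: ("impact_summary", {"high_risk_changes", "medium_risk_changes",
                           "low_risk_changes", "estimated_savings"}),
    4: ("verification_summary",
        {"files_analyzed", "conclusions_confirmed", "conclusions_revised",
         "unexpected_findings", "revised_action_items"}),
    5: ("recommendations_summary",
        {"priority_actions", "implementation_plan", "success_metrics"}),
}

def validate_phase(phase: int, raw: str) -> dict:
    result = json.loads(raw)  # rule 1: fail if JSON is invalid
    if result.get("status") != "complete":  # rule 2
        raise RuntimeError(f"phase {phase}: status is not 'complete'")
    if not os.path.isfile(result.get("report_path", "")):  # rule 3
        raise RuntimeError(f"phase {phase}: report_path does not exist")
    field, required = SUMMARY_KEYS[phase]  # rule 4
    missing = required - set(result.get(field, {}))
    if missing:
        raise RuntimeError(f"phase {phase}: missing summary keys {sorted(missing)}")
    return result
```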
Implementation Notes
- Use Task tool to spawn phase agents
- Pass exact file paths (no wildcards)
- Session directory must be absolute path
- All reports must be written before returning
- No terminal output except final JSON
Example Usage
User: "Run schema optimization on my BigQuery export"
Claude: [Creates session directory]
Claude: [Spawns Phase 1 agent]
Claude: [Validates Phase 1 output]
Claude: [Spawns Phase 2 agent with Phase 1 report]
Claude: [... continues through Phase 5]
Claude: [Returns final JSON summary]
Files Created Per Run
reports/runs/2025-01-15_143022/
├── 01-initial-schema-analysis.md
├── 02-field-utilization-analysis.md
├── 03-impact-assessment.md
├── 04-field-utilization-verification.md
└── 05-final-recommendations.md
Each file is evidence of work completed.
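Since the reports are the evidence, a final sanity check can confirm all five exist. This small helper assumes the default file layout shown above.

```python
import os

EXPECTED_REPORTS = [
    "01-initial-schema-analysis.md",
    "02-field-utilization-analysis.md",
    "03-impact-assessment.md",
    "04-field-utilization-verification.md",
    "05-final-recommendations.md",
]

def missing_reports(session_dir: str) -> list:
    """Return the names of any expected reports not present on disk."""
    return [name for name in EXPECTED_REPORTS
            if not os.path.isfile(os.path.join(session_dir, name))]
```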