
Structured Workflow
Enforces disciplined programming practices by guiding AI assistants through structured workflows for refactoring, feature development, test writing, and test-driven development. Each workflow includes mandatory audit phases, phase-specific validation, file operation safety rules, and session state tracking for systematic code development.
What it does
- Start structured refactoring workflows with safety validation
- Create feature development workflows with integrated testing
- Guide Test-Driven Development with Red-Green-Refactor cycles
- Build custom workflows with configurable phases
- Track session state across development phases
- Get phase-specific guidance for code auditing and analysis
About Structured Workflow
Structured Workflow is a community-built MCP server published by kingdomseed that provides AI assistants with tools and capabilities via the Model Context Protocol. Structured Workflow guides disciplined software engineering through refactoring, feature creation, and test-driven development. It is categorized under developer tools and productivity. This server exposes 20 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install Structured Workflow in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Structured Workflow is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (20)
- Start a structured refactoring workflow to improve existing code without changing functionality
- Start a structured workflow for adding new functionality with integrated testing
- Start a focused workflow for writing or improving test coverage
- Start a Test-Driven Development workflow with Red-Green-Refactor cycles
- Build a custom workflow with full control over phases and configuration. Use specific workflow tools (refactor_workflow, create_feature_workflow, etc.) for optimized presets.
Structured Workflow MCP Server
NOTE: I am not currently working on this or actively maintaining it. I learned a few things about prompting and agents while building this MCP server. It contains valuable ideas that could be used or improved upon as an MCP server, but I'm also looking at ways to incorporate the core ideas into Agents, for example in Claude. The core idea here is that AI should follow specific, pre-determined steps to solve a problem, just as we as humans might do, and there may be other ways to achieve this apart from an MCP server.
An MCP server that enforces disciplined programming practices by requiring AI assistants to audit their work and produce verified outputs at each phase of development.
Why I Built This
TLDR: I got tired of repeating "inventory and audit first" across every AI platform and prompt, so I built an MCP server that automatically enforces this disciplined approach. It forces AI to think systematically and follow structured phases instead of jumping straight into code changes.
So I've built an MCP server that fits into my workflow and thinking process while I'm programming. I made it available via npx and you can download it yourself if you want something local.
In essence, I was doing repeated tasks with AI where I wanted it to complete refactoring work for part of a larger project. I was struggling because it was often missing or glossing over key things: overlooking classes or systems that already exist (a preferences service, for example), creating duplicates, leaving orphaned, unused methods around when correcting mistakes, and, when writing tests, pulling in the wrong imports or assembling them incorrectly. This produced syntax errors, but the AI would jump straight into writing the next test without fixing the first one that was broken.
I sort of stumbled on the idea that the model needed to perform an audit and inventory of the current project (or not even the whole project, just one layer or feature) before moving to any kind of implementation phase, and that it needed a lint-iterate-lint phase. I tried this with rules with limited success, then with prompting with much better success, but I was constantly repeating myself.
So I started noodling on the idea of an MCP server that forces the AI to work through a problem in phases or lanes. That's what this does. There are a number of different workflow styles, and I'm open to any other ideas or improvements.
Feel free to check it out if it helps your use case. It's a work in progress but it has been doing a pretty great job for what I'm using it for now. Happy to share more if you are interested.
Features
- Enforced Workflow Phases - AI must complete specific phases in order (setup, audit, analysis, planning, implementation, testing, etc.)
- Mandatory Output Artifacts - Each phase requires structured documentation or verified outputs before proceeding
- Multiple Workflow Types:
  - Refactor workflows for code improvement
  - Feature development with integrated testing
  - Test-focused workflows for coverage improvement
  - Test-driven development (TDD) cycles
  - Custom workflows for specialized needs
- Output Verification - The server validates that outputs contain meaningful content and proper structure
- Session State Management - Tracks progress and prevents skipping phases
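To make the "prevents skipping phases" idea concrete, here is a minimal TypeScript sketch of how a session might gate phase progression. The phase names come from the workflow file names shown later in this README; the `WorkflowSession` class and its methods are illustrative, not the actual structured-workflow-mcp API.

```typescript
// Hypothetical sketch of session state that enforces phase order.
// Phase names mirror the numbered workflow outputs; the class itself
// is an assumption, not the real server's implementation.
type Phase =
  | "AUDIT_INVENTORY" | "COMPARE_ANALYZE" | "QUESTION_DETERMINE"
  | "WRITE_OR_REFACTOR" | "TEST" | "LINT" | "ITERATE" | "PRESENT";

const PHASE_ORDER: Phase[] = [
  "AUDIT_INVENTORY", "COMPARE_ANALYZE", "QUESTION_DETERMINE",
  "WRITE_OR_REFACTOR", "TEST", "LINT", "ITERATE", "PRESENT",
];

class WorkflowSession {
  private completed = new Set<Phase>();

  // A phase may only start once every earlier phase has a validated output.
  canStart(phase: Phase): boolean {
    const idx = PHASE_ORDER.indexOf(phase);
    return PHASE_ORDER.slice(0, idx).every(p => this.completed.has(p));
  }

  complete(phase: Phase): void {
    if (!this.canStart(phase)) {
      throw new Error(`Cannot complete ${phase}: earlier phases are unfinished`);
    }
    this.completed.add(phase);
  }
}

const session = new WorkflowSession();
console.log(session.canStart("TEST")); // false: the audit hasn't happened yet
session.complete("AUDIT_INVENTORY");
console.log(session.canStart("COMPARE_ANALYZE")); // true
```

The point of this design is that "jump straight to implementation" becomes a hard error rather than a prompt the model can ignore.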
How It Works
Here's how the AI moves through a structured workflow:
```mermaid
graph TD
    A[Start Workflow] --> B[AI Gets Phase Guidance]
    B --> C{Create Phase Output}
    C --> D[Auto-Save with Numbered Naming<br/>00-setup-confirmation-2025-01-07.md]
    D --> E[Phase Validation]
    E --> F{All Phases Done?}
    F -->|No| G[Move to Next Phase]
    G --> B
    F -->|Yes| H[Workflow Complete!]

    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#fff3e0
    style D fill:#e8f5e8
    style E fill:#fff9c4
    style H fill:#e8f5e8
```
What happens at each step:
1. Start Workflow - AI calls a workflow tool (refactor_workflow, create_feature_workflow, etc.)
2. AI Gets Phase Guidance - Server provides specific instructions for the current phase (audit, analyze, implement, etc.)
3. Create Phase Output - AI works through the phase and creates documentation/artifacts
4. Auto-Save - Files are automatically saved with numbered naming in task directories
5. Phase Validation - Server validates that outputs meet requirements before proceeding
6. Next Phase - Process repeats until the workflow is complete
One benefit of this breakdown is that the AI agent receives instruction sets relevant only to the current phase, not the entire workflow. This helps prevent the AI from getting lost in the weeds of the full workflow and keeps it focused on the current phase. An interesting article on this can be read here: LLMs Get Lost In Multi-Turn Conversation
Workflow Output
AI-Generated Documentation
The server suggests numbered workflow files as you progress through phases. The AI assistant handles the actual file creation using its own tools:
```
workflows/
└── your-task-name/
    ├── 01-audit-inventory-2025-01-04.md
    ├── 02-compare-analyze-2025-01-04.json
    ├── 03-question-determine-2025-01-04.md
    ├── 04-write-or-refactor-2025-01-04.md
    ├── 05-test-2025-01-04.json
    ├── 06-lint-2025-01-04.json
    ├── 07-iterate-2025-01-04.md
    └── 08-present-2025-01-04.md
```
Workflow Architecture
File Handling: The server provides suggested paths and formats but does not directly write files. Instead, it instructs the AI assistant to create these files using its own file system access.
Consistent Naming: Files follow a standardized naming convention with phase numbers, names, and timestamps.
Environment Independence: The architecture works across any environment where the AI has appropriate file system permissions.
Graceful Degradation: If the AI is unable to create files, the workflow continues in memory-only mode - your progress isn't interrupted.
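The naming convention above (zero-padded phase number, phase name, ISO date) is simple enough to sketch. The helper below is illustrative, assuming the `NN-phase-name-YYYY-MM-DD.ext` pattern shown in the tree; `suggestFileName` is a hypothetical name, not the server's actual function.

```typescript
// Illustrative helper for the numbered naming convention.
// The real server suggests paths like this; the AI assistant
// creates the files itself using its own file system access.
function suggestFileName(
  phaseNumber: number,
  phaseName: string,
  date: Date,
  ext: string = "md"
): string {
  const num = String(phaseNumber).padStart(2, "0"); // 1 -> "01"
  const iso = date.toISOString().slice(0, 10);      // "YYYY-MM-DD"
  return `${num}-${phaseName}-${iso}.${ext}`;
}

console.log(suggestFileName(1, "audit-inventory", new Date("2025-01-04")));
// 01-audit-inventory-2025-01-04.md
```

Because the scheme is pure string construction, it works in memory-only mode too: the suggested name can simply be carried in session state until a file system is available.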
Installation
Quick Start (Recommended) - Zero Installation
Add to your AI assistant config - Uses npx automatically:
💡 Note: I recommend using `@latest` to ensure you always get the newest features and fixes. Without `@latest`, npx may cache older versions.
VS Code / Cursor / Windsurf - Add to your MCP settings:
```json
{
  "mcp": {
    "servers": {
      "structured-workflow": {
        "command": "npx",
        "args": ["structured-workflow-mcp@latest"],
        "env": {}
      }
    }
  }
}
```
Claude Desktop - Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "structured-workflow": {
      "command": "npx",
      "args": ["structured-workflow-mcp@latest"],
      "env": {}
    }
  }
}
```
Global Installation (Optional)
You can install globally on your machine using NPM:
```shell
npm install -g structured-workflow-mcp
```
Then use in your AI assistant config:
```json
{
  "mcp": {
    "servers": {
      "structured-workflow": {
        "command": "structured-workflow-mcp",
        "args": [],
        "env": {}
      }
    }
  }
}
```
With custom output directory:
```json
{
  "mcp": {
    "servers": {
      "structured-workflow": {
        "command": "structured-workflow-mcp",
        "args": ["--output-dir", "/home/user/workflow-outputs"],
        "env": {}
      }
    }
  }
}
```
Auto-Install via Smithery
Smithery provides a number of ways to install this server directly into your apps, including this command for Claude Desktop:
```shell
npx -y @smithery/cli install structured-workflow-mcp --client claude
```
Manual Installation
For developers, you can clone the repository and build it locally:
```shell
git clone https://github.com/kingdomseed/structured-workflow-mcp
cd structured-workflow-mcp
npm install && npm run build
```
Usage
Once configured in your AI assistant, start with these workflow tools:
- `mcp__structured-workflow__build_custom_workflow` - Create custom workflows
- `mcp__structured-workflow__refactor_workflow` - Structured refactoring
- `mcp__structured-workflow__create_feature_workflow` - Feature development
- `mcp__structured-workflow__test_workflow` - Test coverage workflows
Example Output Artifacts
The server enforces that AI produces structured outputs like these:
AUDIT_INVENTORY Phase Output:
```json
{
  "filesAnalyzed": ["lib/auth/user_service.dart", "lib/auth/auth_middleware.dart"],
  "dependencies": {
    "providers": ["userProvider", "authStateProvider"],
    "models": ["User", "AuthToken"]
  },
  "issues": [
    "Single Responsibility Principle violation - handles too many concerns",
    "File approaching 366 lines - recommended to keep widgets smaller"
  ],
  "changesList": [
    {
      "action": "CREATE",
      "file": "lib/auth/components/auth_form.dart",
      "description": "Extract authentication form logic",
      "justification": "Component focused on form validation only"
    }
  ]
}
```
COMPARE_ANALYZE Phase Output:
```json
{
  "approaches": [
    {
      "name": "Incremental Component Extraction",
      "complexity": "Medium",
      "risk": "Low",
      "timeEstimate": "30-45 minutes"
    }
  ],
  "recommendation": "Incremental Component Extraction",
  "justification": "Provides best balance of benefits vs. risk",
  "selectedImplementationOrder": [
    "1. Extract form component (lowest risk)",
    "2. Create validation service",
    "3. Refactor main view"
  ]
}
```
Each phase requires documented analysis and planning before the AI can proceed to implementation.
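A sketch of what "validates that outputs contain meaningful content" could mean in practice: check that the artifact has the structure of the AUDIT_INVENTORY example above before letting the workflow advance. The key names come from that example; the validator function itself is an assumption about how such a check might look, not the server's real code.

```typescript
// Hypothetical validator for an AUDIT_INVENTORY artifact.
// Field names follow the example output; the function is illustrative.
interface AuditOutput {
  filesAnalyzed: string[];
  issues: string[];
  changesList: { action: string; file: string; description: string }[];
}

function isMeaningfulAudit(output: Partial<AuditOutput>): boolean {
  // "Meaningful" here means: at least one file was actually analyzed,
  // issues is a list (possibly empty), and at least one change is planned.
  return Array.isArray(output.filesAnalyzed) && output.filesAnalyzed.length > 0
    && Array.isArray(output.issues)
    && Array.isArray(output.changesList) && output.changesList.length > 0;
}

const draft: Partial<AuditOutput> = {
  filesAnalyzed: ["lib/auth/user_service.dart"],
  issues: ["Single Responsibility Principle violation"],
  changesList: [{ action: "CREATE", file: "lib/auth/components/auth_form.dart", description: "Extract form logic" }],
};
console.log(isMeaningfulAudit(draft)); // true
console.log(isMeaningfulAudit({}));    // false: nothing was audited
```

Even a shallow structural check like this blocks the failure mode described earlier, where the model skips the audit and goes straight to edits.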