autonomous-feature-planner


Autonomously plans and specifies system features starting from the user’s most recent command, continuing without further user input until explicitly stopped. Use when the user explicitly invokes autonomous planning to extend a system from a prior command. Trigger keywords: use autonomous-feature-planner, start autonomous planning, autonomous expansion, continuous feature planning.

Install

mkdir -p .claude/skills/autonomous-feature-planner && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4338" && unzip -o skill.zip -d .claude/skills/autonomous-feature-planner && rm skill.zip

Installs to .claude/skills/autonomous-feature-planner

About this skill

Activation Criteria

Activate only when the user explicitly names this skill or clearly instructs continuous, self-directed planning.

Activation requires:

  • Exactly one immediately preceding user command
  • That command must describe or imply a system, product, or process

Do not activate if:

  • The previous command is conversational, evaluative, or meta
  • The previous command is itself a stop instruction
  • The user requests execution, deployment, or real-world action

Foundation Handling

  • The last user command before activation is the foundation.
  • The foundation is immutable and may not be reinterpreted, summarized, or expanded.
  • The foundation establishes the system domain and intent baseline.

If the foundation cannot define a system domain, fail immediately.

Execution Model

This skill operates as a bounded-output autonomous planner. Autonomy applies to sequencing, not to scope invention.

Initialization

  1. Capture the foundation command verbatim.
  2. Derive one explicit system domain statement.
  3. Declare all assumptions required to derive the domain.
  4. Lock the domain and assumptions for the entire session.

If deriving the domain requires more than the minimal set of assumptions, fail.
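The initialization steps above amount to building a small, locked session record. A minimal sketch, assuming a hypothetical numeric bound on assumptions (the spec says "minimal necessity" but fixes no number) and illustrative field names:

```python
from dataclasses import dataclass

MAX_ASSUMPTIONS = 3  # hypothetical threshold standing in for "minimal necessity"

@dataclass(frozen=True)  # frozen: domain and assumptions are locked for the session
class Session:
    foundation: str        # foundation command, captured verbatim
    domain: str            # one explicit system domain statement
    assumptions: tuple     # assumptions declared at initialization, then immutable

def initialize(foundation: str, domain: str, assumptions: list) -> Session:
    """Lock the domain and assumptions for the entire session, or fail."""
    if not domain.strip():
        raise RuntimeError("failure: foundation cannot define a system domain")
    if len(assumptions) > MAX_ASSUMPTIONS:
        raise RuntimeError("failure: assumptions exceed minimal necessity")
    return Session(foundation, domain, tuple(assumptions))
```

The frozen dataclass mirrors the rule that nothing declared at initialization may change later.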

Planning Loop

Each iteration must perform exactly the following steps:

  1. Select exactly one next feature that:
    • Directly fits within the locked system domain
    • Is functionally distinct from all prior features
  2. Define the feature scope using explicit inclusions and exclusions.
  3. Produce a linear, ordered implementation plan with no branches.
  4. Specify:
    • Required inputs
    • Produced outputs
    • Dependencies on prior features
  5. State one verifiable success condition.
  6. Terminate the iteration.

Only one feature is permitted per iteration. No iteration may reference future, unplanned features.
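One way to picture the record each iteration must produce, and the checks the rules imply, is a per-feature schema with validation. The field names and the validator are illustrative assumptions; the spec prescribes content, not a data format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # iterations are append-only and immutable once emitted
class Feature:
    name: str
    inclusions: list        # explicit scope inclusions
    exclusions: list        # explicit scope exclusions
    plan: list              # linear, ordered implementation steps (no branches)
    inputs: list            # required inputs
    outputs: list           # produced outputs
    depends_on: list        # names of prior features only
    success_condition: str  # exactly one verifiable success condition

def validate_iteration(feature: Feature, prior: list) -> None:
    """Enforce the per-iteration rules; halt with failure on violation."""
    prior_names = {f.name for f in prior}
    if feature.name in prior_names:
        raise RuntimeError("failure: feature not distinct from prior features")
    if not set(feature.depends_on) <= prior_names:
        raise RuntimeError("failure: dependency on an unplanned feature")
    if not feature.success_condition.strip():
        raise RuntimeError("failure: missing verifiable success condition")
```

The dependency check enforces the rule that no iteration may reference future, unplanned features: only names already in the append-only history are legal dependencies.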

Output Rules

  • Each iteration must be labeled sequentially.
  • Output must be strictly structured and utilitarian.
  • No summaries, retrospectives, vision statements, or meta-commentary.
  • No repetition, restatement, or revision of earlier iterations.

Ambiguity Handling

  • All ambiguity must be resolved during initialization.
  • Resolution must favor the narrowest viable interpretation.
  • No new assumptions may be introduced after initialization.

If ambiguity cannot be resolved without speculation, fail immediately.

Consistency Enforcement

  • All output is append-only.
  • Previously planned features are immutable.
  • If a contradiction is detected, halt immediately with failure.

Scope and Runaway Prevention

  • Features must not generate sub-features.
  • Meta-features about planning, autonomy, or the skill itself are forbidden.
  • Each iteration must be finite and self-contained.
  • The skill must not escalate into abstraction layers or strategy reformulation.

Constraints & Non-Goals

  • No execution, simulation, or external state changes.
  • No file creation or modification.
  • No user interaction during operation.
  • No external tools, memory, or hidden state.
  • No goal invention outside the locked domain.

Failure Behavior

Immediately halt and output a single failure message if:

  • The foundation cannot define a coherent system domain
  • Minimal assumptions are insufficient
  • Internal consistency cannot be preserved
  • Planning would require execution or unverifiable facts

No additional output is permitted after failure.

Stop Condition

Immediately stop all planning when the user issues any command containing:

  • "stop autonomous-feature-planner"
  • "stop planning"
  • "disable autonomous-feature-planner"

After a stop command:

  • Output exactly one character: "."
  • Output no other text, whitespace, or newlines.
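The stop check reduces to a substring test over the three phrases. A minimal sketch; lowercasing the input is an assumption, since the spec does not state whether matching is case-sensitive:

```python
STOP_PHRASES = (
    "stop autonomous-feature-planner",
    "stop planning",
    "disable autonomous-feature-planner",
)

def is_stop_command(command: str) -> bool:
    """True if the command contains any stop phrase.
    Case-insensitive matching is an assumption, not specified."""
    text = command.lower()
    return any(phrase in text for phrase in STOP_PHRASES)

def stop_output() -> str:
    return "."  # exactly one character, no trailing newline
```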
