unified-execute-with-file
Universal execution engine consuming the unified task JSON format. Serial task execution with convergence verification and progress tracking via execution.md + execution-events.md.
Install
mkdir -p .claude/skills/unified-execute-with-file && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2380" && unzip -o skill.zip -d .claude/skills/unified-execute-with-file && rm skill.zip
Installs to .claude/skills/unified-execute-with-file
About this skill
Unified-Execute-With-File Workflow
Quick Start
Universal execution engine that consumes a .task/*.json directory and executes tasks serially, with convergence verification and progress tracking.
# Execute from lite-plan output
/codex:unified-execute-with-file PLAN=".workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/"
# Execute from workflow session output
/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/" --auto-commit
# Execute a single task JSON file
/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/IMPL-001.json" --dry-run
# Auto-detect from .workflow/ directories
/codex:unified-execute-with-file
Core workflow: Scan .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress
Key features:
- Directory-based: Consumes .task/ directory containing individual task JSON files
- Convergence-driven: Verifies each task's convergence criteria after execution
- Serial execution: Process tasks in topological order with dependency tracking
- Dual progress tracking: execution.md (overview) + execution-events.md (event stream)
- Auto-commit: Optional conventional commits per task
- Dry-run mode: Simulate execution without changes
- Flexible input: Accepts .task/ directory path or a single .json file path
Input format: Each task is a standalone JSON file in .task/ directory (e.g., IMPL-001.json). Use plan-converter to convert other formats to .task/*.json first.
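For orientation, a minimal task file might look like the sketch below. The values are illustrative; the required fields (id, title, description, depends_on, convergence) follow the validation rules in Step 1.2.
{
  "id": "IMPL-001",
  "title": "Add login endpoint",
  "description": "Implement POST /auth/login with JWT issuance",
  "depends_on": [],
  "context": {
    "requirements": ["Validate credentials against the user store"],
    "acceptance": ["Returns 200 with a signed JWT for valid credentials"],
    "focus_paths": ["src/auth/"]
  },
  "convergence": {
    "criteria": ["POST /auth/login returns a signed JWT for valid credentials"],
    "verification": "npm test -- auth.login",
    "definition_of_done": "All auth login tests pass"
  }
}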
Overview
┌─────────────────────────────────────────────────────────────┐
│ UNIFIED EXECUTE WORKFLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Load & Validate │
│ ├─ Scan .task/*.json (one task per file) │
│ ├─ Validate schema (id, title, depends_on, convergence) │
│ ├─ Detect cycles, build topological order │
│ └─ Initialize execution.md + execution-events.md │
│ │
│ Phase 2: Pre-Execution Analysis │
│ ├─ Check file conflicts (multiple tasks → same file) │
│ ├─ Verify file existence │
│ ├─ Generate feasibility report │
│ └─ User confirmation (unless dry-run) │
│ │
│ Phase 3: Serial Execution + Convergence Verification │
│ For each task in topological order: │
│ ├─ Check dependencies satisfied │
│ ├─ Record START event │
│ ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash) │
│ ├─ Verify convergence.criteria[] │
│ ├─ Run convergence.verification command │
│ ├─ Record COMPLETE/FAIL event with verification results │
│ ├─ Update _execution state in task JSON file │
│ └─ Auto-commit if enabled │
│ │
│ Phase 4: Completion │
│ ├─ Finalize execution.md with summary statistics │
│ ├─ Finalize execution-events.md with session footer │
│ ├─ Write back .task/*.json with _execution states │
│ └─ Offer follow-up actions │
│ │
└─────────────────────────────────────────────────────────────┘
Output Structure
${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md # Plan overview + task table + summary
└── execution-events.md # ⭐ Unified event log (single source of truth)
Additionally, each source .task/*.json file is updated in-place with _execution states.
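The exact shape of the _execution block is not shown in this listing; one plausible sketch of the per-task write-back (all field names hypothetical):
"_execution": {
  "status": "completed",
  "started_at": "2025-01-21T10:00:00+08:00",
  "completed_at": "2025-01-21T10:05:12+08:00",
  "verification_passed": true
}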
Implementation Details
Session Initialization
Step 0: Initialize Session
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
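// Note: shifting the epoch by +8h yields UTC+8 wall-clock time; toISOString() still appends 'Z', so treat the suffix as cosmetic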
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
// Parse arguments
const autoCommit = $ARGUMENTS.includes('--auto-commit')
const dryRun = $ARGUMENTS.includes('--dry-run')
const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/)
let planPath = planMatch ? planMatch[1] : null
// Auto-detect if no PLAN specified
if (!planPath) {
// Search in order (most recent first):
// .workflow/active/*/.task/
// .workflow/.lite-plan/*/.task/
// .workflow/.req-plan/*/.task/
// .workflow/.planning/*/.task/
// Use most recently modified directory containing *.json files
}
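// One possible realization of the search above, in the same pseudocode style
// (ls -dt sorts candidates newest-first; error wording is illustrative):
if (!planPath) {
  const candidates = Bash(
    `ls -dt ${projectRoot}/.workflow/active/*/.task ` +
    `${projectRoot}/.workflow/.lite-plan/*/.task ` +
    `${projectRoot}/.workflow/.req-plan/*/.task ` +
    `${projectRoot}/.workflow/.planning/*/.task 2>/dev/null`
  ).trim().split('\n').filter(Boolean)
  // Keep the most recently modified directory that actually contains task JSON files
  planPath = candidates.find(dir => Glob('*.json', dir).length > 0)
  if (!planPath) throw new Error('No .task/ directory with *.json files found under .workflow/')
}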
// Resolve path
planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}`
// Generate session ID
const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const random = Math.random().toString(36).substring(2, 9)
const sessionId = `EXEC-${slug}-${dateStr}-${random}`
const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
Phase 1: Load & Validate
Objective: Scan .task/ directory, parse individual task JSON files, validate schema and dependencies, build execution order.
Step 1.1: Scan .task/ Directory and Parse Task Files
// Determine if planPath is a directory or single file
const isDirectory = planPath.endsWith('/') || Bash(`test -d "${planPath}" && echo dir || echo file`).trim() === 'dir'
let taskFiles, tasks
if (isDirectory) {
// Directory mode: scan for all *.json files
taskFiles = Glob('*.json', planPath)
if (taskFiles.length === 0) throw new Error(`No .json files found in ${planPath}`)
tasks = taskFiles.map(filePath => {
try {
const content = Read(filePath)
const task = JSON.parse(content)
task._source_file = filePath // Track source file for write-back
return task
} catch (e) {
throw new Error(`${path.basename(filePath)}: Invalid JSON - ${e.message}`)
}
})
} else {
// Single file mode: parse one task JSON
try {
const content = Read(planPath)
const task = JSON.parse(content)
task._source_file = planPath
tasks = [task]
} catch (e) {
throw new Error(`${path.basename(planPath)}: Invalid JSON - ${e.message}`)
}
}
if (tasks.length === 0) throw new Error('No tasks found')
Step 1.2: Validate Schema
Validate against unified task schema: ~/.ccw/workflows/cli-templates/schemas/task-schema.json
const errors = []
tasks.forEach((task, i) => {
const src = task._source_file ? path.basename(task._source_file) : `Task ${i + 1}`
// Required fields (per task-schema.json)
if (!task.id) errors.push(`${src}: missing 'id'`)
if (!task.title) errors.push(`${src}: missing 'title'`)
if (!task.description) errors.push(`${src}: missing 'description'`)
if (!Array.isArray(task.depends_on)) errors.push(`${task.id || src}: missing 'depends_on' array`)
// Context block (optional but validated if present)
if (task.context) {
if (task.context.requirements && !Array.isArray(task.context.requirements))
errors.push(`${task.id}: context.requirements must be array`)
if (task.context.acceptance && !Array.isArray(task.context.acceptance))
errors.push(`${task.id}: context.acceptance must be array`)
if (task.context.focus_paths && !Array.isArray(task.context.focus_paths))
errors.push(`${task.id}: context.focus_paths must be array`)
}
// Convergence (required for execution verification)
if (!task.convergence) {
errors.push(`${task.id || src}: missing 'convergence'`)
} else {
if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`)
if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`)
if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`)
}
// Flow control (optional but validated if present)
if (task.flow_control) {
if (task.flow_control.target_files && !Array.isArray(task.flow_control.target_files))
errors.push(`${task.id}: flow_control.target_files must be array`)
}
// New unified schema fields (backward compatible addition)
if (task.focus_paths && !Array.isArray(task.focus_paths))
errors.push(`${task.id}: focus_paths must be array`)
if (task.implementation && !Array.isArray(task.implementation))
errors.push(`${task.id}: implementation must be array`)
if (task.files && !Array.isArray(task.files))
errors.push(`${task.id}: files must be array`)
})
if (errors.length) {
  // Report all validation errors, then stop before any execution
  throw new Error(`Task validation failed:\n${errors.join('\n')}`)
}
Step 1.3: Build Execution Order
// 1. Validate dependency references
const taskIds = new Set(tasks.map(t => t.id))
tasks.forEach(task => {
task.depends_on.forEach(dep => {
if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`)
})
})
// 2. Detect cycles (DFS)
function detectCycles(tasks) {
const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
const visited = new Set(), inStack = new Set(), cycles = []
function dfs(node, path) {
if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
if (visited.has(node)) return
visited.add(node); inStack.add(node)
;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
inStack.delete(node)
}
tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
return cycles
}
const cycles = detectCycles(tasks)
if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`)
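The topological ordering itself falls in the truncated portion below; as a minimal sketch, Kahn's algorithm over the validated dependency graph (assumes dependency and cycle checks above have passed):
function topologicalOrder(tasks) {
  // indegree = number of unsatisfied dependencies; dependents = reverse edges
  const indegree = new Map(tasks.map(t => [t.id, t.depends_on.length]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  tasks.forEach(t => t.depends_on.forEach(dep => dependents.get(dep).push(t.id)))
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    dependents.get(id).forEach(next => {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    })
  }
  return order // shorter than tasks.length would indicate a cycle (already caught above)
}
const byId = new Map(tasks.map(t => [t.id, t]))
const orderedTasks = topologicalOrder(tasks).map(id => byId.get(id))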
---
*Content truncated.*