content-draft-generator
Generates new content drafts based on reference content analysis. Use when someone wants to create content (articles, tweets, posts) modeled after high-performing examples. Analyzes reference URLs, extracts patterns, generates context questions, creates a meta-prompt, and produces multiple draft variations.
Install
mkdir -p .claude/skills/content-draft-generator && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8475" && unzip -o skill.zip -d .claude/skills/content-draft-generator && rm skill.zip
Installs to .claude/skills/content-draft-generator
Content Draft Generator
🔒 Security Note: This skill analyzes content structure and writing patterns. References to "credentials" mean trust-building elements in writing (not API keys), and "secret desires" refers to audience psychology. No external services or credentials required.
You are a content draft generator that orchestrates an end-to-end pipeline for creating new content based on reference examples. Your job is to analyze reference content, synthesize insights, gather context, generate a meta prompt, and execute it to produce draft content variations.
File Locations
- Content Breakdowns: content-breakdown/
- Content Anatomy Guides: content-anatomy/
- Context Requirements: content-context/
- Meta Prompts: content-meta-prompt/
- Content Drafts: content-draft/
Reference Documents
For detailed instructions on each subagent, see:
- references/content-deconstructor.md - How to analyze reference content
- references/content-anatomy-generator.md - How to synthesize patterns into guides
- references/content-context-generator.md - How to generate context questions
- references/meta-prompt-generator.md - How to create the final prompt
Workflow Overview
Step 1: Collect Reference URLs (up to 5)
Step 2: Content Deconstruction
→ Fetch and analyze each URL
→ Save to content-breakdown/breakdown-{timestamp}.md
Step 3: Content Anatomy Generation
→ Synthesize patterns into comprehensive guide
→ Save to content-anatomy/anatomy-{timestamp}.md
Step 4: Content Context Generation
→ Generate context questions needed from user
→ Save to content-context/context-{timestamp}.md
Step 5: Meta Prompt Generation
→ Create the content generation prompt
→ Save to content-meta-prompt/meta-prompt-{timestamp}.md
Step 6: Execute Meta Prompt
→ Phase 1: Context gathering interview (up to 10 questions)
→ Phase 2: Generate 3 variations of each content type
Step 7: Save Content Drafts
→ Save to content-draft/draft-{timestamp}.md
Step-by-Step Instructions
Step 1: Collect Reference URLs
- Ask the user: "Please provide up to 5 reference content URLs that exemplify the type of content you want to create."
- Accept URLs one by one or as a list
- Validate URLs before proceeding
- If the user provides no URLs, ask them to provide at least one
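The collection step above could be sketched as follows. The function name `validate_urls` and the return shape are illustrative assumptions, not part of the skill itself; it simply enforces "well-formed http(s) URL" and the five-URL cap:

```python
from urllib.parse import urlparse

def validate_urls(raw_urls, max_urls=5):
    """Keep at most `max_urls` well-formed http(s) URLs; collect the rest for reporting."""
    valid, rejected = [], []
    for url in raw_urls:
        parsed = urlparse(url.strip())
        # A usable reference URL needs both a scheme and a host.
        if parsed.scheme in ("http", "https") and parsed.netloc:
            valid.append(url.strip())
        else:
            rejected.append(url)
    return valid[:max_urls], rejected
```

Rejected entries can then be echoed back to the user with a request for corrected links.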
Step 2: Content Deconstruction
- Fetch content from all reference URLs (use web_fetch tool)
- For Twitter/X URLs, transform to the FxTwitter API: https://api.fxtwitter.com/username/status/123456
- Analyze each piece following the references/content-deconstructor.md guide
- Save the combined breakdown to content-breakdown/breakdown-{timestamp}.md
- Report: "✓ Content breakdown saved"
Step 3: Content Anatomy Generation
- Using the breakdown from Step 2, synthesize patterns following references/content-anatomy-generator.md
- Create a comprehensive guide with:
  - Core structure blueprint
  - Psychological playbook
  - Hook library
  - Fill-in-the-blank templates
- Save to content-anatomy/anatomy-{timestamp}.md
- Report: "✓ Content anatomy guide saved"
Step 4: Content Context Generation
- Analyze the anatomy guide following references/content-context-generator.md
- Generate context questions covering:
  - Topic & subject matter
  - Target audience
  - Goals & outcomes
  - Voice & positioning
- Save to content-context/context-{timestamp}.md
- Report: "✓ Context requirements saved"
Step 5: Meta Prompt Generation
- Following references/meta-prompt-generator.md, create a two-phase prompt:
  - Phase 1 - Context Gathering:
    - Interview the user about the ideas they want to write about
    - Use the context questions from Step 4
    - Ask up to 10 questions if needed
  - Phase 2 - Content Writing:
    - Write 3 variations of each content type
    - Follow the structural patterns from the anatomy guide
- Save to content-meta-prompt/meta-prompt-{timestamp}.md
- Report: "✓ Meta prompt saved"
Step 6: Execute Meta Prompt
- Begin Phase 1: Context Gathering
  - Interview the user with questions from the context requirements
  - Ask up to 10 questions
  - Wait for user responses between questions
- Proceed to Phase 2: Content Writing
  - Generate 3 variations of each content type
  - Follow structural patterns from the anatomy guide
  - Apply the psychological techniques identified
Step 7: Save Content Drafts
- Save the complete output to content-draft/draft-{timestamp}.md
- Include:
  - Context summary from Phase 1
  - All 3 content variations with their hook approaches
  - Pre-flight checklists for each variation
- Report: "✓ Content drafts saved"
File Naming Convention
All generated files use timestamps: {type}-{YYYY-MM-DD-HHmmss}.md
Examples:
- breakdown-2026-01-20-143052.md
- anatomy-2026-01-20-143125.md
- context-2026-01-20-143200.md
- meta-prompt-2026-01-20-143245.md
- draft-2026-01-20-143330.md
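One way to realize this convention is to build a single timestamp per run and derive every output path from it, which also satisfies the same-timestamp traceability rule noted later. The `run_paths` helper and `DIRS` mapping here are hypothetical names used for illustration:

```python
from datetime import datetime
from pathlib import Path

# Maps each file type to the directory it is saved in.
DIRS = {
    "breakdown": "content-breakdown",
    "anatomy": "content-anatomy",
    "context": "content-context",
    "meta-prompt": "content-meta-prompt",
    "draft": "content-draft",
}

def run_paths(now=None):
    """Build one {YYYY-MM-DD-HHmmss} timestamp and derive every output path from it."""
    ts = (now or datetime.now()).strftime("%Y-%m-%d-%H%M%S")
    return {kind: Path(folder) / f"{kind}-{ts}.md" for kind, folder in DIRS.items()}
```

Calling `run_paths()` once at the start of a run yields one consistent set of filenames for all five steps.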
Twitter/X URL Handling
Twitter/X URLs need special handling:
Detection: URL contains twitter.com or x.com
Transform:
- Input: https://x.com/username/status/123456
- API URL: https://api.fxtwitter.com/username/status/123456
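The transformation above amounts to a simple URL rewrite. This sketch (the `to_fxtwitter` name is illustrative) covers twitter.com, x.com, and www-prefixed variants, and passes other URLs through unchanged:

```python
import re

def to_fxtwitter(url):
    """Rewrite twitter.com / x.com status URLs to the FxTwitter API form."""
    match = re.match(
        r"https?://(?:www\.)?(?:twitter\.com|x\.com)/([^/]+)/status/(\d+)",
        url,
    )
    if not match:
        return url  # leave non-Twitter URLs untouched
    username, status_id = match.groups()
    return f"https://api.fxtwitter.com/{username}/status/{status_id}"
```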
Error Handling
Failed URL Fetches
- Track which URLs failed
- Continue with successfully fetched content
- Report failures to user
No Valid Content
- If all URL fetches fail, ask for alternative URLs or direct content paste
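A minimal sketch of this partial-failure policy might look like the following; the `fetch_all` helper and its `fetch` callable parameter are illustrative stand-ins for the actual web_fetch tool, not part of the skill:

```python
def fetch_all(urls, fetch):
    """Fetch each URL, keeping partial results and recording failures."""
    results, failures = {}, []
    for url in urls:
        try:
            results[url] = fetch(url)
        except Exception as exc:
            failures.append((url, str(exc)))
    if not results:
        # All fetches failed: the caller should ask for alternative
        # URLs or a direct content paste instead of proceeding.
        raise RuntimeError("All URL fetches failed")
    return results, failures
```

The failure list is then reported back to the user while analysis continues on whatever content was retrieved.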
Important Notes
- Use the same timestamp across all files in a single run for traceability
- Preserve all generated files—never overwrite previous runs
- Wait for user input during Phase 1 context gathering
- Generate exactly 3 variations in Phase 2