prompt-mastery
Advanced LLM prompt engineering expertise for crafting highly effective prompts, system messages, and tool descriptions with Claude-specific techniques
Install
mkdir -p .claude/skills/prompt-mastery && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3882" && unzip -o skill.zip -d .claude/skills/prompt-mastery && rm skill.zipInstalls to .claude/skills/prompt-mastery
About this skill
Prompt Mastery: Advanced LLM Prompt Engineering Skill
Purpose: This skill provides comprehensive prompt engineering expertise, distilling cutting-edge research, best practices, and battle-tested techniques for crafting highly effective prompts for Large Language Models, with special emphasis on Claude.
When to invoke: Use this skill when creating system prompts, tool descriptions, agent instructions, or any situation requiring sophisticated prompt design. Invoke before writing any significant prompts to leverage advanced techniques.
🎯 Core Principles
The Fundamentals (Never Violate These)
- Clarity Over Cleverness: Clear, explicit instructions consistently outperform clever wordplay
- Specificity Beats Vagueness: Precise requirements produce better results than general requests
- Context is Currency: Provide enough background for the model to understand the task deeply
- Structure Reduces Ambiguity: Well-structured prompts eliminate confusion
- Iteration is Essential: First prompts rarely achieve perfection—treat prompting as iterative
The 2025 Paradigm Shift
Modern prompt engineering isn't about asking questions—it's about designing the question space that guides models toward accurate, relevant, and actionable outputs. Context engineering enables you to shape not just what you ask, but how the model interprets and responds.
🔷 Claude-Specific Techniques
Official Anthropic Best Practices (2025)
1. Be Clear and Explicit
Claude 4 models (Sonnet 4.5, Sonnet 4, Haiku 4.5, Opus 4.1, Opus 4) respond best to clear, explicit instructions. If you want "above and beyond" behavior, explicitly request it—don't assume the model will infer it.
<!-- GOOD: Explicit and clear -->
<instruction>
Analyze this code for security vulnerabilities. Provide:
1. A severity rating (Critical/High/Medium/Low)
2. Specific line numbers where issues occur
3. Concrete remediation steps with code examples
4. Explanation of the attack vector
</instruction>
<!-- BAD: Vague and implicit -->
<instruction>
Look at this code and tell me if there are any problems.
</instruction>
2. Provide Context and Motivation
Explaining why a behavior is important helps Claude understand your goals and deliver more targeted responses.
<context>
We're building a medical diagnosis system where accuracy is critical.
False positives can cause unnecessary anxiety, while false negatives
could delay treatment. We need responses that:
- Cite specific medical literature when making claims
- Express uncertainty appropriately
- Never provide definitive diagnoses (legal requirement)
</context>
3. Use XML Tags for Structure
Claude was trained with XML tags, so they provide exceptional control over output structure and interpretation.
<task>
Analyze the following customer feedback and extract insights.
</task>
<feedback>
[Customer feedback text here]
</feedback>
<output_format>
- sentiment: positive/negative/neutral
- key_themes: list of main topics
- action_items: specific recommendations
- urgency: high/medium/low
</output_format>
<guidelines>
- Focus on actionable insights
- Distinguish between individual complaints and systemic issues
- Flag any mentions of competitors
</guidelines>
4. Leverage Thinking Capabilities
Claude 4 offers thinking capabilities for reflection after tool use or complex multi-step reasoning. Guide initial or interleaved thinking for better results.
<thinking_instructions>
Before answering:
1. Identify what type of problem this is (optimization, debugging, architecture)
2. Consider edge cases that might break the obvious solution
3. Evaluate tradeoffs between performance, maintainability, and complexity
4. Only then propose your approach
</thinking_instructions>
5. Instruction Placement Matters
Claude follows instructions in human messages (user prompts) better than those in the system message. Place critical requirements in the user message.
# BETTER: Critical instructions in user message
messages = [
{"role": "system", "content": "You are a helpful coding assistant."},
{"role": "user", "content": """
Write a Python function to calculate Fibonacci numbers.
CRITICAL REQUIREMENTS:
- Use memoization for efficiency
- Include type hints
- Add comprehensive docstrings
- Handle edge cases (n=0, n=1, negative n)
"""}
]
# WORSE: Critical instructions only in system message
messages = [
{"role": "system", "content": """
You are a helpful coding assistant.
Always use memoization, type hints, docstrings, and handle edge cases.
"""},
{"role": "user", "content": "Write a Python function to calculate Fibonacci numbers."}
]
6. Prefill Claude's Response (Powerful Technique!)
Guide Claude by starting the Assistant message with desired initial text. This skips preambles, enforces formats, and increases control.
# Force JSON output by prefilling
messages = [
{"role": "user", "content": "Extract key entities from: 'Apple announced the iPhone 15 in Cupertino on Sept 12.'"},
{"role": "assistant", "content": "{"} # Prefill starts JSON
]
# Claude continues: "entities": [{"name": "Apple", "type": "company"}, ...]
Use cases for prefilling:
- Skip preambles: Start with your desired first word
- Enforce JSON/XML: Begin with
{or< - Maintain character in roleplay: Start with character voice
- Control tone: Begin with desired emotional register
Limitation: Prefilling doesn't work with extended thinking mode.
🚀 Advanced Prompting Techniques
Chain-of-Thought (CoT) Prompting
CoT enhances reasoning by incorporating logical steps within the prompt, making models more adept at complex tasks.
<example_problem>
Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
Reasoning:
1. Roger started with 5 balls
2. He bought 2 cans, each with 3 balls
3. New balls = 2 cans × 3 balls/can = 6 balls
4. Total = 5 + 6 = 11 balls
Answer: 11 tennis balls
</example_problem>
<your_problem>
[Insert actual problem here]
</your_problem>
<instruction>
Solve the problem above using the same step-by-step reasoning approach.
</instruction>
Key insight: Combine CoT with few-shot prompting for complex tasks requiring thoughtful reasoning. Provide 2-3 examples showing the reasoning process explicitly.
Zero-Shot CoT: Simply add "Let's think step by step" to prompts—effective but usually less powerful than few-shot CoT.
Few-Shot Prompting
Enable in-context learning by providing demonstrations that condition the model for subsequent examples.
<examples>
<example>
<input>The product arrived damaged and customer service was unhelpful.</input>
<output>
sentiment: negative
issues: [product_quality, customer_service]
priority: high
</output>
</example>
<example>
<input>Love the new features! The interface is so intuitive.</input>
<output>
sentiment: positive
issues: []
priority: low
</output>
</example>
<example>
<input>The app crashes every time I try to export data.</input>
<output>
sentiment: negative
issues: [software_bug]
priority: critical
</output>
</example>
</examples>
<new_input>
[Your actual input here]
</new_input>
Best practices:
- 2-5 examples optimal (more isn't always better)
- Examples should be representative of edge cases
- Maintain consistent format across examples
- Order examples from simple to complex
Meta-Prompting & Self-Reflection
Use LLMs to create and refine prompts iteratively.
<meta_prompt>
I need to create a prompt for [TASK DESCRIPTION].
Analyze this task and generate an optimized prompt that:
1. Uses appropriate structure (XML tags, sections, etc.)
2. Includes relevant examples if needed
3. Specifies output format clearly
4. Anticipates edge cases
5. Includes quality criteria
After generating the prompt, critique it and identify potential weaknesses,
then provide an improved version addressing those weaknesses.
</meta_prompt>
Reflexion Framework: An iterative approach where:
- Actor generates initial output
- Evaluator scores the output
- Self-Reflection generates verbal feedback for improvement
- Actor regenerates using self-reflection insights
Research shows this can significantly improve performance on decision-making, reasoning, and coding tasks.
Prompt Scaffolding
Wrap user inputs in structured, guarded templates that limit misbehavior—defensive prompting that controls how the model thinks and responds.
<system_guardrails>
You are an AI assistant bound by these constraints:
- Never provide medical diagnoses
- Decline requests for illegal activities
- Express uncertainty when appropriate
- Cite sources when making factual claims
</system_guardrails>
<user_input>
{{USER_QUERY_HERE}}
</user_input>
<response_requirements>
- If request violates guardrails, politely decline and explain why
- If uncertain, say so explicitly
- If task is ambiguous, ask clarifying questions
</response_requirements>
📐 Structured Prompting
XML Prompting (Claude's Native Format)
XML tags are Claude's preferred structure—use them liberally for clarity and control.
Benefits:
- Improved clarity: Separates prompt components
- Reduced ambiguity: Explicit boundaries
- Enhanced consistency: Structured inputs → structured outputs
- Better parseability: Easy to extract specific response parts
Best Practices:
- Use descriptive tag names:
<instruction>,<context>,<example>,<output_format> - Be consistent: Same tag names throughout prompts
- Nest for hierarchy:
<examples><example>...</example></examples> - Reference tags in instructions: "Respond inside
<analysis>tags"
<task>
<objective>Analyze code for security vulnerabilities</objective>
<context>
This is
---
*Content truncated.*
More by taylorsatula
View all skills by taylorsatula →You might also like
flutter-development
aj-geddes
Build beautiful cross-platform mobile apps with Flutter and Dart. Covers widgets, state management with Provider/BLoC, navigation, API integration, and material design.
drawio-diagrams-enhanced
jgtolentino
Create professional draw.io (diagrams.net) diagrams in XML format (.drawio files) with integrated PMP/PMBOK methodologies, extensive visual asset libraries, and industry-standard professional templates. Use this skill when users ask to create flowcharts, swimlane diagrams, cross-functional flowcharts, org charts, network diagrams, UML diagrams, BPMN, project management diagrams (WBS, Gantt, PERT, RACI), risk matrices, stakeholder maps, or any other visual diagram in draw.io format. This skill includes access to custom shape libraries for icons, clipart, and professional symbols.
ui-ux-pro-max
nextlevelbuilder
"UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 8 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwind). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app, .html, .tsx, .vue, .svelte. Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient."
godot
bfollington
This skill should be used when working on Godot Engine projects. It provides specialized knowledge of Godot's file formats (.gd, .tscn, .tres), architecture patterns (component-based, signal-driven, resource-based), common pitfalls, validation tools, code templates, and CLI workflows. The `godot` command is available for running the game, validating scripts, importing resources, and exporting builds. Use this skill for tasks involving Godot game development, debugging scene/resource files, implementing game systems, or creating new Godot components.
nano-banana-pro
garg-aayush
Generate and edit images using Google's Nano Banana Pro (Gemini 3 Pro Image) API. Use when the user asks to generate, create, edit, modify, change, alter, or update images. Also use when user references an existing image file and asks to modify it in any way (e.g., "modify this image", "change the background", "replace X with Y"). Supports both text-to-image generation and image-to-image editing with configurable resolution (1K default, 2K, or 4K for high resolution). DO NOT read the image file first - use this skill directly with the --input-image parameter.
fastapi-templates
wshobson
Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applications or setting up backend API projects.
Related MCP Servers
Browse all serversHoutini LM delivers advanced prompt engineering with 35+ functions for code analysis, generation, security audits, and d
Unlock AI-ready web data with Firecrawl: scrape any website, handle dynamic content, and automate web scraping for resea
Consult LLM escalates complex reasoning tasks to advanced models with code context, git diffs, and detailed cost trackin
Enhance prompt engineering for ChatGPT with ChuckNorris, fetching top prompts for LLMs. Boost prompts engineering for re
Integrate with Microsoft Dataverse to access entities, attributes, and relationships for advanced data modeling and expl
Extract plot data from images with Axiomatic AI. Use advanced web plot digitizer features for scientific imaging and ana
Stay ahead of the MCP ecosystem
Get weekly updates on new skills and servers.