# prompt-analysis

Analyze AI prompting patterns and acceptance rates

## Install
```bash
mkdir -p .claude/skills/prompt-analysis && curl -L -o skill.zip "https://mcp.directory/api/skills/download/1838" && unzip -o skill.zip -d .claude/skills/prompt-analysis && rm skill.zip
```

Installs to `.claude/skills/prompt-analysis`.
## About this skill

### Prompt Analysis Skill

Analyze AI prompting patterns using the local `prompts.db` SQLite database.

## What is Git AI?
Git AI is a tool that tracks AI-generated code and prompts in git. It stores:
- Every AI conversation (prompts and responses)
- Which lines of code came from AI vs human edits
- Acceptance rates (how much AI code was kept vs modified)
- Associated commits and authors
This skill queries that data to help users understand their AI coding patterns.
## Initialization
First, determine scope from the user's question:
| User mentions | Flags to use |
|---|---|
| "my prompts" or nothing specified | (default - current user, current repo) |
| "team", "everyone", "all authors" | --all-authors |
| "all projects", "all repos" | --all-repositories |
| specific person's name | --author "<name>" |
| specific time range | --since <days> (default: 30) |
Run initialization:

```bash
git-ai prompts [flags]
```

This creates/updates `prompts.db` in the current directory.
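For example, a team-wide view of the last 90 days across every repository would combine flags from the scope table above (assuming, as seems intended, that the flags compose freely):

```bash
# All authors, all repositories, last 90 days (flags from the scope table above)
git-ai prompts --all-authors --all-repositories --since 90
```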
## Schema Reference

The `prompts` table contains:

- `seq_id` - Auto-increment ID for iteration
- `id` - Unique prompt identifier
- `tool` - Tool used (e.g., "claude-code", "cursor")
- `model` - Model name (e.g., "claude-sonnet-4-20250514")
- `human_author` - Git user who created the prompt
- `commit_sha` - Associated commit (if any)
- `total_additions`, `total_deletions` - Lines of code changed
- `accepted_lines`, `overridden_lines` - Lines kept vs modified by human
- `accepted_rate` - Ratio: accepted / (accepted + overridden)
- `messages` - JSON array of the conversation
- `start_time`, `last_time` - Unix timestamps
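As a quick orientation query against this schema, the following sketch (illustrative, not part of the skill itself) lists the ten prompts whose output was most heavily rewritten:

```bash
# Illustrative: lowest-acceptance prompts first (columns from the schema above)
git-ai prompts exec "SELECT id, tool, model, accepted_rate FROM prompts WHERE accepted_rate IS NOT NULL ORDER BY accepted_rate ASC LIMIT 10"
```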
## Analysis Approaches

### For aggregate questions (metrics, comparisons)

Use direct SQL queries:

```bash
git-ai prompts exec "SELECT model, AVG(accepted_rate), COUNT(*) FROM prompts GROUP BY model"
```
### For per-prompt analysis (categorization, content analysis)

When questions require examining each prompt's content (the `messages` JSON), use subagents:

1. Add analysis columns to the schema:

   ```bash
   git-ai prompts exec "ALTER TABLE prompts ADD COLUMN work_type TEXT"
   git-ai prompts exec "ALTER TABLE prompts ADD COLUMN analysis_notes TEXT"
   ```

2. Reset the iteration pointer:

   ```bash
   git-ai prompts reset
   ```

3. Iterate with subagents - launch parallel subagents using the Task tool with `subagent_type: "general-purpose"`. Each subagent:
   - Runs `git-ai prompts next` to get one prompt as JSON
   - Analyzes the `messages` content
   - Updates the database: `git-ai prompts exec "UPDATE prompts SET work_type='...' WHERE id='...'"`

   (A sequential sketch of this cycle follows the list.)

4. Final synthesis - query the enriched data:

   ```bash
   git-ai prompts exec "SELECT work_type, COUNT(*), AVG(accepted_rate) FROM prompts GROUP BY work_type"
   ```
## Subagent Pattern for Iteration
When processing prompts individually, spawn multiple subagents in parallel. Each subagent prompt should include:
```
Run `git-ai prompts next` to get the next prompt.
Analyze the messages JSON to determine: [specific analysis task]
Then update the database:
git-ai prompts exec "UPDATE prompts SET [column]='[value]' WHERE id='[prompt_id]'"
Return your analysis result.
```

Spawn 3-5 subagents at a time, check the results, and spawn more until `git-ai prompts next` returns "No more prompts."
IMPORTANT: The `git-ai prompts next` command returns ALL the data needed for analysis as JSON, including:

- The full `messages` array with the complete conversation (human prompts and AI responses)
- Metadata like `model`, `tool`, `accepted_rate`, `accepted_lines`, etc.

Subagents should NOT run additional commands like `git show` or `git log` - everything needed is in the JSON output from `git-ai prompts next`. Instruct subagents explicitly:

```
IMPORTANT: All data you need is in the JSON output from `git-ai prompts next`.
Do NOT run git commands. Analyze the `messages` field in the JSON directly.
```
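To make that contract concrete, one `git-ai prompts next` result might look roughly like this; the field names come from the schema above, but the exact structure is an assumption, not documented output:

```jsonc
// Hypothetical shape - field names from the schema, structure assumed
{
  "seq_id": 42,
  "id": "a1b2c3d4",
  "tool": "claude-code",
  "model": "claude-sonnet-4-20250514",
  "human_author": "Jane Doe",
  "accepted_rate": 0.82,
  "accepted_lines": 41,
  "overridden_lines": 9,
  "messages": [
    { "role": "human", "content": "Fix the failing login test" },
    { "role": "assistant", "content": "The test fails because..." }
  ]
}
```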
## Iterator Examples

### Example 1: Categorize prompts by work type

User asks: "Categorize my prompts by work type (bug fix, feature, refactor, docs)"

Setup:

```bash
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN work_type TEXT"
git-ai prompts reset
```
Subagent prompt:

```
Run `git-ai prompts next` to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the `messages` field directly.

Read the messages JSON and categorize this prompt into ONE of:
- "bug_fix" - fixing broken behavior, errors, or regressions
- "feature" - adding new functionality
- "refactor" - restructuring code without changing behavior
- "docs" - documentation, comments, READMEs
- "test" - adding or modifying tests
- "config" - configuration, build, CI/CD changes
- "other" - doesn't fit above categories

Update the database:
git-ai prompts exec "UPDATE prompts SET work_type='<category>' WHERE id='<prompt_id>'"

Return: the prompt id, your categorization, and a one-sentence reason.
```
Synthesis query:

```sql
SELECT work_type, COUNT(*) as count,
       ROUND(AVG(accepted_rate), 3) as avg_acceptance,
       SUM(accepted_lines) as total_lines
FROM prompts
WHERE work_type IS NOT NULL
GROUP BY work_type
ORDER BY count DESC
```
### Example 2: Analyze why prompts had low acceptance

User asks: "Why do some of my prompts have low acceptance rates?"

Setup:

```bash
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN low_acceptance_reason TEXT"
git-ai prompts exec "UPDATE pointers SET current_seq_id = (SELECT MIN(seq_id) - 1 FROM prompts WHERE accepted_rate < 0.5 AND accepted_rate IS NOT NULL)"
```
Subagent prompt:

```
Run `git-ai prompts next` to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the `messages` field directly.

This prompt had a low acceptance rate (human modified most of the AI's code).
Analyze the messages JSON and identify the likely reason:
- "vague_request" - the prompt was unclear or underspecified
- "wrong_approach" - AI took a fundamentally wrong approach
- "style_mismatch" - code worked but didn't match project conventions
- "partial_solution" - AI only solved part of the problem
- "overengineered" - AI added unnecessary complexity
- "context_missing" - AI lacked necessary context about the codebase
- "other" - explain briefly

Update the database:
git-ai prompts exec "UPDATE prompts SET low_acceptance_reason='<reason>' WHERE id='<prompt_id>'"

Return: prompt id, the reason, and specific evidence from the conversation.
```
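No synthesis query is shown for this example; following the pattern of Examples 1 and 4, one could aggregate the enriched column like this:

```sql
SELECT low_acceptance_reason, COUNT(*) as count,
       ROUND(AVG(accepted_rate), 3) as avg_acceptance
FROM prompts
WHERE low_acceptance_reason IS NOT NULL
GROUP BY low_acceptance_reason
ORDER BY count DESC
```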
### Example 3: Identify prompts that could be turned into reusable patterns

User asks: "Which of my prompts solved problems I might face again?"

Setup:

```bash
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN reusable_pattern TEXT"
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN pattern_description TEXT"
git-ai prompts reset
```
Subagent prompt:

```
Run `git-ai prompts next` to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the `messages` field directly.

Analyze whether this prompt represents a reusable pattern worth saving:
- Look for: common coding tasks, useful abstractions, clever solutions
- Skip: one-off fixes, highly context-specific changes, trivial edits

If reusable, set reusable_pattern to a short name (e.g., "api_error_handling", "form_validation", "test_mocking")
and pattern_description to a one-sentence description of what it does.
If not reusable, set both to NULL.

git-ai prompts exec "UPDATE prompts SET reusable_pattern='<name>', pattern_description='<desc>' WHERE id='<prompt_id>'"

Return: prompt id and whether you marked it as reusable (with the pattern name if yes).
```
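Again no synthesis query is given; a sketch in the same style, listing each pattern with one sample description, could be:

```sql
SELECT reusable_pattern, COUNT(*) as count,
       MIN(pattern_description) as sample_description
FROM prompts
WHERE reusable_pattern IS NOT NULL
GROUP BY reusable_pattern
ORDER BY count DESC
```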
### Example 4: Score prompt quality/clarity

User asks: "How clear are my prompts? Which ones could I have written better?"

Setup:

```bash
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN clarity_score INTEGER"
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN clarity_feedback TEXT"
git-ai prompts reset
```
Subagent prompt:

```
Run `git-ai prompts next` to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the `messages` field directly.

Score the HUMAN's prompt clarity from 1-5:
5 = Crystal clear: specific goal, context provided, constraints stated
4 = Good: clear intent, minor ambiguities
3 = Adequate: understandable but missing helpful context
2 = Vague: required AI to make significant assumptions
1 = Unclear: AI had to guess what was wanted

Also provide brief feedback on how the prompt could be improved.

git-ai prompts exec "UPDATE prompts SET clarity_score=<1-5>, clarity_feedback='<feedback>' WHERE id='<prompt_id>'"

Return: prompt id, score, and your feedback.
```
Synthesis query:

```sql
SELECT clarity_score, COUNT(*) as count,
       ROUND(AVG(accepted_rate), 3) as avg_acceptance
FROM prompts
WHERE clarity_score IS NOT NULL
GROUP BY clarity_score
ORDER BY clarity_score DESC
```
### Example 5: Correlate prompting techniques with acceptance rate

User asks: "Correlate my prompting techniques with acceptance rate"

Setup:

```bash
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN technique TEXT"
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN technique_notes TEXT"
git-ai prompts reset
```
Subagent prompt:

```
Run `git-ai prompts next` to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the `messages` field directly.

Analyze the HUMAN's prompting technique in the messages. Identify which techniques were used:
- "example_driven" - provided examples of desired output or behavior
- "step_by_step" - broke down the request into steps or phases
- "context_heavy" - provided extensive background/context about the codebase
- "minimal" - terse, brief request with little context
- "iterative" - built up solution through back-and-forth refinement
- "constraint_focus
```
---
*Content truncated.*