code-quality-fix-all
Fix code quality issues identified in a code quality review stored in agent_artefacts/code_quality/<topic>/. Systematically addresses issues found by the code-quality-review-all skill for ANY code quality topic, with validation and testing at each step. Use when user asks to fix issues from a code quality review, or asks to fix issues from agent_artefacts/code_quality/<topic>.
Install
```shell
mkdir -p .claude/skills/code-quality-fix-all && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9487" && unzip -o skill.zip -d .claude/skills/code-quality-fix-all && rm skill.zip
```

Installs to `.claude/skills/code-quality-fix-all`
About this skill
Code Quality Fix All
Fix code quality issues identified in a code quality review. This skill systematically addresses issues found by the code-quality-review-all skill for ANY code quality topic, with validation and testing at each step.
Expected Arguments
When invoked, this skill expects the path to a code quality topic as an argument (e.g., agent_artefacts/code_quality/private_api_imports).
If not provided, the skill will ask the user for the topic path. Within the topic path, there are several files:
- README.md - contains description of the issue and examples of how to fix it
- results.json - contains list of all identified issues
- SUMMARY.md - contains summary of the identified issues
Unless specified otherwise in the arguments, filters and options are gathered interactively after the skill starts, using the AskUserQuestion tool to present the choices:
- Which issue types to target (all, or specific types)
- Fix complexity level (easy only, medium and below, or all)
- Which evaluations to fix (all, specific ones, or evaluations with a small number of issues)
- Maximum number of issues to fix in this run
Workflow
Phase 1: Understanding the Topic and Planning
1. **Read topic documentation**
   - Read the README.md to understand:
     - What code quality issue this topic addresses
     - Why it matters (stability, maintainability, etc.)
     - How to detect the issue
     - How to fix the issue (fix patterns, examples)
   - Read results.json to get all identified issues
   - Identify which issues are in scope based on the arguments
2. **Analyze and categorize issues**
   - Analyze fix complexity based on:
     - issue_description and suggested_fix from results.json
     - Fix examples in README.md
   - Classify each issue as:
     - Easy: single-line changes with a clear fix pattern in the README
     - Medium: multi-line changes with a well-documented fix approach
     - Hard: no clear fix pattern; requires research or copying code
   - Group issues by evaluation and issue type
   - Generate statistics to present to the user
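The classification and grouping above can be sketched as a small heuristic over results.json entries. The field names `suggested_fix`, `evaluation`, and `issue_type` come from the review output described in this document; the keyword-based complexity heuristic itself is an illustrative assumption, not part of the skill:

```python
def classify_complexity(issue: dict) -> str:
    """Heuristically bucket an issue as easy, medium, or hard."""
    fix = (issue.get("suggested_fix") or "").strip().lower()
    # No suggested fix, or a fix that needs research/copying, is hard
    if not fix or "copy" in fix or "research" in fix:
        return "hard"
    # Short, single-line replacement text suggests an easy fix
    if "\n" not in fix and len(fix) < 120:
        return "easy"
    return "medium"


def group_issues(issues: list[dict], key: str) -> dict[str, list[dict]]:
    """Group issues by a field such as 'evaluation' or 'issue_type'."""
    groups: dict[str, list[dict]] = {}
    for issue in issues:
        groups.setdefault(issue.get(key, "unknown"), []).append(issue)
    return groups
```

The counts of each group and complexity bucket are what get presented to the user in the filtering step.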
3. **Ask user for filtering preferences**
   - Use the AskUserQuestion tool to ask:
     - Which evaluations to fix? (all / specific ones / most affected)
     - Which issue types to target? (all / specific types)
     - Fix complexity level? (easy only / easy+medium / all)
     - Max issues per run? (all / limit to a specific number)
   - Apply filters based on the user's responses
   - Present the filtered plan with:
     - Number of issues to fix
     - Breakdown by evaluation and issue type
     - Complexity distribution
   - Ask for final confirmation to proceed
4. **Validate understanding of fixes**
   - For each unique issue type in scope:
     - Check whether README.md documents how to fix it
     - Look for "Good Examples" and "Bad Examples" sections
     - Check the "suggested_fix" field in results.json
   - If the fix approach is unclear for any issue type:
     - Research the correct approach
     - Update `<topic>/README.md` with the findings
     - Ask the user for guidance if still uncertain
Phase 2: Pre-Fix Validation
For each issue to be fixed:
1. **Read and understand context**
   - Read the entire file containing the issue (not just the line)
   - Understand how the problematic code is used
   - Look for related issues in the same file
   - Check for patterns that might affect the fix (e.g., multiple occurrences)
   - Identify any cascading changes needed (related imports, type hints, etc.)
2. **Validate the suggested fix**
   - Review the "suggested_fix" from results.json
   - Check it against the fix patterns in README.md
   - Verify the fix won't break functionality
   - For complex fixes:
     - Check that dependencies/alternatives actually exist
     - Validate that replacement code follows the same patterns
     - Consider edge cases
3. **Estimate change scope**
   - Count how many lines will change for this fix
   - Identify whether cascading changes are needed
   - Determine whether multiple files need updating
   - If changes exceed 100 lines for a single issue:
     - Alert the user with the issue details, why the change is large, and what will change
     - Get explicit approval before proceeding
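One way to estimate change scope before applying an edit is to diff the current file contents against the proposed contents and count changed lines. A minimal sketch using the standard library (the 100-line alert threshold comes from the step above):

```python
import difflib


def count_changed_lines(original: str, proposed: str) -> int:
    """Count added and removed lines between two versions of a file."""
    diff = difflib.unified_diff(
        original.splitlines(), proposed.splitlines(), lineterm=""
    )
    return sum(
        1
        for line in diff
        # Count +/- hunk lines, but skip the "+++"/"---" file headers
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )
```

If `count_changed_lines(...)` exceeds 100 for a single issue, the workflow pauses for explicit user approval.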
Phase 3: Applying Fixes
1. **Apply fixes systematically**
   - Create a new branch for the fixes, with a name like `agent/<short_description_of_issue>`
   - Process one evaluation at a time
   - Within each evaluation, group by issue type
   - For each fix:
     - Use the Edit tool to apply the change
     - Follow the suggested_fix guidance
     - Apply the fix patterns from README.md
     - Handle related issues in the same file together
     - Add comments if the fix requires it (e.g., copied-code attribution)
     - Track what was fixed
2. **Verify changes compile/parse**
   - After fixing each file, validate that:
     - The file is syntactically valid (Python can parse it)
     - No obvious import errors were introduced
     - The code follows repository patterns
   - If validation fails:
     - Investigate the issue
     - Attempt to fix the validation error
     - Roll back the change if it cannot be resolved
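For Python files, the syntactic-validity check can be as simple as attempting to parse the modified source (a sketch; detecting import errors would need a separate step, such as actually importing the module in a subprocess):

```python
import ast


def is_syntactically_valid(source: str, filename: str = "<fix>") -> bool:
    """Return True if the given Python source parses cleanly."""
    try:
        ast.parse(source, filename=filename)
        return True
    except SyntaxError:
        return False
```

Read the fixed file's contents and call this before moving on; a `False` result triggers the investigate/fix/rollback path above.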
3. **Track progress**
   - Maintain a list of:
     - Issues successfully fixed (file, line, issue type)
     - Issues that couldn't be fixed (with reasons)
     - Evaluations that have been modified
     - Files that were changed
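A minimal in-memory tracker covering the four lists above might look like the following (a sketch; the field and method names are illustrative, not part of the skill):

```python
from dataclasses import dataclass, field


@dataclass
class FixProgress:
    """Running record of what this fix run has touched."""

    fixed: list[tuple[str, int, str]] = field(default_factory=list)      # (file, line, issue_type)
    not_fixed: list[tuple[str, int, str]] = field(default_factory=list)  # (file, line, reason)
    evaluations_modified: set[str] = field(default_factory=set)
    files_changed: set[str] = field(default_factory=set)

    def record_fix(self, file: str, line: int, issue_type: str, evaluation: str) -> None:
        self.fixed.append((file, line, issue_type))
        self.files_changed.add(file)
        self.evaluations_modified.add(evaluation)
```

This is the data that later feeds the SUMMARY.md "Recent Fixes" section and the PR description.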
Phase 4: Testing and Validation
1. **Run linting**
   - Run the repository's linter on the modified files (ruff, flake8, mypy, etc.)
   - Check for:
     - Import errors
     - Type-checking errors
     - Style violations introduced by the changes
   - Fix any linting issues that result from the changes
   - If a linting issue can't be fixed, document it
2. **Run unit tests**
   - Identify the test files for each modified evaluation
   - Run unit tests for affected evaluations using pytest. Basic test commands:

     ```shell
     # Install relevant packages in the event of import failure
     uv sync --extra test

     # Run tests for a specific evaluation
     uv run pytest tests/<evaluation_name>/

     # Run a specific test file
     uv run pytest tests/test_file.py

     # Run a specific test
     uv run pytest tests/test_file.py::TestClass::test_method

     # Run slow tests (excluded by default)
     uv run pytest --runslow tests/

     # Skip dataset download tests
     uv run pytest -m 'not dataset_download' tests/

     # Run only slow tests
     uv run pytest -m slow tests/
     ```

   - Test markers to be aware of:
     - `@pytest.mark.slow` - tests taking >10 seconds
     - `@pytest.mark.dataset_download` - tests that download datasets
     - `@pytest.mark.docker` - tests using Docker
     - `@pytest.mark.huggingface` - HuggingFace-related tests
   - Focus on the tests for the specific evaluation
   - Look for test failures or errors
   - IMPORTANT: Do NOT run full evaluations (they take too long) unless the user explicitly requests it
3. **Handle test failures**
   - For each test failure:
     - Read the test output carefully
     - Determine whether the failure is caused by the fix
     - Check whether it's a pre-existing failure
   - If caused by the fix:
     - Try to adjust the fix to make the tests pass
     - If it cannot be resolved, roll back the change
     - Document the issue for user review
   - If pre-existing:
     - Note it but don't block on it
     - Inform the user
Phase 5: Re-Review and Handle Remaining Issues
1. **Update results.json with fix status**
   - For each issue that was fixed, add a `"fix_status"` field after `"suggested_fix"`:

     ```json
     {
       ...
       "suggested_fix": "...",
       "fix_status": "fixed - please review"
     }
     ```

   - For issues that couldn't be fixed, add an explanation: `"fix_status": "not fixed - reason: ..."`
   - IMPORTANT: Do NOT remove any entries from results.json - only add or update "fix_status"
   - The code-quality-review-all skill owns results.json and is responsible for removing entries
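Annotating entries without ever deleting them can be done with a small in-place update of results.json. This sketch assumes each entry carries `file` and `line` fields to identify it, which is an illustrative guess at the schema rather than something this document specifies:

```python
import json
from pathlib import Path


def mark_fixed(results_path: str, fixed: set[tuple[str, int]]) -> None:
    """Annotate matching entries with fix_status; never delete entries."""
    path = Path(results_path)
    results = json.loads(path.read_text())
    for entry in results:
        key = (entry.get("file"), entry.get("line"))
        if key in fixed:
            entry["fix_status"] = "fixed - please review"
    # Write everything back: the review skill owns entry removal
    path.write_text(json.dumps(results, indent=2) + "\n")
```

Entries for issues that could not be fixed would get a `"not fixed - reason: ..."` status via the same pattern.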
2. **Re-run code quality review**
   - IMPORTANT: Use the Task tool to spawn a subagent running the code-quality-review-all skill
   - Pass the same topic path; this will update results.json with the current state
   - Compare the results before and after to identify:
     - Issues that are now resolved (no longer appear)
     - New issues that may have been introduced
     - Issues that still remain despite fix attempts
3. **Fix remaining issues if in scope**
   - For each new or remaining in-scope issue:
     - Investigate why the previous fix didn't work
     - Attempt an alternative fix approach
     - Update "fix_status" with the results of the attempt
   - Repeat this process until no more in-scope issues can be fixed
4. **Update the topic's README.md**
   - Add any knowledge discovered during this run that will help detect or fix topic-related issues in the future
   - Do not remove examples of bad code or patterns that were fixed - they will be useful in future reviews and fixes of future evaluations
5. **Update SUMMARY.md**
   - Add a "Recent Fixes" section with:
     - Date of the fix run
     - Number of issues fixed
     - Which evaluations were updated
   - Keep historical data (don't remove past information)
   - Update the recommendations to reflect remaining work
6. **Run markdown linters**
   - Use `uv run pre-commit run markdownlint-fix` to fix markdown linting issues
Phase 6: Create PR Description and Present Results
1. **Create/update the PR description (cumulative)**
   - Read the existing `PR_DESCRIPTION.md` if it exists (from previous runs)
   - Cumulative tracking: the PR description represents ALL changes from the branch base, not just this run
   - If PR_DESCRIPTION.md exists:
     - Parse the existing content to extract previous runs' data
     - Append information from this run
     - Update the cumulative statistics
   - If PR_DESCRIPTION.md doesn't exist (first run), create a new file
   - Format for a GitHub/GitLab pull request with:
     - Summary: brief overview of the code quality topic and total fixes (2-3 sentences)
- **Overal
Content truncated.