code-quality-fix-all


Fix code quality issues identified in a code quality review stored in agent_artefacts/code_quality/<topic>/. Systematically addresses issues found by the code-quality-review-all skill for ANY code quality topic, with validation and testing at each step. Use when the user asks to fix issues from a code quality review, or to fix issues from agent_artefacts/code_quality/<topic>.

Install

mkdir -p .claude/skills/code-quality-fix-all && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9487" && unzip -o skill.zip -d .claude/skills/code-quality-fix-all && rm skill.zip

Installs to .claude/skills/code-quality-fix-all

About this skill

Code Quality Fix All

Fix code quality issues identified in a code quality review. This skill systematically addresses issues found by the code-quality-review-all skill for ANY code quality topic, with validation and testing at each step.

Expected Arguments

When invoked, this skill expects the path to a code quality topic as an argument (e.g., agent_artefacts/code_quality/private_api_imports).

If not provided, the skill will ask the user for the topic path. Within the topic path, there are several files:

  • README.md - contains a description of the issue and examples of how to fix it
  • results.json - contains the list of all identified issues
  • SUMMARY.md - contains a summary of the identified issues
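The schema of results.json is not fully specified here; a plausible single entry, inferred from the fields referenced later in this workflow (issue_description, suggested_fix, and the fix_status added in Phase 5), might look like the sketch below. The file, line, and issue_type field names are assumptions:

```python
import json

# Hypothetical results.json entry. Only issue_description, suggested_fix and
# fix_status are named elsewhere in this document; the other fields are guesses.
entry = json.loads("""
{
  "file": "src/example.py",
  "line": 42,
  "issue_type": "private_api_import",
  "issue_description": "Imports a private helper module",
  "suggested_fix": "Import the public wrapper instead"
}
""")
print(entry["issue_type"], entry["line"])
```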

Unless specified in the arguments, filters and options are gathered interactively after the skill starts, using the AskUserQuestion tool to present the choices:

  • Which issue types to target (all / specific types)
  • Fix complexity level (easy only, medium and below, or all)
  • Which evaluations to fix (all / specific ones / evaluations with a small number of issues)
  • Maximum number of issues to fix in this run

Workflow

Phase 1: Understanding the Topic and Planning

  1. Read topic documentation

    • Read the README.md to understand:
      • What code quality issue this topic addresses
      • Why it matters (stability, maintainability, etc.)
      • How to detect the issue
      • How to fix the issue (fix patterns, examples)
    • Read results.json to get all identified issues
    • Identify which issues are in scope based on arguments
  2. Analyze and categorize issues

    • Analyze fix complexity based on:
      • issue_description
      • suggested_fix from results.json
      • Fix examples in README.md
    • Classify as:
      • Easy: Single-line changes, clear fix pattern in README
      • Medium: Multi-line changes, well-documented fix approach
      • Hard: No clear fix pattern, requires research or copying code
    • Group issues by evaluation and issue type
    • Generate statistics for presenting to user
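The Easy/Medium/Hard buckets above could be implemented as a small heuristic over each issue's fields. This is a sketch under assumptions: it treats a one-line suggested_fix plus a documented README pattern as easy, and the readme_has_pattern flag is a hypothetical input derived from scanning the topic's README.md:

```python
def classify_complexity(suggested_fix: str, readme_has_pattern: bool) -> str:
    """Rough heuristic mirroring the Easy/Medium/Hard classification above."""
    if not suggested_fix.strip():
        return "hard"        # no clear fix pattern: requires research
    single_line = "\n" not in suggested_fix.strip()
    if single_line and readme_has_pattern:
        return "easy"        # single-line change with a documented pattern
    if readme_has_pattern:
        return "medium"      # multi-line change, well-documented approach
    return "hard"
```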
  3. Ask user for filtering preferences

    • Use AskUserQuestion tool to ask:
      • Which evaluations to fix? (all / specific ones / most affected)
      • Which issue types to target? (all / specific types)
      • Fix complexity level? (easy only / easy+medium / all)
      • Max issues per run? (all / limit to specific number)
    • Apply filters based on user responses
    • Present filtered plan with:
      • Number of issues to fix
      • Breakdown by evaluation and issue type
      • Complexity distribution
      • Ask for final confirmation to proceed
  4. Validate understanding of fixes

    • For each unique issue type in scope:
      • Check if README.md documents how to fix it
      • Look for "Good Examples" and "Bad Examples" sections
      • Check "suggested_fix" field in results.json
    • If fix approach is unclear for any issue type:
      • Research the correct approach
      • Update <topic>/README.md with findings
      • Ask user for guidance if still uncertain

Phase 2: Pre-Fix Validation

For each issue to be fixed:

  1. Read and understand context

    • Read the entire file containing the issue (not just the line)
    • Understand how the problematic code is used
    • Look for related issues in the same file
    • Check for patterns that might affect the fix (e.g., multiple occurrences)
    • Identify any cascading changes needed (related imports, type hints, etc.)
  2. Validate the suggested fix

    • Review the "suggested_fix" from results.json
    • Check against fix patterns in README.md
    • Verify the fix won't break functionality
    • For complex fixes:
      • Check if dependencies/alternatives actually exist
      • Validate that replacement code follows same patterns
      • Consider edge cases
  3. Estimate change scope

    • Count how many lines will change for this fix
    • Identify if cascading changes are needed
    • Determine if multiple files need updating
    • If changes exceed 100 lines for a single issue:
      • Alert user with:
        • Issue details
        • Why the change is large
        • What will change
      • Get explicit approval before proceeding
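The line-count estimate that drives the 100-line alert can be computed with the standard difflib module; a minimal sketch:

```python
import difflib

LARGE_CHANGE_THRESHOLD = 100  # matches the alert threshold described above

def count_changed_lines(before: str, after: str) -> int:
    """Count added and removed lines between original and fixed file text."""
    diff = difflib.unified_diff(before.splitlines(), after.splitlines(),
                                lineterm="")
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

changed = count_changed_lines("a\nb\nc\n", "a\nB\nc\nd\n")
needs_approval = changed > LARGE_CHANGE_THRESHOLD
```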

Phase 3: Applying Fixes

  1. Apply fixes systematically

    • Create a new branch to apply fixes to, with a name like agent/<short_description_of_issue>
    • Process one evaluation at a time
    • Within each evaluation, group by issue type
    • For each fix:
      • Use Edit tool to apply the change
      • Follow the suggested_fix guidance
      • Apply fix patterns from README.md
      • Handle related issues in the same file together
      • Add comments if the fix requires it (e.g., copied code attribution)
    • Track what was fixed
  2. Verify changes compile/parse

    • After fixing each file, validate:
      • File is syntactically valid (Python can parse it)
      • No obvious import errors introduced
      • Code follows repository patterns
    • If validation fails:
      • Investigate the issue
      • Attempt to fix the validation error
      • Roll back the change if it cannot be resolved
  3. Track progress

    • Maintain a list of:
      • Issues successfully fixed (file, line, issue type)
      • Issues that couldn't be fixed (with reasons)
      • Evaluations that have been modified
      • Files that were changed
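For Python files, the "syntactically valid" check above can be a plain ast.parse call. This is a minimal sketch; a real run may also want a compile() step or an import smoke test:

```python
import ast

def is_syntactically_valid(source: str) -> bool:
    """Return True if Python can parse the modified file contents."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```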

Phase 4: Testing and Validation

  1. Run linting

    • Run repository's linter on modified files (ruff, flake8, mypy, etc.)
    • Check for:
      • Import errors
      • Type checking errors
      • Style violations introduced
    • Fix any linting issues that result from changes
    • If linting issues can't be fixed, document them
  2. Run unit tests

    • Identify test files for each modified evaluation

    • Run unit tests for affected evaluations using pytest:

      Basic test commands:

      # Install relevant packages in the event of import failure
      uv sync --extra test
      
      # Run tests for a specific evaluation
      uv run pytest tests/<evaluation_name>/
      
      # Run a specific test file
      uv run pytest tests/test_file.py
      
      # Run a specific test
      uv run pytest tests/test_file.py::TestClass::test_method
      
      # Run slow tests (excluded by default)
      uv run pytest --runslow tests/
      
      # Skip dataset download tests
      uv run pytest -m 'not dataset_download' tests/
      
      # Run only slow tests
      uv run pytest -m slow tests/
      

      Test markers to be aware of:

      • @pytest.mark.slow - Tests taking >10 seconds

      • @pytest.mark.dataset_download - Tests that download datasets

      • @pytest.mark.docker - Tests using Docker

      • @pytest.mark.huggingface - HuggingFace-related tests

    • Focus on tests for the specific evaluation

    • Look for test failures or errors

    • IMPORTANT: Do NOT run full evaluations (they take too long) unless user explicitly requests it

  3. Handle test failures

    • For each test failure:
      • Read test output carefully
      • Determine if failure is caused by the fix
      • Check if it's a pre-existing failure
    • If caused by fix:
      • Try to adjust the fix to make tests pass
      • If cannot be resolved, rollback the change
      • Document the issue for user review
    • If pre-existing:
      • Note it but don't block on it
      • Inform user

Phase 5: Re-Review and Handle Remaining Issues

  1. Update results.json with fix status

    • For each issue that was fixed, add "fix_status" field after "suggested_fix":

      {
       ...
        "suggested_fix": "...",
        "fix_status": "fixed - please review"
      }
      
    • For issues that couldn't be fixed, add explanation:

      "fix_status": "not fixed - reason: ..."
      
    • IMPORTANT: Do NOT remove any entries from results.json - only add/update "fix_status"

    • The code-quality-review-all skill owns results.json and is responsible for removing entries
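Annotating fix_status while preserving every entry can be sketched as follows; matching issues on (file, line) is an assumption about the results.json schema:

```python
import json

def mark_fixed(results: list[dict], fixed: set[tuple[str, int]]) -> None:
    """Add fix_status to fixed issues in place; never remove entries."""
    for issue in results:
        if (issue.get("file"), issue.get("line")) in fixed:
            issue["fix_status"] = "fixed - please review"

results = json.loads('[{"file": "a.py", "line": 3, "suggested_fix": "..."},'
                     ' {"file": "b.py", "line": 7, "suggested_fix": "..."}]')
mark_fixed(results, {("a.py", 3)})
```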

  2. Re-run code quality review

    • IMPORTANT: Use Task tool to spawn subagent running code-quality-review-all skill
    • Pass the same topic path
    • This will update results.json with current state
    • Compare results before and after to identify:
      • Issues that are now resolved (no longer appear)
      • New issues that may have been introduced
      • Issues that still remain despite fix attempts
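The before/after comparison can key each issue on an identity tuple. Using (file, line, issue_type) is an assumption, and since line numbers shift after edits, a real implementation may need fuzzier matching:

```python
def diff_results(before: list[dict], after: list[dict]):
    """Split issues into resolved, introduced and remaining sets."""
    def key(issue):
        return (issue["file"], issue["line"], issue["issue_type"])
    b, a = {key(i) for i in before}, {key(i) for i in after}
    return b - a, a - b, b & a  # resolved, newly introduced, still remaining

before = [{"file": "a.py", "line": 1, "issue_type": "x"},
          {"file": "b.py", "line": 2, "issue_type": "y"}]
after = [{"file": "b.py", "line": 2, "issue_type": "y"}]
resolved, introduced, remaining = diff_results(before, after)
```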
  3. Fix remaining issues if in scope

    • For each new or remaining in-scope issue:
      • Investigate why previous fix didn't work
      • Attempt alternative fix approach
      • Update "fix_status" with attempt results
    • Repeat this process until no more in-scope issues can be fixed
  4. Update topic's README.md

    • Add any knowledge that you have discovered that will be useful in detecting or fixing topic-related issues in the future
    • Do not remove examples of bad code or patterns that were fixed - they will be useful in future reviews and fixes of future evaluations.
  5. Update SUMMARY.md

    • Add a "Recent Fixes" section with:
      • Date of fix run
      • Number of issues fixed
      • Which evaluations were updated
    • Keep historical data (don't remove past information)
    • Update recommendations to reflect remaining work
  6. Run markdown linters

    • Use uv run pre-commit run markdownlint-fix to fix markdown linting issues

Phase 6: Create PR Description and Present Results

  1. Create/Update PR description (cumulative)
    • Read existing PR_DESCRIPTION.md if it exists (from previous runs)
    • Cumulative tracking: PR description represents ALL changes from branch base, not just this run
    • If PR_DESCRIPTION.md exists:
      • Parse existing content to extract previous runs' data
      • Append information from this run
      • Update cumulative statistics
    • If PR_DESCRIPTION.md doesn't exist (first run):
      • Create new file
    • Format for GitHub/GitLab pull request with:
      • Summary: Brief overview of the code quality topic and total fixes (2-3 sentences)
      • **Overal

Content truncated.
