resolve-checks
Resolve all failing CI checks and address PR review feedback on the current branch's PR. Runs tests locally, fixes failures, incorporates valid review comments, and resolves addressed feedback. Use when CI is red, after receiving PR feedback, or before merging.
Install
mkdir -p .claude/skills/resolve-checks && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7086" && unzip -o skill.zip -d .claude/skills/resolve-checks && rm skill.zip
Installs to .claude/skills/resolve-checks
About this skill
Resolve Checks
Systematically resolve all failing CI checks and address PR review feedback by running tests locally first, fixing issues, incorporating valid feedback, and verifying CI passes.
When to Use
- When CI checks are failing on your PR
- When you have unresolved PR review comments
- Before attempting to merge a PR
- When you want to proactively verify all checks pass
- After making changes and before pushing
- After receiving code review feedback
Core Principle
Run tests locally first; don't wait for CI. You have access to the full test suite locally, and catching failures locally is faster than waiting for CI round-trips.
Reference Documentation
For detailed information on test types, setup files, utilities, and common failure patterns, see test-reference.md.
Process
1. Identify the PR
# Get current branch
git branch --show-current
Use GitHub MCP to find the PR:
mcp__github__list_pull_requests with state: "open" and head: "<branch-name>"
2. Run Local Test Suite
Run all tests locally before checking CI status:
cd platform/flowglad-next
# Step 1: Type checking and linting (catches most issues)
bun run check
# Step 2: Backend tests (unit + db combined)
bun run test:backend
# Step 3: Frontend tests
bun run test:frontend
# Step 4: RLS tests (run serially)
bun run test:rls
# Step 5: Integration tests (end-to-end with real APIs)
bun run test:integration
# Step 6: Behavior tests (if credentials available)
bun run test:behavior
Run these sequentially. Fix failures at each step before proceeding to the next.
Note: test:backend combines unit and db tests for convenience. You can also run them separately with test:unit and test:db.
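The sequential run above can be sketched as a small bash helper that stops at the first failing step (a sketch, not part of the repo; run_steps is a hypothetical name):

```shell
#!/usr/bin/env bash
# Sketch: run the test steps above in order, stopping at the first failure.
run_steps() {
  local cmd
  for cmd in "$@"; do
    echo "==> $cmd"
    if ! eval "$cmd"; then
      echo "FAILED: $cmd -- fix this before moving to the next step" >&2
      return 1
    fi
  done
  echo "all steps passed"
}

# Example invocation, from platform/flowglad-next, with the steps listed above:
#   run_steps "bun run check" "bun run test:backend" "bun run test:frontend" \
#             "bun run test:rls" "bun run test:integration" "bun run test:behavior"
```

Stopping at the first failure keeps the feedback loop short: type and lint errors from `bun run check` often explain the test failures that would follow.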
3. Fix Local Failures
When tests fail locally:
- Read the error output carefully - Understand what's actually failing
- Reproduce the specific failure - Run the single failing test file: bun test path/to/failing.test.ts
- Fix the issue - Make the necessary code changes
- Re-run the specific test - Verify your fix works
- Re-run the full suite - Ensure no regressions
Common Failure Patterns
| Failure Type | Likely Cause |
|---|---|
| Type errors | Schema/interface mismatch |
| Lint errors | Style violations |
| Unit test failures | Logic errors, missing MSW mock |
| DB test failures | Schema changes, test data collisions |
| RLS test failures | Policy misconfiguration, parallel execution |
| Integration failures | Invalid credentials, service unavailable |
See test-reference.md for detailed failure patterns and fixes.
4. Check CI Status
After local tests pass, check CI:
mcp__github__get_pull_request_status with owner, repo, and pull_number
Review each check's status. For failing checks:
- Get the failure details from GitHub
- Compare with local results - Did it pass locally?
- If CI-only failure, investigate environment differences:
- Missing environment variables
- Different Node/Bun versions
- Timing/race conditions
- External service availability
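One low-tech way to pin down environment differences is to generate the same short report locally and in a CI step, then diff the two files (a sketch; env-report.txt is an arbitrary filename):

```shell
#!/usr/bin/env bash
# Sketch: write a small environment report; produce one locally and one in CI,
# then diff them to spot version or OS mismatches.
{
  echo "os:   $(uname -sm)"
  echo "bun:  $( { command -v bun  >/dev/null 2>&1 && bun --version;  } || echo 'not installed')"
  echo "node: $( { command -v node >/dev/null 2>&1 && node --version; } || echo 'not installed')"
} > env-report.txt
cat env-report.txt
# Then: diff local-env-report.txt ci-env-report.txt
```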
5. Fix CI-Specific Failures
For failures that only occur in CI:
- Check the CI logs - Look for the actual error message
- Check environment differences:
# Compare local vs CI environment
bun --version
node --version
- Check for parallelism issues - Some tests may not be parallel-safe (see RLS tests)
- Investigate flaky tests - See "Handling Flaky Tests" section below
6. Handling Flaky Tests
CRITICAL: Do not simply re-run CI hoping tests will pass. Flaky tests indicate real problems that must be diagnosed and fixed.
When a test passes locally but fails in CI (or fails intermittently):
Step 1: Identify the Flakiness Pattern
Run the failing test multiple times locally:
# Run 10 times to check for intermittent failures
for i in {1..10}; do bun test path/to/flaky.test.ts && echo "Pass $i" || echo "FAIL $i"; done
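The loop above can also be written as a small reusable helper that reports a failure count and exits nonzero when any run fails (a sketch; run_n is a hypothetical name):

```shell
#!/usr/bin/env bash
# Sketch: run a command N times, report how many runs failed,
# and return nonzero if any run failed.
run_n() {
  local n=$1; shift
  local fails=0 i
  for ((i = 1; i <= n; i++)); do
    "$@" >/dev/null 2>&1 || fails=$((fails + 1))
  done
  echo "$fails/$n runs failed"
  [ "$fails" -eq 0 ]
}

# Example: run_n 10 bun test path/to/flaky.test.ts
```

Even one failure out of ten is enough evidence to move to Step 2 rather than re-running CI.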
Step 2: Diagnose the Root Cause
Common causes of flaky tests:
| Symptom | Root Cause | Fix |
|---|---|---|
| Different results each run | Non-deterministic data (random IDs, timestamps) | Use fixed test data or sort before comparing |
| Timeout failures | Async operation too slow or never resolves | Add proper await, increase timeout, or fix hanging promise |
| Race conditions | Test doesn't wait for async side effects | Use proper async/await, add waitFor(), or test the callback |
| Order-dependent failures | Test relies on state from previous test | Ensure proper setup/teardown isolation |
| Parallel execution conflicts | Tests share mutable state (DB, globals, env vars) | Use unique test data, proper isolation helpers |
| External service failures | Test depends on real API availability | Mock the service or handle unavailability gracefully |
Step 3: Fix the Test (Not Just Re-run)
Always fix the underlying issue:
// BAD: Non-deterministic - order not guaranteed
const results = await db.query.users.findMany()
expect(results).toEqual([user1, user2])
// GOOD: Sort before comparing
const results = await db.query.users.findMany()
const byId = (a, b) => a.id.localeCompare(b.id)
expect(results.sort(byId)).toEqual([user1, user2].sort(byId))
// BAD: Race condition - side effect may not be complete
triggerAsyncOperation()
expect(sideEffect).toBe(true)
// GOOD: Wait for the operation
await triggerAsyncOperation()
expect(sideEffect).toBe(true)
// BAD: Timing-dependent
await sleep(100) // Hope this is enough time
expect(result).toBeDefined()
// GOOD: Poll for condition
await waitFor(() => expect(result).toBeDefined())
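waitFor() above is a JS test utility; the same poll-until-true idea also helps in shell scripts around CI. A minimal sketch, assuming bash (wait_for is a hypothetical helper, not part of the test suite):

```shell
#!/usr/bin/env bash
# Sketch: poll a command until it succeeds or a timeout elapses,
# instead of sleeping a fixed amount and hoping it was long enough.
wait_for() {
  local timeout_s=$1 interval_s=$2; shift 2
  local deadline=$(( $(date +%s) + timeout_s ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "wait_for: timed out after ${timeout_s}s" >&2
      return 1
    fi
    sleep "$interval_s"
  done
}

# Example: wait for a build artifact to appear, checking once per second.
#   wait_for 30 1 test -s output/report.json
```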
Step 4: Verify the Fix
After fixing:
- Run the test 10+ times locally to confirm it's stable
- Push and verify it passes in CI
- If it still fails in CI, there's an environment difference to investigate
7. Push Fixes and Verify
After fixing issues:
# Stage and commit fixes
git add -A
git commit -m "fix: resolve failing checks
- [describe what was fixed]
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push to trigger CI
git push
Then wait for CI to complete and verify all checks pass:
mcp__github__get_pull_request_status with owner, repo, and pull_number
8. Iterate Until Green
Repeat steps 4-7 until all checks pass. Common iteration scenarios:
- New failures appear - Your fix may have caused regressions
- Flaky test still fails - Revisit "Handling Flaky Tests" section, dig deeper into root cause
- CI timeout - Tests may be too slow, need optimization
Remember: The goal is a stable, passing test suite - not a lucky CI run. Every fix should address the root cause.
9. Address PR Review Feedback
After CI checks pass, review and address any PR comments left by reviewers.
Step 1: Fetch PR Comments
Get all review comments on the PR:
mcp__github__get_pull_request_comments with owner, repo, and pull_number
Also get the formal reviews:
mcp__github__get_pull_request_reviews with owner, repo, and pull_number
Step 2: Categorize Each Comment
For each comment, determine if it is:
| Category | Description | Action |
|---|---|---|
| Valid & Actionable | Identifies a real issue, bug, or improvement | Implement the fix |
| Valid but Won't Fix | Correct observation but intentional design choice | Reply explaining the rationale |
| Already Addressed | Issue was fixed in a subsequent commit | Resolve the comment |
| Invalid/Misunderstanding | Based on incorrect assumptions about the code | Reply with clarification |
| Nitpick/Optional | Style preference or minor suggestion | Implement if quick, otherwise discuss |
Step 3: Incorporate Valid Feedback
For each valid comment:
- Read the specific file and line mentioned in the comment
- Understand the concern - What issue is the reviewer pointing out?
- Implement the fix - Make the necessary code changes
- Reply to the comment - Briefly explain what was changed
mcp__github__add_issue_comment with owner, repo, issue_number, and body
Or reply directly to the review comment thread.
Step 4: Resolve Addressed Comments
After incorporating feedback or providing clarification:
For comments you addressed: The comment should be resolved to indicate the feedback was incorporated. If GitHub MCP supports resolving comments, use that. Otherwise, reply with "Done" or "Fixed in [commit hash]" to signal completion.
For invalid comments: Reply with a clear, respectful explanation of why the current implementation is correct or intentional. Include:
- What the code actually does
- Why it's designed this way
- Any relevant context the reviewer may have missed
Example reply for invalid feedback:
This is intentional - the `userId` here refers to the authenticated user making the request, not the target user. The authorization check happens in the middleware at line 45, so by this point we know the user has permission.
Step 5: Handle Review Requests
If the PR has "Changes Requested" status:
- Address all blocking comments from that review
- Re-request review from the reviewer once changes are made:
gh pr edit <PR_NUMBER> --add-reviewer <USERNAME>
Common Review Feedback Patterns
| Feedback Type | How to Address |
|---|---|
| Missing error handling | Add try/catch or Result type handling |
| Type safety concerns | Add proper types, remove any |
| Missing tests | Add test cases for the mentioned scenarios |
| Security issues | Fix immediately, these are blocking |
| Performance concerns | Evaluate and optimize if valid |
| Code clarity | Rename variables, add comments, or refactor |
| Breaking changes | Ensure backwards compatibility or document the migration path |