triage-ci-flake
Use when CI tests fail on main branch after PR merge, or when investigating flaky test failures in CI environments
Install
```bash
mkdir -p .claude/skills/triage-ci-flake && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6235" && unzip -o skill.zip -d .claude/skills/triage-ci-flake && rm skill.zip
```
Installs to `.claude/skills/triage-ci-flake`
About this skill
Triage CI Failure
Overview
Systematic workflow for triaging and fixing test failures in CI, especially flaky tests that pass locally but fail in CI. Tests that made it to main are usually flaky due to timing, bundling, or environment differences.
CRITICAL RULE: You MUST run the reproduction workflow before proposing any fixes. No exceptions.
When to Use
- CI test fails on `main` branch after PR was merged
- Test passes locally but fails in CI
- Test failure labeled as "flaky" or intermittent
- E2E or integration test timing out in CI only
MANDATORY First Steps
YOU MUST EXECUTE THESE COMMANDS. Reading code or analyzing logs does NOT count as reproduction.
- Extract suite name, test name, and error from CI logs
- EXECUTE: Kill port 3000 to avoid conflicts
- EXECUTE: `pnpm dev $SUITE_NAME` (use run_in_background=true)
- EXECUTE: Wait for server to be ready (check with curl or sleep)
- EXECUTE: Run the specific failing test with Playwright directly (`npx playwright test test/TEST_SUITE_NAME/e2e.spec.ts:31:3 --headed -g "TEST_DESCRIPTION_TARGET_GOES_HERE"`)
- If test passes, EXECUTE: `pnpm prepare-run-test-against-prod`
- EXECUTE: `pnpm dev:prod $SUITE_NAME` and run the test again
Only after EXECUTING these commands and seeing their output can you proceed to analysis and fixes.
"Analysis from logs" is NOT reproduction. You must RUN the commands.
Core Workflow
```dot
digraph triage_ci {
  "CI failure reported" [shape=box];
  "Extract details from CI logs" [shape=box];
  "Identify suite and test name" [shape=box];
  "Run dev server: pnpm dev $SUITE" [shape=box];
  "Run specific test by name" [shape=box];
  "Did test fail?" [shape=diamond];
  "Debug with dev code" [shape=box];
  "Run prepare-run-test-against-prod" [shape=box];
  "Run: pnpm dev:prod $SUITE" [shape=box];
  "Run specific test again" [shape=box];
  "Did test fail now?" [shape=diamond];
  "Debug bundling issue" [shape=box];
  "Unable to reproduce - check logs" [shape=box];
  "Fix and verify" [shape=box];
  "CI failure reported" -> "Extract details from CI logs";
  "Extract details from CI logs" -> "Identify suite and test name";
  "Identify suite and test name" -> "Run dev server: pnpm dev $SUITE";
  "Run dev server: pnpm dev $SUITE" -> "Run specific test by name";
  "Run specific test by name" -> "Did test fail?";
  "Did test fail?" -> "Debug with dev code" [label="yes"];
  "Did test fail?" -> "Run prepare-run-test-against-prod" [label="no"];
  "Run prepare-run-test-against-prod" -> "Run: pnpm dev:prod $SUITE";
  "Run: pnpm dev:prod $SUITE" -> "Run specific test again";
  "Run specific test again" -> "Did test fail now?";
  "Did test fail now?" -> "Debug bundling issue" [label="yes"];
  "Did test fail now?" -> "Unable to reproduce - check logs" [label="no"];
  "Debug with dev code" -> "Fix and verify";
  "Debug bundling issue" -> "Fix and verify";
}
```
Step-by-Step Process
1. Extract CI Details
From CI logs or GitHub Actions URL, identify:
- Suite name: Directory name (e.g., `i18n`, `fields`, `lexical`)
- Test file: Full path (e.g., `test/i18n/e2e.spec.ts`)
- Test name: Exact test description
- Error message: Full stack trace
- Test type: E2E (Playwright) or integration (Vitest)
2. Reproduce with Dev Code
CRITICAL: Always run the specific test by name, not the full suite.
SERVER MANAGEMENT RULES:
- ALWAYS kill all servers before starting a new one
- NEVER assume ports are free
- ALWAYS wait for server ready confirmation before running tests
```bash
# ========================================
# STEP 2A: STOP ALL SERVERS
# ========================================
lsof -ti:3000 | xargs kill -9 2>/dev/null || echo "Port 3000 clear"

# ========================================
# STEP 2B: START DEV SERVER
# ========================================
# Start dev server with the suite (in background with run_in_background=true)
pnpm dev $SUITE_NAME

# ========================================
# STEP 2C: WAIT FOR SERVER READY
# ========================================
# Wait for server to be ready (REQUIRED - do not skip)
until curl -s http://localhost:3000/admin > /dev/null 2>&1; do sleep 1; done && echo "Server ready"

# ========================================
# STEP 2D: RUN SPECIFIC TEST
# ========================================
# Run ONLY the specific failing test using Playwright directly
# For E2E tests (DO NOT use pnpm test:e2e as it spawns its own server):
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name"

# For integration tests:
pnpm test:int $SUITE_NAME -t "exact test name"
```
Did the test fail?
- ✅ YES: You reproduced it! Proceed to debug with dev code.
- ❌ NO: Continue to step 3 (bundled code test).
3. Reproduce with Bundled Code
If test passed with dev code, the issue is likely in bundled/production code.
IMPORTANT: You MUST stop the dev server before starting prod server.
```bash
# ========================================
# STEP 3A: STOP ALL SERVERS (INCLUDING DEV SERVER FROM STEP 2)
# ========================================
lsof -ti:3000 | xargs kill -9 2>/dev/null || echo "Port 3000 clear"

# ========================================
# STEP 3B: BUILD AND PACK FOR PROD
# ========================================
# Build all packages and pack them (this takes time - be patient)
pnpm prepare-run-test-against-prod

# ========================================
# STEP 3C: START PROD SERVER
# ========================================
# Start prod dev server (in background with run_in_background=true)
pnpm dev:prod $SUITE_NAME

# ========================================
# STEP 3D: WAIT FOR SERVER READY
# ========================================
# Wait for server to be ready (REQUIRED - do not skip)
until curl -s http://localhost:3000/admin > /dev/null 2>&1; do sleep 1; done && echo "Server ready"

# ========================================
# STEP 3E: RUN SPECIFIC TEST
# ========================================
# Run the specific test again using Playwright directly
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name"

# OR for integration tests:
pnpm test:int $SUITE_NAME -t "exact test name"
```
Did the test fail now?
- ✅ YES: Bundling or production build issue. Look for:
- Missing exports in package.json
- Build configuration problems
- Code that behaves differently when bundled
- ❌ NO: Unable to reproduce locally. Proceed to step 4.
4. Unable to Reproduce
If you cannot reproduce locally after both attempts:
- Review CI logs more carefully for environment differences
- Check for race conditions (run the test multiple times: `for i in {1..10}; do pnpm test:e2e...; done`)
- Look for CI-specific constraints (memory, CPU, timing)
- Consider if it's a true race condition that's highly timing-dependent
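The repeat-run check above can be scripted. Here's a small sketch that runs a test command N times and reports the failure rate; the command and arguments are placeholders for whatever your real `pnpm exec playwright test ...` invocation is:

```typescript
// Sketch: run a test command N times and report how often it fails.
// The cmd/args are placeholders -- substitute your actual test command.
import { spawnSync } from 'node:child_process'

function flakeRate(cmd: string, args: string[], runs = 10): number {
  let fails = 0
  for (let i = 0; i < runs; i++) {
    // stdio: 'ignore' suppresses output; only the exit status matters here
    const result = spawnSync(cmd, args, { stdio: 'ignore' })
    if (result.status !== 0) fails += 1
  }
  return fails / runs
}
```

A rate strictly between 0 and 1 is a strong flakiness signal; 0 across many local runs suggests the failure depends on CI-specific conditions.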
Common Flaky Test Patterns
Race Conditions
- Page navigating while assertions run
- Network requests not settled before assertions
- State updates not completed
Fix patterns:
- Use Playwright's web-first assertions (`toBeVisible()`, `toHaveText()`)
- Wait for specific conditions, not arbitrary timeouts
- Use `waitForFunction()` with condition checks
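The difference between a one-shot assertion and a retryable one can be sketched outside Playwright. This illustrative `expectEventually` helper (not a Playwright API) re-reads a value until a predicate passes or a timeout elapses, which is the behavior web-first assertions like `toBeVisible()` give you for free:

```typescript
// Sketch of a retryable assertion: re-evaluate until the predicate holds
// or the timeout expires. Playwright's web-first assertions do this
// internally; this helper is purely illustrative.
async function expectEventually<T>(
  get: () => T | Promise<T>,
  predicate: (value: T) => boolean,
  timeoutMs = 2000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs
  let last = await get()
  while (!predicate(last)) {
    if (Date.now() >= deadline) {
      throw new Error(`Condition not met within ${timeoutMs}ms (last: ${String(last)})`)
    }
    await new Promise((resolve) => setTimeout(resolve, 50))
    last = await get()
  }
  return last
}
```

A one-shot `expect(value).toBe(42)` fails if the page hasn't settled yet; the retrying form only fails when the condition never becomes true within the timeout.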
Test Pollution
- Tests leaving data in database
- Shared state between tests
- Missing cleanup in `afterEach`
Fix patterns:
- Track created IDs and clean up in `afterEach`
- Use isolated test data
- Don't use `deleteAll` calls that affect other tests
Timing Issues
- `setTimeout`/`sleep` instead of condition-based waiting
- Not waiting for page stability
- Animations/transitions not complete
Fix patterns:
- Use
waitForPageStability()helper - Wait for specific DOM states
- Use Playwright's built-in waiting mechanisms
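A `waitForPageStability()`-style check can be approximated generically: sample a value repeatedly and return once it has stopped changing for a quiet period. The names and defaults below are illustrative, not the project's actual helper:

```typescript
// Sketch: wait until a sampled value stops changing for `quietMs`.
// Useful for "animation finished" / "layout settled" style conditions.
async function waitForStableValue<T>(
  sample: () => T | Promise<T>,
  { quietMs = 300, timeoutMs = 5000, pollMs = 50 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs
  let prev = JSON.stringify(await sample())
  let stableSince = Date.now()
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, pollMs))
    const current = await sample()
    const encoded = JSON.stringify(current)
    if (encoded !== prev) {
      // Value changed: restart the quiet-period clock
      prev = encoded
      stableSince = Date.now()
    } else if (Date.now() - stableSince >= quietMs) {
      return current
    }
  }
  throw new Error(`Value did not stabilize within ${timeoutMs}ms`)
}
```

Unlike a fixed `sleep`, this returns as soon as the value settles and fails loudly if it never does.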
Linting Considerations
When fixing e2e tests, be aware of these eslint rules:
- `playwright/no-networkidle` - Avoid `waitForLoadState('networkidle')` (use condition-based waiting instead)
- `payload/no-wait-function` - Avoid custom `wait()` functions (use Playwright's built-in waits)
- `payload/no-flaky-assertions` - Avoid non-retryable assertions
- `playwright/prefer-web-first-assertions` - Use built-in Playwright assertions
Existing code may violate these rules - when adding new code, follow the rules even if existing code doesn't.
Verification
After fixing:
```bash
# Ensure dev server is running on port 3000
# Run test multiple times to confirm stability
for i in {1..10}; do
  pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name" || break
done

# Run full suite
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts

# If you modified bundled code, test with prod build
lsof -ti:3000 | xargs kill -9 2>/dev/null
pnpm prepare-run-test-against-prod
pnpm dev:prod $SUITE_NAME
until curl -s http://localhost:3000/admin > /dev/null; do sleep 1; done
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts
```
The Iron Law
NO FIX WITHOUT REPRODUCTION FIRST
If you propose a fix before completing steps 1-3 of the workflow, you've violated this skill.
This applies even when:
- The fix seems obvious from the logs
- You've seen this error before
- Time pressure from the team
- You're confident about the root cause
- The logs show clear stack traces
No exceptions. Run the reproduction workflow first.
Rationalization Table
Every excuse for skipping reproduction, and why it's wrong:
| Rationalization | Reality |
|---|---|
| "The logs show the exact error" | Logs show symptoms, not root cause. Reproduce. |
| "I can see the problem in the code" | You're guessing. Reproduce to confirm. |
| "This is obviously a race condition" | Maybe. Reproduce to be sure. |