triage-ci-flake


Use when CI tests fail on main branch after PR merge, or when investigating flaky test failures in CI environments

Install

mkdir -p .claude/skills/triage-ci-flake && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6235" && unzip -o skill.zip -d .claude/skills/triage-ci-flake && rm skill.zip

Installs to .claude/skills/triage-ci-flake

About this skill

Triage CI Failure

Overview

Systematic workflow for triaging and fixing test failures in CI, especially flaky tests that pass locally but fail in CI. Failures that surface on main after a merge are usually flaky, caused by timing, bundling, or environment differences.

CRITICAL RULE: You MUST run the reproduction workflow before proposing any fixes. No exceptions.

When to Use

  • CI test fails on main branch after PR was merged
  • Test passes locally but fails in CI
  • Test failure labeled as "flaky" or intermittent
  • E2E or integration test timing out in CI only

MANDATORY First Steps

YOU MUST EXECUTE THESE COMMANDS. Reading code or analyzing logs does NOT count as reproduction.

  1. Extract suite name, test name, and error from CI logs
  2. EXECUTE: Kill any process listening on port 3000 to avoid conflicts
  3. EXECUTE: pnpm dev $SUITE_NAME (use run_in_background=true)
  4. EXECUTE: Wait for server to be ready (check with curl or sleep)
  5. EXECUTE: Run the specific failing test with Playwright directly (npx playwright test test/TEST_SUITE_NAME/e2e.spec.ts:31:3 --headed -g "TEST_DESCRIPTION_TARGET_GOES_HERE")
  6. If test passes, EXECUTE: pnpm prepare-run-test-against-prod
  7. EXECUTE: pnpm dev:prod $SUITE_NAME and run test again

Only after EXECUTING these commands and seeing their output can you proceed to analysis and fixes.

"Analysis from logs" is NOT reproduction. You must RUN the commands.
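The mandatory steps above can be sketched as a handful of shell helpers. This is a sketch under the assumption that the repo's pnpm scripts behave as described in this skill; the function names themselves are hypothetical, not part of the repo:

```shell
#!/usr/bin/env bash
# Sketch of the mandatory reproduction steps. Function names are hypothetical.

kill_port_3000() {
  # Step 2: free port 3000 before starting any server.
  lsof -ti:3000 | xargs kill -9 2>/dev/null || echo "Port 3000 clear"
}

wait_for_server() {
  # Step 4: block until the admin route answers.
  until curl -s http://localhost:3000/admin > /dev/null 2>&1; do sleep 1; done
  echo "Server ready"
}

reproduce_dev() {
  # Steps 3-5: dev server plus the single failing test.
  local suite=$1 test_name=$2
  kill_port_3000
  pnpm dev "$suite" &   # backgrounded; agents should use run_in_background=true
  wait_for_server
  pnpm exec playwright test "test/$suite/e2e.spec.ts" -g "$test_name"
}

reproduce_prod() {
  # Steps 6-7: packed/bundled build, then the same test against it.
  local suite=$1 test_name=$2
  kill_port_3000
  pnpm prepare-run-test-against-prod
  pnpm dev:prod "$suite" &
  wait_for_server
  pnpm exec playwright test "test/$suite/e2e.spec.ts" -g "$test_name"
}
```

Run `reproduce_dev` first; only if it passes is `reproduce_prod` worth the (slow) pack step.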

Core Workflow

digraph triage_ci {
    "CI failure reported" [shape=box];
    "Extract details from CI logs" [shape=box];
    "Identify suite and test name" [shape=box];
    "Run dev server: pnpm dev $SUITE" [shape=box];
    "Run specific test by name" [shape=box];
    "Did test fail?" [shape=diamond];
    "Debug with dev code" [shape=box];
    "Run prepare-run-test-against-prod" [shape=box];
    "Run: pnpm dev:prod $SUITE" [shape=box];
    "Run specific test again" [shape=box];
    "Did test fail now?" [shape=diamond];
    "Debug bundling issue" [shape=box];
    "Unable to reproduce - check logs" [shape=box];
    "Fix and verify" [shape=box];

    "CI failure reported" -> "Extract details from CI logs";
    "Extract details from CI logs" -> "Identify suite and test name";
    "Identify suite and test name" -> "Run dev server: pnpm dev $SUITE";
    "Run dev server: pnpm dev $SUITE" -> "Run specific test by name";
    "Run specific test by name" -> "Did test fail?";
    "Did test fail?" -> "Debug with dev code" [label="yes"];
    "Did test fail?" -> "Run prepare-run-test-against-prod" [label="no"];
    "Run prepare-run-test-against-prod" -> "Run: pnpm dev:prod $SUITE";
    "Run: pnpm dev:prod $SUITE" -> "Run specific test again";
    "Run specific test again" -> "Did test fail now?";
    "Did test fail now?" -> "Debug bundling issue" [label="yes"];
    "Did test fail now?" -> "Unable to reproduce - check logs" [label="no"];
    "Debug with dev code" -> "Fix and verify";
    "Debug bundling issue" -> "Fix and verify";
}

Step-by-Step Process

1. Extract CI Details

From CI logs or GitHub Actions URL, identify:

  • Suite name: Directory name (e.g., i18n, fields, lexical)
  • Test file: Full path (e.g., test/i18n/e2e.spec.ts)
  • Test name: Exact test description
  • Error message: Full stack trace
  • Test type: E2E (Playwright) or integration (Vitest)
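If the failure came from GitHub Actions, the details above can be pulled from the terminal. A sketch assuming the `gh` CLI is installed and authenticated (the helper names are hypothetical; the run ID comes from the failure URL):

```shell
# Print only the failed step's log for a given run ID.
show_failed_log() {
  gh run view "$1" --log-failed
}

# Find candidate run IDs among recent failures on main.
list_recent_failures() {
  gh run list --branch main --status failure --limit 5
}
```

`show_failed_log` typically surfaces the suite path, test description, and stack trace in one place, which covers the five items above.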

2. Reproduce with Dev Code

CRITICAL: Always run the specific test by name, not the full suite.

SERVER MANAGEMENT RULES:

  1. ALWAYS kill all servers before starting a new one
  2. NEVER assume ports are free
  3. ALWAYS wait for server ready confirmation before running tests
# ========================================
# STEP 2A: STOP ALL SERVERS
# ========================================
lsof -ti:3000 | xargs kill -9 2>/dev/null || echo "Port 3000 clear"

# ========================================
# STEP 2B: START DEV SERVER
# ========================================
# Start dev server with the suite (in background with run_in_background=true)
pnpm dev $SUITE_NAME

# ========================================
# STEP 2C: WAIT FOR SERVER READY
# ========================================
# Wait for server to be ready (REQUIRED - do not skip)
until curl -s http://localhost:3000/admin > /dev/null 2>&1; do sleep 1; done && echo "Server ready"

# ========================================
# STEP 2D: RUN SPECIFIC TEST
# ========================================
# Run ONLY the specific failing test using Playwright directly
# For E2E tests (DO NOT use pnpm test:e2e as it spawns its own server):
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name"

# For integration tests:
pnpm test:int $SUITE_NAME -t "exact test name"

Did the test fail?

  • YES: You reproduced it! Proceed to debug with dev code.
  • NO: Continue to step 3 (bundled code test).

3. Reproduce with Bundled Code

If test passed with dev code, the issue is likely in bundled/production code.

IMPORTANT: You MUST stop the dev server before starting prod server.

# ========================================
# STEP 3A: STOP ALL SERVERS (INCLUDING DEV SERVER FROM STEP 2)
# ========================================
lsof -ti:3000 | xargs kill -9 2>/dev/null || echo "Port 3000 clear"

# ========================================
# STEP 3B: BUILD AND PACK FOR PROD
# ========================================
# Build all packages and pack them (this takes time - be patient)
pnpm prepare-run-test-against-prod

# ========================================
# STEP 3C: START PROD SERVER
# ========================================
# Start prod dev server (in background with run_in_background=true)
pnpm dev:prod $SUITE_NAME

# ========================================
# STEP 3D: WAIT FOR SERVER READY
# ========================================
# Wait for server to be ready (REQUIRED - do not skip)
until curl -s http://localhost:3000/admin > /dev/null 2>&1; do sleep 1; done && echo "Server ready"

# ========================================
# STEP 3E: RUN SPECIFIC TEST
# ========================================
# Run the specific test again using Playwright directly
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name"
# OR for integration tests:
pnpm test:int $SUITE_NAME -t "exact test name"

Did the test fail now?

  • YES: Bundling or production build issue. Look for:
    • Missing exports in package.json
    • Build configuration problems
    • Code that behaves differently when bundled
  • NO: Unable to reproduce locally. Proceed to step 4.
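For the "missing exports" case above, one quick probe is to ask Node whether the subpath resolves from the installed (packed) package. A hedged sketch; the helper name and the example package path are illustrative, not real identifiers from this repo:

```shell
# Hypothetical helper: report whether Node can resolve a package subpath.
check_export() {
  node -e "console.log(require.resolve('$1'))" 2>/dev/null \
    && echo "resolvable: $1" \
    || echo "NOT resolvable: $1 (check the exports map in package.json)"
}

# Usage (package/subpath purely illustrative):
# check_export "@payloadcms/ui/elements/Button"
```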

4. Unable to Reproduce

If you cannot reproduce locally after both attempts:

  • Review CI logs more carefully for environment differences
  • Check for race conditions by running the test repeatedly with the direct Playwright command (see the loop under Verification)
  • Look for CI-specific constraints (memory, CPU, timing)
  • Consider if it's a true race condition that's highly timing-dependent
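A repeat-run check can be packaged as a small helper that counts failures instead of stopping at the first one. A sketch; `flake_check` is a hypothetical name, and the usage line assumes the suite/test names extracted earlier:

```shell
# flake_check: run a command N times and report how many runs failed.
# Usage: flake_check <runs> <command...>
flake_check() {
  local runs=$1; shift
  local i fails=0
  for ((i = 1; i <= runs; i++)); do
    "$@" > /dev/null 2>&1 || fails=$((fails + 1))
  done
  echo "$fails/$runs runs failed"
  [ "$fails" -eq 0 ]
}

# Example (suite and test name are placeholders):
# flake_check 10 pnpm exec playwright test "test/$SUITE_NAME/e2e.spec.ts" -g "exact test name"
```

A partial failure count (e.g. 2/10) is strong evidence of a race rather than a deterministic bug.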

Common Flaky Test Patterns

Race Conditions

  • Page navigating while assertions run
  • Network requests not settled before assertions
  • State updates not completed

Fix patterns:

  • Use Playwright's web-first assertions (toBeVisible(), toHaveText())
  • Wait for specific conditions, not arbitrary timeouts
  • Use waitForFunction() with condition checks

Test Pollution

  • Tests leaving data in database
  • Shared state between tests
  • Missing cleanup in afterEach

Fix patterns:

  • Track created IDs and clean up in afterEach
  • Use isolated test data
  • Don't use broad deleteAll-style cleanup that wipes data other tests depend on

Timing Issues

  • setTimeout/sleep instead of condition-based waiting
  • Not waiting for page stability
  • Animations/transitions not complete

Fix patterns:

  • Use waitForPageStability() helper
  • Wait for specific DOM states
  • Use Playwright's built-in waiting mechanisms

Linting Considerations

When fixing e2e tests, be aware of these eslint rules:

  • playwright/no-networkidle - Avoid waitForLoadState('networkidle') (use condition-based waiting instead)
  • payload/no-wait-function - Avoid custom wait() functions (use Playwright's built-in waits)
  • payload/no-flaky-assertions - Avoid non-retryable assertions
  • playwright/prefer-web-first-assertions - Use built-in Playwright assertions

Existing code may violate these rules - when adding new code, follow the rules even if existing code doesn't.
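One way to hold new code to these rules is to lint only the spec file you touched. A sketch assuming eslint is configured at the repo root (the helper name is hypothetical):

```shell
# Hypothetical helper: lint one suite's e2e spec against the rules above.
lint_spec() {
  pnpm exec eslint "test/$1/e2e.spec.ts"
}

# Usage: lint_spec "$SUITE_NAME"
```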

Verification

After fixing:

# Ensure dev server is running on port 3000
# Run test multiple times to confirm stability
for i in {1..10}; do
  pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts -g "exact test name" || break
done

# Run full suite
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts

# If you modified bundled code, test with prod build
lsof -ti:3000 | xargs kill -9 2>/dev/null
pnpm prepare-run-test-against-prod
pnpm dev:prod $SUITE_NAME
until curl -s http://localhost:3000/admin > /dev/null; do sleep 1; done
pnpm exec playwright test test/$SUITE_NAME/e2e.spec.ts

The Iron Law

NO FIX WITHOUT REPRODUCTION FIRST

If you propose a fix before completing steps 1-3 of the workflow, you've violated this skill.

This applies even when:

  • The fix seems obvious from the logs
  • You've seen this error before
  • There's time pressure from the team
  • You're confident about the root cause
  • The logs show clear stack traces

No exceptions. Run the reproduction workflow first.

Rationalization Table

Every excuse for skipping reproduction, and why it's wrong:

  • "The logs show the exact error" → Logs show symptoms, not root cause. Reproduce.
  • "I can see the problem in the code" → You're guessing. Reproduce to confirm.
  • "This is obviously a race condition" → Maybe. Reproduce to be sure.
  • "I've seen

Content truncated.
