hypothesis-driven-debugging
Investigate compiler failures, test errors, or unexpected behavior through systematic minimal reproduction, 3-hypothesis testing, and verification. Always re-run builds and tests after changes.
## Install
mkdir -p .claude/skills/hypothesis-driven-debugging && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7634" && unzip -o skill.zip -d .claude/skills/hypothesis-driven-debugging && rm skill.zip
Installs to .claude/skills/hypothesis-driven-debugging
# Hypothesis-Driven Debugging
A systematic, rigorous approach to debugging failures in the F# compiler codebase.
## When to Use This Skill
Use this skill when:
- Investigating test failures (unit tests, integration tests, end-to-end tests)
- Debugging build errors or compilation failures
- Analyzing unexpected runtime behavior
- Troubleshooting performance regressions
- Examining warning/error message issues
## Core Principles
- Always start with a minimal reproduction
- Form multiple competing hypotheses
- Design verification for each hypothesis
- Document findings rigorously
- Re-run builds and tests after every change
## Process
### Step 1: Create Minimal Reproduction
Before forming hypotheses, create the smallest possible reproduction:
1. Extract the failure:
   # For test failures - run just the failing test
   dotnet test -- --filter-method "*YourTest*"
   # For build failures - try to isolate the problematic file
   # Create a minimal .fs file that reproduces the issue
2. Reduce to essentials:
   - Remove unrelated code
   - Simplify to the core issue
   - Verify the minimal case still fails
3. Document the repro:
   ## Minimal Reproduction
   File: test-case.fs
   Command: dotnet test -- --filter-method "*TestName*"
   Expected: <expected behavior>
   Actual: <actual behavior>
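The reduce-and-verify loop can be automated for line-oriented cases. The sketch below is illustrative only: `still_fails` is a hypothetical stand-in for the real failure check (e.g. compiling the candidate file and grepping the compiler output for the warning), and the grep predicate exists purely so the demo runs.

```shell
#!/bin/sh
# Demo input: the record type from the FS3879 repro (4 lines).
printf 'type R = {\n    /// field doc\n    Field: int\n}\n' > test-case.fs

# `still_fails` is a hypothetical stand-in for your real check --
# e.g. compiling the candidate and grepping output for the warning.
still_fails() {
    grep -q "Field: int" "$1"    # placeholder predicate for this demo
}

cp test-case.fs repro.fs
while [ "$(wc -l < repro.fs)" -gt 1 ]; do
    sed '$d' repro.fs > candidate.fs      # drop the last line
    if still_fails candidate.fs; then
        mv candidate.fs repro.fs          # still fails: keep the smaller file
    else
        rm candidate.fs                   # that line was essential: stop
        break
    fi
done
echo "minimal repro: $(wc -l < repro.fs | tr -d ' ') lines"
```

Real reductions are rarely this tidy (deleting from the middle, balancing braces), but even a crude loop like this removes a lot of manual editing before you start forming hypotheses.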
### Step 2: Form 3 Hypotheses
Always form at least 3 competing hypotheses about the root cause:
## Hypothesis 1: [Brief description]
**Theory**: The failure occurs because...
**How to verify**: Run/change X and observe Y
**Verification result**: [To be filled]
**Implications**: If true, this means...
## Hypothesis 2: [Brief description]
**Theory**: The failure occurs because...
**How to verify**: Add instrumentation/logging at point Z
**Verification result**: [To be filled]
**Implications**: If true, this means...
## Hypothesis 3: [Brief description]
**Theory**: The failure occurs because...
**How to verify**: Check assumption A by running test B
**Verification result**: [To be filled]
**Implications**: If true, this means...
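Because every investigation starts from the same three-section skeleton, a small helper can scaffold it. The script below follows the template wording above; the file name matches the HYPOTHESIS.md convention used later in this guide.

```shell
#!/bin/sh
# Scaffold a fresh HYPOTHESIS.md with three empty hypothesis sections.
cat > HYPOTHESIS.md <<'EOF'
# Hypothesis Investigation

## Issue Summary
TODO

## Minimal Reproduction
TODO
EOF

for i in 1 2 3; do
    cat >> HYPOTHESIS.md <<EOF

## Hypothesis $i: [Brief description]
**Theory**: The failure occurs because...
**How to verify**: TODO
**Verification result**: [To be filled]
**Implications**: If true, this means...
EOF
done
echo "created HYPOTHESIS.md"
```

Having three empty sections staring at you is a useful nudge: it is much harder to stop after one pet theory when the template demands two more.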
### Step 3: Verification Methods
For each hypothesis, use one or more verification methods:
#### Code Instrumentation
// Add temporary debugging output
printfn "DEBUG: Value at checkpoint: %A" someValue
printfn "DEBUG: Entering function X with args: %A %A" arg1 arg2
#### Minimal Test Cases
// Create focused test to verify specific behavior
[<Test>]
let ``Hypothesis 1 verification test`` () =
    let result = functionUnderTest input
    result |> should equal expectedValue
#### Build with Different Flags
# Try different configurations
./build.sh -c Debug
./build.sh -c Release
# Compare outputs
diff debug-output.log release-output.log
#### Targeted Logging
# Enable verbose logging for specific component
export FSHARP_COMPILER_VERBOSE=1
dotnet build
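Whatever the verbosity switch, capture the full output to a log while filtering out just the instrumentation lines. In this sketch, `run_build` is a hypothetical stand-in for the real build command (e.g. `dotnet build` with the variable above exported); the `tee`/`grep` pipeline is the part that carries over.

```shell
#!/bin/sh
# Capture the full build output while splitting out the DEBUG lines.
# `run_build` is a hypothetical stand-in for the real build command.
run_build() {
    echo "Restoring packages..."
    echo "DEBUG: Token LBRACE at line 1"
    echo "DEBUG: Token IDENT at line 2"
}

# build.log keeps everything; debug-only.log keeps the instrumentation.
run_build 2>&1 | tee build.log | grep "^DEBUG:" > debug-only.log
echo "captured $(grep -c '^DEBUG:' build.log) debug lines"
```

Keeping the full log around matters: when a hypothesis is denied, the unfiltered output is often what suggests the next one.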
### Step 4: Document Findings
Maintain a HYPOTHESIS.md file in the working directory:
# Hypothesis Investigation
## Issue Summary
Brief description of the failure/bug being investigated.
## Minimal Reproduction
[Code/commands to reproduce]
## Hypotheses
### Hypothesis 1: Token position tracking issue
**Theory**: The warning check compares line numbers but lastNonCommentTokenLine is not being updated correctly.
**How to verify**: Add printfn debugging in LexFilter.fs to log every token and its line number.
**Verification result**: ✅ CONFIRMED - Logging showed LBRACE tokens were updating the tracking when they shouldn't.
**Implications**: Need to exclude LBRACE and potentially other structural tokens from tracking.
### Hypothesis 2: Lexer pattern matching order
**Theory**: The /// pattern might be matched after other patterns, losing context.
**How to verify**: Check lex.fsl pattern order and add logging in the /// rule.
**Verification result**: ❌ DENIED - Pattern order is correct; /// is matched specifically.
**Implications**: Issue is not in the lexer pattern matching.
### Hypothesis 3: Test expectations wrong
**Theory**: The test expectations might not match actual compiler behavior.
**How to verify**: Manually compile test code and check actual warning positions.
**Verification result**: ⚠️ PARTIAL - Some tests had wrong expectations, but underlying issue still exists.
**Implications**: Fixed test expectations, but still need to address token tracking.
## Resolution
[Final solution and verification]
## Lessons Learned
- What worked well
- What to do differently next time
- Patterns to remember
### Step 5: Critical - Always Re-run Tests
**ABSOLUTELY REQUIRED**: After implementing any fix:
1. Build from scratch:
   ./build.sh -c Release
   # Record: Time, exit code, number of errors
2. Run affected tests:
   # For targeted testing
   dotnet test -- --filter-class "*AffectedTestSuite*"
   # Record: Passed, Failed, Skipped, Time
3. Verify the fix:
   - Run the minimal reproduction - confirm it passes
   - Run related tests - confirm no regressions
   - Build the full project - confirm no new errors
4. Document results:
   ## Verification Results
   Build:
   - Command: ./build.sh -c Release
   - Time: 4m 23s
   - Errors: 0
   Tests:
   - Command: dotnet test -- --filter-class "*XmlDocTests*"
   - Total: 61
   - Passed: 56
   - Failed: 0
   - Skipped: 5
   - Time: 2.1s
   Minimal Repro:
   - Status: ✅ PASSING
## Example Workflow
# 1. Observe failure
dotnet test -- --filter-class "*XmlDocTests*"
# Result: 15 tests failing
# 2. Create minimal repro
cat > test-case.fs <<EOF
type R = { /// field doc
Field: int
}
EOF
dotnet fsc test-case.fs
# Observe: Warning FS3879 incorrectly triggered
# 3. Form hypotheses (in HYPOTHESIS.md)
# - H1: LBRACE token incorrectly tracked
# - H2: Lexer pattern issue
# - H3: Test expectations wrong
# 4. Verify H1
# Add: printfn "DEBUG: Token %A at line %d" token lineNum
./build.sh -c Release && dotnet test ...
# Result: Confirms LBRACE is being tracked
# 5. Implement fix
# Exclude LBRACE from tracking in LexFilter.fs
# 6. CRITICAL: Re-run everything
./build.sh -c Release
# 4m 44.9s, 0 errors
dotnet test -- --filter-class "*XmlDocTests*"
# 61 total, 56 passed, 0 failed, 5 skipped, 2s
# 7. Verify minimal repro
dotnet fsc test-case.fs
# No warning - ✅ FIXED
# 8. Update HYPOTHESIS.md with results
# 9. Commit with evidence
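Step 9's "commit with evidence" can carry the verification numbers in the commit message itself. This sketch creates a throwaway repo so it is self-contained; the message contents mirror the results recorded above.

```shell
#!/bin/sh
# Sketch: put the verification evidence in the commit message itself.
# Uses a throwaway repo so the demo is self-contained.
cd "$(mktemp -d)"
git init -q .
git config user.email dev@example.com
git config user.name Dev

echo fix > LexFilter.fs.patch          # stand-in for the real change
git add .
git commit -q -m "Fix FS3879 false positive in LexFilter" \
    -m "Build: ./build.sh -c Release - 0 errors" \
    -m "Tests: XmlDocTests 61 total, 56 passed, 0 failed, 5 skipped"

git log -1 --format=%B                 # show the full commit message
```

Each `-m` becomes its own paragraph in the message, so the evidence stays readable in `git log` without any extra tooling.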
## Anti-Patterns to Avoid
❌ Don't:
- Skip the minimal reproduction
- Form only one hypothesis
- Make changes without verification
- Forget to re-run tests after fixes
- Claim "fixed" without build evidence
✅ Do:
- Start with smallest possible repro
- Consider multiple explanations
- Verify each hypothesis systematically
- Always re-run build and tests
- Document commands, timings, and results
## Integration with Development Workflow
After using this skill:
- Clean up temporary debugging code
- Remove or archive HYPOTHESIS.md
- Update documentation with lessons learned
- Add regression tests if appropriate
- Consider whether findings reveal deeper issues
## References
- Software Debugging Techniques
- Scientific Method Applied to Software
- F# Compiler build guide: docs/DEVGUIDE.md
- F# Compiler testing guide: docs/testing.md