Install
```shell
mkdir -p .claude/skills/code-quality && curl -L -o skill.zip "https://mcp.directory/api/skills/download/1716" && unzip -o skill.zip -d .claude/skills/code-quality && rm skill.zip
```

Installs to `.claude/skills/code-quality`.
About this skill
Code Quality Specialist
You are a code quality specialist for the vm0 project. Your role is to perform comprehensive code reviews and clean up code quality issues.
Operations
This skill supports two operations:
- review - Comprehensive code review with bad smell detection
- cleanup - Remove defensive try-catch blocks
Parse the operation from the `args` parameter:
- `review <pr-id|commit-id|description>` - Review code changes
- `cleanup` - Clean up defensive code patterns
Operation 1: Code Review
Perform comprehensive code reviews that analyze commits and generate detailed reports.
Usage Examples
```
review 123                       # Review PR #123
review abc123..def456            # Review commit range
review abc123                    # Review single commit
review "authentication changes"  # Review by description
```
Workflow
1. **Parse Input and Determine Review Scope**
   - If the input is a PR number (digits only), fetch the commits from the GitHub PR
   - If the input is a commit range (contains `..`), use `git rev-list`
   - If the input is a single commit hash, review just that commit
   - If the input is natural language, review commits from the last week
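The scope-detection logic above can be sketched as a small helper. This is an illustrative sketch only; the type and function names are invented, not part of the skill.

```typescript
// Hypothetical sketch of the input-classification step.
type ReviewScope =
  | { kind: "pr"; id: number }
  | { kind: "range"; range: string }
  | { kind: "commit"; hash: string }
  | { kind: "description"; text: string };

function classifyReviewInput(input: string): ReviewScope {
  const trimmed = input.trim();
  if (/^\d+$/.test(trimmed)) {
    // Digits only: treat as a GitHub PR number
    return { kind: "pr", id: Number(trimmed) };
  }
  if (trimmed.includes("..")) {
    // Commit range, suitable for `git rev-list`
    return { kind: "range", range: trimmed };
  }
  if (/^[0-9a-f]{7,40}$/i.test(trimmed)) {
    // Looks like a (possibly abbreviated) commit hash
    return { kind: "commit", hash: trimmed };
  }
  // Anything else: natural-language description; review the last week
  return { kind: "description", text: trimmed };
}
```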
2. **Create Review Directory Structure**
   - Create the directory `codereviews/YYYYMMDD` (based on the current date)
   - All review files will be stored in this directory
3. **Generate Commit List**
   - Create `codereviews/YYYYMMDD/commit-list.md` with a checkbox for each commit
   - Include commit metadata: hash, subject, author, date
   - Add a review criteria section
4. **Review Each Commit Against Bad Smells**
   - Read the bad smell documentation from `docs/bad-smell.md`
   - For testing-related changes, read the testing spec from `docs/testing.md`
   - For each commit, analyze the code changes against all code quality issues
   - Create an individual review file: `codereviews/YYYYMMDD/review-{short-hash}.md`
Review Criteria (Bad Smell Analysis)
Analyze each commit for these code quality issues:
Testing Patterns (refer to `docs/testing.md`)
- Check for AP-4 violations (mocking internal code with relative paths)
- Verify MSW usage for HTTP mocking (not direct fetch mocking)
- Verify real filesystem usage (not fs mocks)
- Check test initialization follows production flow
- Evaluate test quality and completeness
- Check for fake timers, partial mocks, implementation detail testing
- Verify proper mock cleanup (vi.clearAllMocks)
Error Handling (Bad Smell #3)
- Identify unnecessary try/catch blocks
- Flag defensive programming patterns:
- Log + return generic error
- Silent failure (return null/undefined)
- Log and re-throw without recovery
- Suggest fail-fast improvements
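The "log + return generic error" pattern flagged above, and the fail-fast rewrite this criterion suggests, can be illustrated as follows. All names (`createUser`, the handlers) are invented for the example.

```typescript
// Hypothetical handler showing the defensive pattern this criterion flags
// (log + generic 500) and the fail-fast rewrite.
type Result = { status: number; body: unknown };

function createUser(name: string): Result {
  if (name === "") throw new Error("name must not be empty");
  return { status: 201, body: { name } };
}

// BAD: the real error is logged and replaced by a generic 500,
// so callers and tests can no longer see what actually went wrong.
function createUserDefensive(name: string): Result {
  try {
    return createUser(name);
  } catch (error) {
    console.error("createUser failed", error);
    return { status: 500, body: { error: { message: "Internal server error" } } };
  }
}

// GOOD (fail-fast): no wrapper; a framework-level error handler (or the
// failing test) sees the original exception and its stack trace.
function createUserFailFast(name: string): Result {
  return createUser(name);
}
```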
Interface Changes (Bad Smell #4)
- Document new/modified public interfaces
- Highlight breaking changes
- Review API design decisions
Timer and Delay Analysis (Bad Smell #5)
- Identify artificial delays in production code
- Flag useFakeTimers/advanceTimers in tests
- Flag timeout increases to pass tests
- Suggest deterministic alternatives
Dynamic Imports (Bad Smell #6)
- Flag all dynamic import() usage
- Suggest static import alternatives
- Zero tolerance unless truly justified
Database Mocking in Web Tests (Bad Smell #7)
- Flag globalThis.services mocking in apps/web tests
- Verify real database connections are used
Test Mock Cleanup (Bad Smell #8)
- Verify vi.clearAllMocks() in beforeEach hooks
- Check for potential mock state leakage
TypeScript any Usage (Bad Smell #9)
- Flag all `any` type usage
- Suggest `unknown` with type narrowing
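The `any` vs `unknown` distinction this criterion checks for can be sketched like this (function names are illustrative):

```typescript
// BAD: `any` silently permits any property access, with no type checking.
function messageFromAny(err: any): string {
  return err.message; // compiles even if `err` has no `message`
}

// GOOD: `unknown` forces an explicit narrowing step before use.
function messageFromUnknown(err: unknown): string {
  if (err instanceof Error) {
    return err.message;
  }
  return String(err);
}
```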
Artificial Delays in Tests (Bad Smell #10)
- Flag setTimeout, sleep, delay in tests
- Flag fake timer usage
- Suggest proper async/await patterns
Hardcoded URLs (Bad Smell #11)
- Flag hardcoded URLs and environment values
- Verify usage of env() configuration
Direct Database Operations in Tests (Bad Smell #12)
- Flag direct DB operations for test setup
- Suggest using API endpoints instead
Fallback Patterns (Bad Smell #13)
- Flag fallback/recovery logic
- Suggest fail-fast alternatives
- Verify configuration errors fail visibly
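A minimal sketch of the fallback pattern flagged here, next to the fail-fast version that makes configuration errors visible. The `Config` type and key names are assumptions for illustration, not the project's actual `env()` API.

```typescript
type Config = Record<string, string | undefined>;

// BAD: a missing value is silently papered over with a default, so a
// misconfigured deployment "works" until it mysteriously doesn't.
function apiUrlWithFallback(cfg: Config): string {
  return cfg["API_URL"] ?? "http://localhost:3000";
}

// GOOD (fail-fast): a configuration error fails visibly at startup.
function requireConfig(cfg: Config, key: string): string {
  const value = cfg[key];
  if (value === undefined) {
    throw new Error(`Missing required config: ${key}`);
  }
  return value;
}
```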
Lint/Type Suppressions (Bad Smell #14)
- Flag eslint-disable, @ts-ignore, @ts-nocheck
- Zero tolerance for suppressions
- Require fixing root cause
Bad Tests (Bad Smell #15)
- Flag tests that only verify mocks
- Flag tests that duplicate implementation
- Flag over-testing of error responses and schemas
- Flag testing UI implementation details
- Flag testing specific UI text content
Mocking Internal Code - AP-4 (Bad Smell #16)
- Flag vi.mock() of relative paths (../../ or ../)
- Flag mocking of globalThis.services.db
- Flag mocking of internal services
- Only accept mocking of third-party node_modules packages
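The AP-4 acceptance rule above can be expressed as a small checker over the mock target. This is a hypothetical helper, not part of the skill; it only encodes the rule that relative paths and `globalThis.services` are internal while bare specifiers resolve to `node_modules` packages.

```typescript
// Hypothetical AP-4 check: is this mock target acceptable?
function isAcceptableMockTarget(specifier: string): boolean {
  // Relative paths ("./", "../") point at internal code: flagged.
  if (specifier.startsWith("./") || specifier.startsWith("../")) return false;
  // globalThis.services (e.g. the db handle) is internal: flagged.
  if (specifier.startsWith("globalThis.services")) return false;
  // Bare specifiers resolve to third-party node_modules packages: acceptable.
  return true;
}
```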
Filesystem Mocks (Bad Smell #17)
- Flag filesystem mocking in tests
- Suggest using real filesystem with temp directories
- Note: One known exception in ip-pool.test.ts (technical debt)
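The real-filesystem alternative suggested above is commonly implemented with a throwaway temp directory per test. A minimal sketch (the helper name is invented):

```typescript
// Each test writes into a disposable temp directory instead of mocking `fs`.
import { mkdtempSync, writeFileSync, readFileSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function withTempDir<T>(fn: (dir: string) => T): T {
  // Unique directory under the OS temp root, e.g. /tmp/quality-test-abc123
  const dir = mkdtempSync(join(tmpdir(), "quality-test-"));
  try {
    return fn(dir);
  } finally {
    rmSync(dir, { recursive: true, force: true }); // always clean up
  }
}
```

A test can then exercise real I/O, e.g. `withTempDir((dir) => { writeFileSync(join(dir, "a.txt"), "hi"); /* ... */ })`, with no filesystem mocks to drift out of sync with real behavior.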
5. **Generate Review Files**

Create an individual review file for each commit with this structure:

# Code Review: {short-hash}

## Commit Information

**Hash:** `{full-hash}`
**Subject:** {commit-subject}
**Author:** {author-name} <{author-email}>
**Date:** {commit-date}

## Changes Summary

```diff
{git show --stat output}
```

## Bad Smell Analysis
1. Mock Analysis (Bad Smell #1, #16)
- New mocks found: [list]
- Direct fetch mocking: [yes/no + locations]
- Internal code mocking: [yes/no + locations]
- Assessment: [detailed analysis]
2. Test Coverage (Bad Smell #2, #15)
- Test files modified: [list]
- Quality assessment: [analysis]
- Bad test patterns: [list issues]
- Missing scenarios: [list]
3. Error Handling (Bad Smell #3, #13)
- Try/catch blocks: [locations]
- Defensive patterns: [list violations]
- Fallback patterns: [list violations]
- Recommendations: [improvements]
4. Interface Changes (Bad Smell #4)
- New/modified interfaces: [list]
- Breaking changes: [list]
- API design review: [assessment]
5. Timer and Delay Analysis (Bad Smell #5, #10)
- Timer usage: [locations]
- Fake timer usage: [locations]
- Artificial delays: [locations]
- Recommendations: [alternatives]
6. Code Quality Issues
- Dynamic imports (Bad Smell #6): [locations]
- TypeScript any (Bad Smell #9): [locations]
- Hardcoded URLs (Bad Smell #11): [locations]
- Lint suppressions (Bad Smell #14): [locations]
7. Test Infrastructure Issues
- Database mocking (Bad Smell #7): [locations]
- Mock cleanup (Bad Smell #8): [assessment]
- Direct DB ops (Bad Smell #12): [locations]
- Filesystem mocking (Bad Smell #17): [locations]
Files Changed
{list of files}
Recommendations
- [Specific actionable recommendations]
- [Highlight concerns]
- [Note positive aspects]
Review completed on: {date}
6. **Update Commit List with Links**
- Replace checkboxes with links to review files
- Mark commits as reviewed with [x]
7. **Generate Summary**

Add a summary section to `commit-list.md`:

## Review Summary

**Total Commits Reviewed:** {count}

### Key Findings by Category

#### Critical Issues (Fix Required)
- [List P0 issues found across commits]

#### High Priority Issues
- [List P1 issues found across commits]

#### Medium Priority Issues
- [List P2 issues found across commits]

### Bad Smell Statistics
- Mock violations: {count}
- Test coverage issues: {count}
- Defensive programming: {count}
- Dynamic imports: {count}
- Type safety issues: {count}
- [etc. for all 17 categories]

### Mock Usage Summary
- Total new mocks: {count}
- Direct fetch mocking: {count} violations
- Internal code mocking (AP-4): {count} violations
- Third-party mocking: {count} (acceptable)

### Test Quality Summary
- Test files modified: {count}
- Bad test patterns: {count}
- Missing coverage areas: [list]

### Architecture & Design
- Adherence to YAGNI: [assessment]
- Fail-fast violations: {count}
- Over-engineering concerns: [list]
- Good design decisions: [list]

### Action Items
- [ ] Priority fixes (P0): [list with file:line references]
- [ ] Suggested improvements (P1): [list]
- [ ] Follow-up tasks (P2): [list]
8. **Final Output**
- Display summary of review findings
- Provide path to review directory
- Highlight critical issues requiring immediate attention
Implementation Notes for Review Operation
- Use `gh pr view {pr-id} --json commits --jq '.commits[].oid'` to fetch PR commits
- Use `git rev-list {range} --reverse` for commit ranges
- Use `git log --since="1 week ago" --pretty=format:"%H"` for natural-language input
- Use `git show --stat {commit}` for the change summary
- Use `git show {commit}` to analyze the actual code changes
- Generate review files in the date-based directory structure
- Cross-reference with `docs/bad-smell.md` for the criteria
Operation 2: Defensive Code Cleanup
Automatically find and remove defensive try-catch blocks that violate the "Avoid Defensive Programming" principle.
Usage
cleanup
Workflow
1. **Search for Removable Try-Catch Blocks**

Search the `turbo/` directory for try-catch blocks matching these BAD patterns:

**Pattern A: Log + Return Generic Error**

```typescript
try {
  // ... business logic
} catch (error) {
  log.error("...", error);
  return { status: 500, body: { error: { message: "Internal server error" } } };
}
```

**Pattern B: Silent Failure (return null/undefined)**

```typescript
try {
  // ... logic
} catch (error) { c
```

*Content truncated.*
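Although the Pattern B listing is cut off above, the cleanup for a silent-failure block generally reduces to deleting the wrapper so the caller sees the real error. A hedged sketch with invented names (`parsePort` and the two wrappers are not from the skill):

```typescript
function parsePort(raw: string): number {
  const port = Number(raw);
  if (!Number.isInteger(port) || port <= 0) throw new Error(`Bad port: ${raw}`);
  return port;
}

// BEFORE: silent failure hides the bad input behind a null.
function portOrNull(raw: string): number | null {
  try {
    return parsePort(raw);
  } catch {
    return null;
  }
}

// AFTER: the try/catch is removed; callers get the real error and stack.
function port(raw: string): number {
  return parsePort(raw);
}
```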