wcag22-a11y-audit


WCAG 2.2 Accessibility Audit skill that systematically evaluates web pages against 8 core Success Criteria (1.1.1, 1.4.3, 1.4.11, 2.1.1, 2.1.2, 2.4.3, 2.4.7, 4.1.2) using accessibility tree inspection and visual analysis. Use this skill when you need to perform accessibility testing/auditing on a live webpage.

Install

mkdir -p .claude/skills/wcag22-a11y-audit && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8379" && unzip -o skill.zip -d .claude/skills/wcag22-a11y-audit && rm skill.zip

Installs to .claude/skills/wcag22-a11y-audit

About this skill

WCAG 2.2 Accessibility Audit Skill

When to Use This Skill

Use this skill when the user wants to:

  • Perform accessibility testing or auditing on a live webpage
  • Evaluate a page against WCAG 2.2 Success Criteria
  • Identify accessibility barriers for keyboard and screen reader users
  • Generate a structured accessibility audit report with evidence

Tool Usage Strategy (IMPORTANT)

This skill uses a hybrid approach combining structured accessibility tree analysis and visual inspection.

Priority Order

  1. PRIMARY: Accessibility Tree (search_elements)

    • Use search_elements(tabId, query) to retrieve structured accessibility information
    • This provides role, name, value, checked, expanded, disabled, focused, etc.
    • Best for: 1.1.1, 2.1.1, 4.1.2 and element identification for other SCs
  2. SECONDARY: Visual Analysis (Screenshot + LLM)

    • Use capture_screenshot(sendToLLM=true) for visual inspection
    • Essential for: 1.4.3, 1.4.11, 2.4.7 (contrast and focus visibility)
    • Insert [[screenshot:N]] placeholders in the report for evidence
  3. KEYBOARD INTERACTION: computer tool

    • Use computer(action='key', text='Tab') for keyboard navigation testing
    • Essential for: 2.1.1, 2.1.2, 2.4.3, 2.4.7
    • Capture screenshots at key moments to document focus path

Workflow for Each Test

1. Identify scope (page/component/flow)
2. Collect evidence via search_elements and/or screenshot
3. Apply SC-specific judgment rules
4. Record Pass/Fail with evidence references
5. Provide actionable fix recommendations
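
The five workflow steps above can be captured in a small record type so that Pass/Fail results, evidence placeholders, and fix recommendations stay attached to each check. This is an illustrative sketch only (the `Finding` class and its fields are not part of the skill's API):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One audit finding for a single Success Criterion check."""
    sc: str            # e.g. "1.4.3"
    element: str       # short description of the element tested
    result: str        # "Pass" or "Fail"
    evidence: list = field(default_factory=list)   # e.g. ["[[screenshot:2]]"]
    recommendation: str = ""

    def to_report_line(self) -> str:
        refs = " ".join(self.evidence)
        return f"[{self.sc}] {self.result}: {self.element} {refs}".strip()

f = Finding(sc="1.1.1", element="header search icon button", result="Fail",
            evidence=["[[screenshot:1]]"],
            recommendation="Add aria-label='Search' to the button.")
print(f.to_report_line())
# → [1.1.1] Fail: header search icon button [[screenshot:1]]
```

Collecting one `Finding` per check makes it easy to emit the report section by section while keeping each `[[screenshot:N]]` reference next to the result it supports.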

Threat Context & Privacy Notes

IMPORTANT: This audit may capture sensitive information.

  • Screenshots and accessibility tree dumps may contain: account names, order details, personal data, auth tokens, internal URLs.
  • DO NOT include raw sensitive data in the final report.
  • If a screenshot contains PII, note this and recommend masking before sharing.
  • Contrast/visual judgments are estimates—always recommend verification with dedicated tools (e.g., WebAIM Contrast Checker, axe DevTools).

Pre-Audit Setup (Recommended)

Before starting the audit, confirm with the user:

  1. Target Scope: Full page, specific component, or user flow?
  2. Target Audience: Who are the primary users? (general public, internal staff, specific disability considerations)
  3. Priority SCs: Test all 8 SCs or focus on specific ones?
  4. Known Issues: Any existing accessibility issues to verify?

Success Criteria Test Procedures

SC 1.1.1 Non-text Content (Level A)

Goal: All non-text content (images, icons, controls) must have text alternatives.

Test Steps:

  1. Find images and icons:

    search_elements(tabId, "image | img")
    search_elements(tabId, "button | link | menuitem | tab | switch")
    
  2. Evaluate each element:

    • Check if name attribute exists and is meaningful
    • FAIL conditions:
      • name is empty or missing
      • name is generic: "icon", "image", "button", "img", "graphic", file names (e.g., "logo.png")
      • name duplicates visible text unnecessarily (redundant)
    • PASS conditions:
      • name accurately describes purpose or equivalent information
  3. For decorative content:

    • If truly decorative, element should have role="presentation" or role="none", or name="" (explicitly empty)

Common Failures:

  • Icon buttons with no accessible name (screen reader announces "button" only)
  • Images with alt="image" or alt="logo.png"
  • SVG icons without aria-label or visually hidden text

Fix Recommendations:

  • Add meaningful alt text to images
  • Use aria-label or aria-labelledby for icon buttons
  • Use native semantic elements where possible (<button> instead of <div>)

SC 1.4.3 Contrast (Minimum) (Level AA)

Goal: Text must have sufficient contrast against its background.

Requirements:

  • Normal text: ≥ 4.5:1 contrast ratio
  • Large text (≥24px regular or ≥18.66px bold): ≥ 3:1 contrast ratio
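
When exact foreground/background colors are known (e.g. sampled in DevTools), the thresholds above can be checked precisely rather than estimated. The sketch below implements the relative-luminance and contrast-ratio formulas from the WCAG 2.x definitions; the helper names are ours:

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    v = c / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

def passes_1_4_3(fg, bg, px: float, bold: bool = False) -> bool:
    """Apply the 4.5:1 / 3:1 thresholds from the requirements above."""
    large = px >= 24 or (bold and px >= 18.66)
    return contrast_ratio(fg, bg) >= (3.0 if large else 4.5)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))   # 21.0
print(passes_1_4_3((119, 119, 119), (255, 255, 255), px=16))  # False (~4.48:1)
```

Note that #777 gray on white lands just under 4.5:1, which is exactly the kind of near-miss that visual estimation tends to get wrong.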

Test Steps:

  1. Capture page states:

    capture_screenshot(sendToLLM=true)
    
    • Default state
    • Hover/focus states (use computer to trigger)
    • Error/disabled states if applicable
  2. Visual analysis prompt (for LLM):

    "Analyze this screenshot for text contrast issues. Identify any text that appears to have low contrast against its background. Focus on:

    • Small/body text that may be below 4.5:1
    • Placeholder text in input fields
    • Disabled state text
    • Text overlaid on images or gradients

    List suspicious elements with their approximate location."
  3. Record findings:

    • Note: Visual analysis provides estimates only
    • Flag elements for manual verification with contrast checker tools

Common Failures:

  • Light gray text on white backgrounds
  • Placeholder text with insufficient contrast
  • Text on image backgrounds without overlay

Fix Recommendations:

  • Increase text color darkness or background lightness
  • Add semi-transparent overlay behind text on images
  • Use contrast checker tools to verify exact ratios

SC 1.4.11 Non-text Contrast (Level AA)

Goal: UI components and graphical objects must have ≥ 3:1 contrast.

Applies to:

  • Input field borders
  • Button borders
  • Focus indicators
  • Icons conveying information
  • State indicators (checkboxes, toggles, radio buttons)

Test Steps:

  1. Identify UI components:

    search_elements(tabId, "textbox | combobox | checkbox | radio | switch | button | slider")
    
  2. Capture states:

    capture_screenshot(sendToLLM=true)
    
    • Document default, hover, focus, active, disabled states
  3. Visual analysis prompt:

    "Analyze this screenshot for non-text contrast issues. Check if:

    • Input field borders are clearly visible (≥3:1 against background)
    • Button boundaries are distinguishable
    • Icons are clearly visible
    • Focus indicators have sufficient contrast
    • Checkbox/radio/switch states are visually distinct

    List any elements that appear to have insufficient contrast."
  4. State coverage matrix: Document which states were tested for each component type.

Common Failures:

  • Light gray input borders on white backgrounds
  • Focus rings with low contrast
  • Icon-only buttons where icon color is too light

Fix Recommendations:

  • Increase border thickness and/or darkness
  • Ensure focus indicators have ≥3:1 contrast
  • Test all interactive states, not just default

SC 2.1.1 Keyboard (Level A)

Goal: All functionality must be operable via keyboard.

Test Steps:

  1. Identify key tasks: Ask user or determine primary interactive flows

  2. Attempt keyboard-only completion:

    computer(action='key', text='Tab')        // Navigate forward
    computer(action='key', text='shift+Tab')  // Navigate backward
    computer(action='key', text='Enter')      // Activate buttons/links
    computer(action='key', text='Space')      // Activate buttons, toggle checkboxes
    computer(action='key', text='Escape')     // Close dialogs/menus
    computer(action='key', text='ArrowDown')  // Navigate within widgets
    
  3. At each step:

    • Verify focus is visible (relates to 2.4.7)
    • Verify expected action occurs
    • If blocked, capture screenshot and note the element
  4. Cross-reference with accessibility tree:

    search_elements(tabId, "*")
    
    • Check if blocking element has appropriate role
    • Check if element is focusable / disabled

Common Failures:

  • Custom components using <div> with onClick but no keyboard handler
  • Drag-and-drop only interfaces without keyboard alternative
  • Focus not reaching all interactive elements

Fix Recommendations:

  • Use native interactive elements (<button>, <a>, <input>)
  • Add tabindex="0" and keyboard event handlers to custom components
  • Provide keyboard alternatives for mouse-only interactions
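
Step 4's cross-referencing can be sketched as a filter over an accessibility-tree dump. The dictionary shape below (`role`, `focusable`, `disabled` keys) is an assumed format for illustration; the real `search_elements` output may differ:

```python
# Roles that should normally be reachable by keyboard.
INTERACTIVE_ROLES = {"button", "link", "textbox", "checkbox", "radio",
                     "combobox", "switch", "slider", "menuitem", "tab"}

def unreachable_elements(tree: list[dict]) -> list[dict]:
    """Flag interactive elements the keyboard likely cannot reach."""
    return [el for el in tree
            if el.get("role") in INTERACTIVE_ROLES
            and not el.get("focusable")
            and not el.get("disabled")]

tree = [
    {"role": "button", "name": "Submit", "focusable": True},
    {"role": "button", "name": "Delete", "focusable": False},  # div + onClick
    {"role": "heading", "name": "Cart"},
]
print([el["name"] for el in unreachable_elements(tree)])  # ['Delete']
```

Anything this filter flags is a candidate for the `<div>`-with-`onClick` failure pattern above and should be confirmed by actually tabbing to it.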

SC 2.1.2 No Keyboard Trap (Level A)

Goal: If focus can enter a component, it must be able to exit via keyboard.

Test Steps:

  1. Identify potential trap components:

    • Modals/dialogs
    • Dropdown menus
    • Rich text editors
    • Embedded iframes
    • Custom widgets
  2. For each component:

    • Tab into the component
    • Attempt to Tab out (forward and backward)
    • Attempt Escape to close (if applicable)
    • Document entry point and exit behavior
  3. Capture evidence:

    capture_screenshot(sendToLLM=true)
    
    • Screenshot when entering
    • Screenshot showing focus location
    • Note if any special keys are required and whether instructions are provided

Common Failures:

  • Modal dialogs that don't close on Escape
  • Focus getting "stuck" in embedded content
  • Custom dropdown menus that don't release focus

Fix Recommendations:

  • Implement focus trapping in modals with Escape to close
  • Ensure all custom widgets have documented exit mechanism
  • Provide visible instructions if non-standard keys are required

SC 2.4.3 Focus Order (Level A)

Goal: Focus order must preserve meaning and operability.

Test Steps:

  1. Tab through the page:

    computer(action='key', text='Tab')
    
    • Document the sequence of focused elements
  2. Compare with visual order:

    capture_screenshot(sendToLLM=true)
    
    • Focus sequence should match left-to-right, top-to-bottom reading order
    • Related elements should be adjacent in focus order
  3. Test dynamic content:

    • Open a modal → focus should move to modal
    • Close modal → focus should return to trigger or logical position
    • Expand accordion → new content should be reachable
    • Show error message → focus should move to or near error
  4. Check for problematic patterns:

    search_elements(tabId, "tabindex")
    
    • Positive tabindex values (>0) cause unpredictable focus order and should be avoided
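
The step-2 comparison of focus order against visual order can be roughed out in code once each focus stop is recorded as screen coordinates. The `(x, y)` tuples below are an assumed recording format, not the output of any listed tool:

```python
def follows_reading_order(stops: list[tuple[int, int]],
                          row_tolerance: int = 10) -> bool:
    """True if the focus path runs top-to-bottom, left-to-right.

    Stops within `row_tolerance` px vertically count as one row.
    """
    for (x1, y1), (x2, y2) in zip(stops, stops[1:]):
        same_row = abs(y2 - y1) <= row_tolerance
        if same_row and x2 < x1:
            return False   # moved backwards within a row
        if not same_row and y2 < y1:
            return False   # jumped back up the page
    return True

print(follows_reading_order([(10, 10), (200, 12), (10, 80)]))  # True
print(follows_reading_order([(10, 10), (10, 80), (200, 12)]))  # False
```

A strict left-to-right, top-to-bottom sweep is only a heuristic: multi-column layouts legitimately violate it, so treat a `False` as a prompt for manual review rather than an automatic failure.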

Content truncated.
