axiom-ui-testing
Use when writing UI tests, recording interactions, or debugging flaky XCTest UI tests (race conditions, timing dependencies, inconsistent pass/fail behavior). Covers Recording UI Automation (WWDC 2025), condition-based waiting, network conditioning, multi-factor testing, crash debugging, and accessibility-first testing patterns.
Install
mkdir -p .claude/skills/axiom-ui-testing && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6775" && unzip -o skill.zip -d .claude/skills/axiom-ui-testing && rm skill.zip
Installs to .claude/skills/axiom-ui-testing
About this skill
UI Testing
Overview
Wait for conditions, not arbitrary timeouts. Core principle: flaky tests come from guessing how long operations take. Condition-based waiting eliminates race conditions.
NEW in WWDC 2025: Recording UI Automation allows you to record interactions, replay across devices/languages, and review video recordings of test runs.
Example Prompts
These are real questions developers ask that this skill is designed to answer:
1. "My UI tests pass locally on my Mac but fail in CI. How do I make them more reliable?"
→ The skill shows condition-based waiting patterns that work across devices/speeds, eliminating CI timing differences
2. "My tests use sleep(2) and sleep(5) but they're still flaky. How do I replace arbitrary timeouts with real conditions?"
→ The skill demonstrates waitForExistence, XCTestExpectation, and polling patterns for data loads, network requests, and animations
3. "I just recorded a test using Xcode 26's Recording UI Automation. How do I review the video and debug failures?"
→ The skill covers Video Debugging workflows to analyze recordings and find the exact step where tests fail
4. "My test is failing on iPad but passing on iPhone. How do I write tests that work across all device sizes?"
→ The skill explains multi-factor testing strategies and device-independent predicates for robust cross-device testing
5. "I want to write tests that are not flaky. What are the critical patterns I need to know?"
→ The skill provides condition-based waiting templates, accessibility-first patterns, and the decision tree for reliable test architecture
Red Flags — Test Reliability Issues
If you see ANY of these, suspect timing issues:
- Tests pass locally, fail in CI (timing differences)
- Tests sometimes pass, sometimes fail (race conditions)
- Tests use sleep() or Thread.sleep() (arbitrary delays)
- Tests fail with "UI element not found" then pass on retry
- Long test runs (waiting for worst-case scenarios)
Quick Decision Tree
Test failing?
├─ Element not found?
│ └─ Use waitForExistence(timeout:) not sleep()
├─ Passes locally, fails CI?
│ └─ Replace sleep() with condition polling
├─ Animation causing issues?
│ └─ Wait for animation completion, don't disable
└─ Network request timing?
└─ Use XCTestExpectation or waitForExistence
Core Pattern: Condition-Based Waiting
❌ WRONG (Arbitrary Timeout):
func testButtonAppears() {
app.buttons["Login"].tap()
sleep(2) // ❌ Guessing it takes 2 seconds
XCTAssertTrue(app.buttons["Dashboard"].exists)
}
✅ CORRECT (Wait for Condition):
func testButtonAppears() {
app.buttons["Login"].tap()
let dashboard = app.buttons["Dashboard"]
XCTAssertTrue(dashboard.waitForExistence(timeout: 5))
}
Common UI Testing Patterns
Pattern 1: Waiting for Elements
// Wait for element to appear
func waitForElement(_ element: XCUIElement, timeout: TimeInterval = 5) -> Bool {
return element.waitForExistence(timeout: timeout)
}
// Usage
XCTAssertTrue(waitForElement(app.buttons["Submit"]))
Pattern 2: Waiting for Element to Disappear
func waitForElementToDisappear(_ element: XCUIElement, timeout: TimeInterval = 5) -> Bool {
let predicate = NSPredicate(format: "exists == false")
let expectation = XCTNSPredicateExpectation(predicate: predicate, object: element)
let result = XCTWaiter().wait(for: [expectation], timeout: timeout)
return result == .completed
}
// Usage
XCTAssertTrue(waitForElementToDisappear(app.activityIndicators["Loading"]))
Pattern 3: Waiting for Specific State
func waitForButton(_ button: XCUIElement, toBeEnabled enabled: Bool, timeout: TimeInterval = 5) -> Bool {
let predicate = NSPredicate(format: "isEnabled == %@", NSNumber(value: enabled))
let expectation = XCTNSPredicateExpectation(predicate: predicate, object: button)
let result = XCTWaiter().wait(for: [expectation], timeout: timeout)
return result == .completed
}
// Usage
let submitButton = app.buttons["Submit"]
XCTAssertTrue(waitForButton(submitButton, toBeEnabled: true))
submitButton.tap()
Pattern 4: Accessibility Identifiers
Set in app:
Button("Submit") {
// action
}
.accessibilityIdentifier("submitButton")
Use in tests:
func testSubmitButton() {
let submitButton = app.buttons["submitButton"] // Uses identifier, not label
XCTAssertTrue(submitButton.waitForExistence(timeout: 5))
submitButton.tap()
}
Why: Accessibility identifiers don't change with localization and remain stable across UI updates.
Pattern 5: Network Request Delays
func testDataLoads() {
app.buttons["Refresh"].tap()
// Wait for loading indicator to disappear
let loadingIndicator = app.activityIndicators["Loading"]
XCTAssertTrue(waitForElementToDisappear(loadingIndicator, timeout: 10))
// Now verify data loaded
XCTAssertTrue(app.cells.count > 0)
}
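If data renders a moment after the spinner disappears, the cell-count assertion can still race. A sketch (element names are illustrative) that waits for the first cell directly instead:

```swift
import XCTest

func testDataLoads_waitsForFirstCell() {
    let app = XCUIApplication()
    app.launch()
    app.buttons["Refresh"].tap()

    // Wait for the first cell itself rather than checking count immediately;
    // waitForExistence polls until the element appears or the timeout expires.
    let firstCell = app.cells.element(boundBy: 0)
    XCTAssertTrue(firstCell.waitForExistence(timeout: 10))
}
```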
Pattern 6: Animation Handling
func testAnimatedTransition() {
app.buttons["Next"].tap()
// Wait for destination view to appear
let destinationView = app.otherElements["DestinationView"]
XCTAssertTrue(destinationView.waitForExistence(timeout: 2))
// Optional: Wait a bit more for animation to settle
// Only if absolutely necessary
RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.3))
}
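If the fixed 0.3-second settle delay still proves flaky, an alternative sketch waits until the element is hittable, which only becomes true once the transition has finished and the element can receive taps:

```swift
import XCTest

func waitUntilHittable(_ element: XCUIElement, timeout: TimeInterval = 2) -> Bool {
    // isHittable is true only when the element is on screen at a
    // tappable position, i.e. after the animation settles.
    let predicate = NSPredicate(format: "isHittable == true")
    let expectation = XCTNSPredicateExpectation(predicate: predicate, object: element)
    return XCTWaiter().wait(for: [expectation], timeout: timeout) == .completed
}
```

Usage: `XCTAssertTrue(waitUntilHittable(destinationView))` before tapping anything inside the destination view.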
Testing Checklist
Before Writing Tests
- Use accessibility identifiers for all interactive elements
- Avoid hardcoded labels (use identifiers instead)
- Plan for network delays and animations
- Choose appropriate timeouts (2s UI, 10s network)
When Writing Tests
- Use waitForExistence() not sleep()
- Use predicates for complex conditions
- Test both success and failure paths
- Make tests independent (can run in any order)
After Writing Tests
- Run tests 10 times locally (catch flakiness)
- Run tests on slowest supported device
- Run tests in CI environment
- Check test duration (if >30s per test, optimize)
Xcode UI Testing Tips
Launch Arguments for Testing
func testExample() {
let app = XCUIApplication()
app.launchArguments = ["UI-Testing"]
app.launch()
}
In app code:
if ProcessInfo.processInfo.arguments.contains("UI-Testing") {
// Use mock data, skip onboarding, etc.
}
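launchArguments pairs naturally with launchEnvironment when the test setup needs values rather than just flags. A sketch, where the MOCK_API_URL key is an assumed convention, not a system one:

```swift
// In the test target: pass both a flag and a key/value pair.
let app = XCUIApplication()
app.launchArguments = ["UI-Testing"]
app.launchEnvironment = ["MOCK_API_URL": "http://localhost:8080"]
app.launch()

// In app code: read the value back if it was provided.
if let mockURL = ProcessInfo.processInfo.environment["MOCK_API_URL"] {
    // Point the networking layer at the local mock server.
}
```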
Faster Test Execution
override func setUpWithError() throws {
continueAfterFailure = false // Stop on first failure
}
Debugging Failing Tests
func testExample() {
// Handle system alerts that would otherwise block the test
addUIInterruptionMonitor(withDescription: "Alert") { alert in
alert.buttons["OK"].tap()
return true
}
// Print element hierarchy
print(app.debugDescription)
}
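To actually capture screenshots for later review (the interruption monitor above only dismisses alerts), one sketch attaches an XCTAttachment from a helper; the helper name is illustrative:

```swift
import XCTest

extension XCTestCase {
    // Attach a full-screen screenshot to the test report;
    // .keepAlways retains it even when the test passes.
    func captureScreenshot(named name: String) {
        let screenshot = XCUIScreen.main.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.name = name
        attachment.lifetime = .keepAlways
        add(attachment)
    }
}
```

Calling `captureScreenshot(named: "after-login")` at key steps makes CI failures much easier to diagnose from the result bundle.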
Common Mistakes
❌ Using sleep() for Everything
sleep(5) // ❌ Wastes time if operation completes in 1s
❌ Not Handling Animations
app.buttons["Next"].tap()
XCTAssertTrue(app.buttons["Back"].exists) // ❌ May fail during animation
❌ Hardcoded Text Labels
app.buttons["Submit"].tap() // ❌ Breaks with localization
❌ Tests Depend on Each Other
// ❌ Test 2 assumes Test 1 ran first
func test1_Login() { /* ... */ }
func test2_ViewDashboard() { /* assumes logged in */ }
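One way to break the dependency is to put the shared precondition in setUp so every test starts from the same state. A sketch, where the login flow and identifiers are hypothetical:

```swift
import XCTest

final class DashboardTests: XCTestCase {
    let app = XCUIApplication()

    override func setUpWithError() throws {
        continueAfterFailure = false
        app.launchArguments = ["UI-Testing"]
        app.launch()
        logIn()  // each test logs in itself; no ordering assumptions
    }

    private func logIn() {
        // Hypothetical flow; replace with your app's actual login steps.
        app.textFields["usernameField"].tap()
        app.textFields["usernameField"].typeText("test-user")
        app.buttons["loginButton"].tap()
        XCTAssertTrue(app.buttons["Dashboard"].waitForExistence(timeout: 5))
    }

    func testViewDashboard() {
        app.buttons["Dashboard"].tap()
        // ...assertions on dashboard content...
    }
}
```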
❌ No Timeout Strategy
element.waitForExistence(timeout: 100) // ❌ Too long
element.waitForExistence(timeout: 0.1) // ❌ Too short
Use appropriate timeouts:
- UI animations: 2-3 seconds
- Network requests: 10 seconds
- Complex operations: 30 seconds max
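These tiers can be centralized so individual tests don't scatter magic numbers. A minimal sketch with an assumed helper type, not an XCTest API:

```swift
import XCTest

// Assumed helper: one place to tune timeouts for the whole suite.
enum Timeout {
    static let animation: TimeInterval = 3   // UI animations
    static let network: TimeInterval = 10    // network requests
    static let complex: TimeInterval = 30    // complex operations
}

// Usage
// XCTAssertTrue(app.buttons["Dashboard"].waitForExistence(timeout: Timeout.animation))
```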
Real-World Impact
Before (using sleep()):
- Test suite: 15 minutes (waiting for worst-case)
- Flaky tests: 20% failure rate
- CI failures: 50% require retry
After (condition-based waiting):
- Test suite: 5 minutes (waits only as needed)
- Flaky tests: <2% failure rate
- CI failures: <5% require retry
Key insight: Tests finish faster AND are more reliable when waiting for actual conditions instead of guessing times.
Recording UI Automation
Overview
NEW in Xcode 26: Record, replay, and review UI automation tests with video recordings.
Three Phases:
- Record — Capture interactions (taps, swipes, hardware button presses) as Swift code
- Replay — Run across multiple devices, languages, regions, orientations
- Review — Watch video recordings, analyze failures, view UI element overlays
Supported Platforms: iOS, iPadOS, macOS, watchOS, tvOS, visionOS (Designed for iPad)
How UI Automation Works
Key Principles:
- UI automation interacts with your app as a person does, using gestures and hardware events
- Runs completely independently from your app (app models/data not directly accessible)
- Uses accessibility framework as underlying technology
- Tells the OS which gestures to perform, then waits for each to complete synchronously, one at a time
Actions include:
- Launching your app
- Interacting with buttons and navigation
- Setting system state (Dark Mode, localization, etc.)
- Setting simulated location
Accessibility is the Foundation
Critical Understanding: Accessibility provides information directly to UI automation.
What accessibility sees:
- Element types (button, text, image, etc.)
- Labels (visible text)
- Values (current state for checkboxes, etc.)
- Frames (element positions)
- Identifiers (accessibility identifiers — NOT localized)
Best Practice: Great accessibility experience = great UI automation experience.
Preparing Your App for Recording
Step 1: Add Accessibility Identifiers
SwiftUI:
Button("Submit") {
// action
}
.accessibilityIdentifier("submitButton")
// Make identifiers specific to instance
List(landmarks) { landmark in
LandmarkRow(landmark)
.accessibilityIden
---
*Content truncated.*