axiom-vision

subject segmentation, VNGenerateForegroundInstanceMaskRequest, isolate object from hand, VisionKit subject lifting, image foreground detection, instance masks, class-agnostic segmentation, VNRecognizeTextRequest, OCR, VNDetectBarcodesRequest, DataScannerViewController, document scanning, RecognizeDocumentsRequest

Install

mkdir -p .claude/skills/axiom-vision && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9099" && unzip -o skill.zip -d .claude/skills/axiom-vision && rm skill.zip

Installs to .claude/skills/axiom-vision

About this skill

Computer Vision

You MUST use this skill for ANY computer vision work using the Vision framework.

Quick Reference

Symptom / Task → Reference
Subject segmentation, lifting → skills/vision-framework.md
Hand/body pose detection → skills/vision-framework.md
Text recognition (OCR) → skills/vision-framework.md
Barcode/QR code detection → skills/vision-framework.md
Document scanning → skills/vision-framework.md
DataScannerViewController → skills/vision-framework.md
Structured document extraction (iOS 26+) → skills/vision-framework.md
Isolate object excluding hand → skills/vision-framework.md
Vision framework API reference → skills/vision-ref.md
Visual Intelligence integration (iOS 26+) → skills/vision-ref.md
Subject not detected → skills/vision-diag.md
Hand/body pose missing landmarks → skills/vision-diag.md
Low confidence observations → skills/vision-diag.md
UI freezing during processing → skills/vision-diag.md
Coordinate conversion bugs → skills/vision-diag.md
Text not recognized / wrong chars → skills/vision-diag.md
Barcode not detected → skills/vision-diag.md
DataScanner blank / no items → skills/vision-diag.md
Document edges not detected → skills/vision-diag.md

Decision Tree

digraph vision {
    start [label="Computer vision task" shape=ellipse];
    what [label="What do you need?" shape=diamond];

    start -> what;
    what -> "skills/vision-framework.md" [label="implement feature"];
    what -> "skills/vision-ref.md" [label="API reference"];
    what -> "skills/vision-ref.md" [label="Visual Intelligence"];
    what -> "skills/vision-diag.md" [label="something broken"];
}
  1. Implementing (pose, segmentation, OCR, barcodes, documents, live scanning)? → skills/vision-framework.md
  2. Visual Intelligence system integration (camera feature, iOS 26+)? → skills/vision-ref.md (Visual Intelligence section)
  3. Need API reference / code examples? → skills/vision-ref.md
  4. Debugging issues (detection failures, confidence, coordinates)? → skills/vision-diag.md

Critical Patterns

Implementation (skills/vision-framework.md):

  • Decision tree for choosing the right Vision API
  • Subject segmentation with VisionKit (Vision-framework sketch after this list)
  • Isolating objects while excluding hands (combining APIs)
  • Hand/body pose detection (21/18 landmarks)
  • Text recognition (fast vs accurate modes)
  • Barcode detection with symbology selection
  • Document scanning and structured extraction (iOS 26+)
  • Live scanning with DataScannerViewController
  • CoreImage HDR compositing
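
A minimal sketch of the Vision-framework route to the subject lifting referenced above (VNGenerateForegroundInstanceMaskRequest, iOS 17+); the liftSubjects name and CIImage input are illustrative, and vision-framework.md covers the VisionKit route and hand exclusion:

import Vision
import CoreImage

// Sketch: lift every foreground subject from an image (iOS 17+).
// Error handling is reduced to `throws`; orientation handling is omitted.
func liftSubjects(from image: CIImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(ciImage: image, options: [:])
    try handler.perform([request])

    // One class-agnostic observation describes all detected instances.
    guard let observation = request.results?.first else { return nil }

    // Composite the original pixels through the mask for every instance.
    return try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: false
    )
}

Restricting the IndexSet to selected instances rather than allInstances is one starting point for the hand-exclusion pattern listed above.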

Diagnostics (skills/vision-diag.md):

  • Subject detection failures (edge of frame, lighting)
  • Landmark tracking issues (confidence thresholds)
  • Performance optimization (frame skipping, downscaling)
  • Coordinate conversion (lower-left vs top-left origin; sketch after this list)
  • Text recognition failures (language, contrast)
  • Barcode detection issues (symbology, size, glare)
  • DataScanner troubleshooting (availability, data types)
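
The coordinate item above is the most common source of "detection works but the overlay is wrong" bugs: Vision reports normalized rectangles with a lower-left origin. A small sketch of the conversion, assuming an upright image and a UIKit-style top-left destination (both assumptions, not taken from the skill files):

import Vision
import CoreGraphics

// Convert a Vision bounding box (normalized, lower-left origin) into
// pixel coordinates with a top-left origin, as UIKit and SwiftUI expect.
// Assumes the image is upright; EXIF orientation handling is omitted.
func imageRect(for normalizedBox: CGRect, imageWidth: Int, imageHeight: Int) -> CGRect {
    // Scale from normalized [0, 1] space into pixel space (still lower-left origin).
    var rect = VNImageRectForNormalizedRect(normalizedBox, imageWidth, imageHeight)
    // Flip the Y axis so the origin moves to the top-left corner.
    rect.origin.y = CGFloat(imageHeight) - rect.origin.y - rect.height
    return rect
}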

Anti-Rationalization

Thought → Reality
"Vision framework is just a request/handler pattern" → Vision has coordinate conversion, confidence thresholds, and performance gotchas. vision-framework.md covers them.
"I'll handle text recognition without the skill" → VNRecognizeTextRequest has fast/accurate modes and language-specific settings. vision-framework.md has the patterns.
"Subject segmentation is straightforward" → Instance masks have HDR compositing and hand-exclusion patterns. vision-framework.md covers the complex scenarios.
"Visual Intelligence is just the camera API" → Visual Intelligence is a system-level feature requiring IntentValueQuery and SemanticContentDescriptor. vision-ref.md has the integration section.
"I'll just process on the main thread" → Vision blocks the UI on older devices; users on an iPhone 12 will see a frozen app. Adding a background queue takes about 15 minutes (sketch below).

Example Invocations

User: "How do I detect hand pose in an image?" → See skills/vision-framework.md

User: "Isolate a subject but exclude the user's hands" → See skills/vision-framework.md

User: "How do I read text from an image?" → See skills/vision-framework.md

User: "Scan QR codes with the camera" → See skills/vision-framework.md

User: "Subject detection isn't working" → See skills/vision-diag.md

User: "Text recognition returns wrong characters" → See skills/vision-diag.md

User: "Show me VNDetectHumanBodyPoseRequest examples" → See skills/vision-ref.md

User: "How do I make my app work with Visual Intelligence?" → See skills/vision-ref.md

User: "RecognizeDocumentsRequest API reference" → See skills/vision-ref.md

axiom-ios-build

CharlesWiltgen

Use when ANY iOS build fails, a test crashes, Xcode misbehaves, or an environment issue occurs, before debugging code. Covers build failures, compilation errors, dependency conflicts, simulator problems, and environment-first diagnostics.


axiom-camera-capture-ref

CharlesWiltgen

Reference — AVCaptureSession, AVCapturePhotoSettings, AVCapturePhotoOutput, RotationCoordinator, photoQualityPrioritization, deferred processing, AVCaptureMovieFileOutput, session presets, capture device APIs


axiom-swiftdata

CharlesWiltgen

Use when working with SwiftData - @Model definitions, @Query in SwiftUI, @Relationship macros, ModelContext patterns, CloudKit integration, iOS 26+ features, and Swift 6 concurrency with @MainActor — Apple's native persistence framework


axiom-swiftui-nav-diag

CharlesWiltgen

Use when debugging navigation not responding, unexpected pops, deep links showing wrong screen, state lost on tab switch or background, crashes in navigationDestination, or any SwiftUI navigation failure - systematic diagnostics with production crisis defense


axiom-ios-vision

CharlesWiltgen

Use when implementing ANY computer vision feature - image analysis, object detection, pose detection, person segmentation, subject lifting, hand/body pose tracking.


axiom-haptics

CharlesWiltgen

Use when implementing haptic feedback, Core Haptics patterns, audio-haptic synchronization, or debugging haptic issues - covers UIFeedbackGenerator, CHHapticEngine, AHAP patterns, and Apple's Causality-Harmony-Utility design principles from WWDC 2021


