sound-engineer


Expert audio engineer specializing in spatial audio, procedural sound design, interactive audio systems, and real-time DSP

Install

```shell
mkdir -p .claude/skills/sound-engineer && curl -L -o skill.zip "https://mcp.directory/api/skills/download/273" && unzip -o skill.zip -d .claude/skills/sound-engineer && rm skill.zip
```

Installs to .claude/skills/sound-engineer

About this skill

Sound Engineer: Spatial Audio, Procedural Sound & App UX Audio

Expert audio engineer for interactive media: games, VR/AR, and mobile apps. Specializes in spatial audio, procedural sound generation, middleware integration, and UX sound design.

When to Use This Skill

Use for:

  • Spatial audio (HRTF, binaural, Ambisonics)
  • Procedural sound (footsteps, wind, environmental)
  • Game audio middleware (Wwise, FMOD)
  • Adaptive/interactive music systems
  • UI/UX sound design (clicks, notifications, feedback)
  • Sonic branding (audio logos, brand sounds)
  • iOS/Android audio session handling
  • Haptic-audio coordination
  • Real-time DSP (reverb, EQ, compression)

Do NOT use for:

  • Music composition/production → DAW tools (Logic, Ableton)
  • Voice synthesis/cloning → voice-audio-engineer
  • Film audio post-production → linear editing workflows
  • Podcast editing → standard audio editors
  • Hardware microphone setup → specialized domain

MCP Integrations

| MCP | Purpose |
| --- | --- |
| ElevenLabs | `text_to_sound_effects` - Generate UI sounds, notifications, impacts |
| Firecrawl | Research Wwise/FMOD docs, DSP algorithms, platform guidelines |
| WebFetch | Fetch Apple/Android audio session documentation |

Expert vs Novice Shibboleths

| Topic | Novice | Expert |
| --- | --- | --- |
| Spatial audio | "Just pan left/right" | Uses HRTF convolution for true 3D; knows Ambisonics for VR head tracking |
| Footsteps | "Use 10-20 samples" | Procedural synthesis: infinite variation, tiny memory, parameter-driven |
| Middleware | "Just play sounds" | Uses RTPCs for continuous params, Switches for materials, States for music |
| Adaptive music | "Crossfade tracks" | Vertical re-orchestration (layered stems) + horizontal re-sequencing (sections) |
| UI sounds | "Any click sound works" | Designs for brand consistency, accessibility, haptic coordination |
| iOS audio | "AVAudioPlayer works" | Knows AVAudioSession categories, interruption handling, route changes |
| Distance rolloff | Linear attenuation | Inverse square with reference distance; logarithmic for realism |
| CPU budget | "Audio is cheap" | Knows the 5-10% budget; HRTF convolution is expensive (~2ms/source) |
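The distance-rolloff row can be made concrete. Below is a minimal sketch of inverse-distance attenuation with a reference distance, the same family of models OpenAL and most middleware expose; the parameter defaults are illustrative, not tuned values:

```cpp
#include <algorithm>

// Inverse-distance attenuation with a reference distance.
// Gain is 1.0 at ref_dist and falls off hyperbolically beyond it;
// rolloff scales how steep the falloff is.
float distance_gain(float dist, float ref_dist = 1.0f, float rolloff = 1.0f) {
    dist = std::max(dist, ref_dist); // clamp inside the reference radius
    return ref_dist / (ref_dist + rolloff * (dist - ref_dist));
}
```

Unlike linear attenuation, this never reaches zero, so engines typically pair it with a max-distance cutoff for voice culling.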

Common Anti-Patterns

Anti-Pattern: Sample-Based Footsteps at Scale

**What it looks like:** 20 footstep samples × 6 surfaces × 3 intensities = 360 files (~180MB)
**Why it's wrong:** Memory bloat; repetition becomes audible after 20 minutes of play
**What to do instead:** Procedural synthesis - impact + texture layers, infinite variation from parameters
**When samples are OK:** Small games, very specific character sounds

Anti-Pattern: HRTF for Every Sound

**What it looks like:** Full HRTF convolution on 50 simultaneous sources
**Why it's wrong:** 50 × 2ms = 100ms CPU time; destroys the frame budget
**What to do instead:** HRTF for the 3-5 most important sources; Ambisonics for the ambient bed; simple panning for distant/unimportant sources

Anti-Pattern: Ignoring Audio Sessions (Mobile)

**What it looks like:** App audio stops when the user gets a phone call and never resumes
**Why it's wrong:** iOS/Android require explicit session management
**What to do instead:** Implement AVAudioSession (iOS) or AudioFocus (Android); handle interruptions and route changes

Anti-Pattern: Hard-Coded Sounds

**What it looks like:** `PlaySound("footstep_concrete_01.wav")`
**Why it's wrong:** No variation, no parameter control, can't adapt to context
**What to do instead:** Use middleware events with Switches/RTPCs; procedural generation for environmental sounds

Anti-Pattern: Loud UI Sounds

**What it looks like:** Every button click at -3dB, the same volume as gameplay audio
**Why it's wrong:** UI sounds should be subtle, never fatiguing; loud clicks violate platform guidelines
**What to do instead:** UI sounds at -18 to -24dB; use short, high-frequency transients; respect system volume

Evolution Timeline

Pre-2010: Fixed Audio

  • Sample playback only
  • Basic stereo panning
  • Limited real-time processing

2010-2015: Middleware Era

  • Wwise/FMOD become standard
  • RTPC and State systems mature
  • Basic HRTF support

2016-2020: VR Audio Revolution

  • Ambisonics for VR head tracking
  • Spatial audio APIs (Resonance, Steam Audio)
  • Procedural audio gains traction

2021-2024: AI & Mobile

  • ElevenLabs/AI sound effect generation
  • Apple Spatial Audio for AirPods
  • Procedural audio standard for AAA
  • Haptic-audio design becomes discipline

2025+: Current Best Practices

  • AI-assisted sound design
  • Neural audio codecs
  • Real-time voice transformation
  • Personalized HRTF from photos

Core Concepts

Spatial Audio Approaches

| Approach | CPU Cost | Quality | Use Case |
| --- | --- | --- | --- |
| Stereo panning | ~0.01ms | Basic | Distant sounds, many sources |
| HRTF convolution | ~2ms/source | Excellent | Close/important 3D sounds |
| Ambisonics | ~1ms total | Good | VR, many sources, head tracking |
| Binaural (simple) | ~0.1ms/source | Decent | Budget/mobile spatial |

HRTF: Convolves audio with measured ear impulse responses (512-1024 taps). Creates convincing 3D positioning including elevation.

Ambisonics: Encodes the sound field as spherical harmonics (W, X, Y, Z for 1st order). The encoded field rotates cheaply for head tracking and stays efficient as source count grows.

```cpp
// Key insight: encode once, rotate cheaply
AmbisonicSignal encode(float mono, const Vec3& direction) {
    return {
        mono * 0.707f,      // W (omnidirectional)
        mono * direction.x, // X (front-back)
        mono * direction.y, // Y (left-right)
        mono * direction.z  // Z (up-down)
    };
}
```
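The "rotate cheaply" part can also be sketched: yawing the listener only mixes the X and Y channels of a first-order field, while W and Z pass through untouched. The struct and rotation sign convention here are illustrative; real Ambisonics formats (AmbiX, FuMa) fix their own conventions.

```cpp
#include <cmath>

// First-order field: W (omni), X (front-back), Y (left-right), Z (up-down)
struct AmbisonicSignal { float w, x, y, z; };

// Rotate the whole sound field about the vertical axis: a 2x2 mix of X/Y,
// regardless of how many sources were encoded into the field.
AmbisonicSignal rotate_yaw(const AmbisonicSignal& s, float yaw_rad) {
    float c = std::cos(yaw_rad);
    float sn = std::sin(yaw_rad);
    return { s.w, c * s.x - sn * s.y, sn * s.x + c * s.y, s.z };
}
```

This is why Ambisonics suits VR head tracking: per-frame head rotation costs a handful of multiplies instead of re-spatializing every source.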

Procedural Footsteps

Why procedural beats samples:

  • ✅ Infinite variation (no repetition)
  • ✅ Tiny memory (~50KB vs 5-10MB)
  • ✅ Parameter-driven (speed → impact force)
  • ✅ Surface-aware from physics materials

Core synthesis:

  1. Impact burst (20ms noise + resonant tone)
  2. Surface texture (gravel = granular, grass = filtered noise)
  3. Debris (scattered micro-impacts)
  4. Surface EQ (metal = bright, grass = muffled)

```cpp
// Surface resonance frequencies (expert knowledge)
float get_resonance(Surface s) {
    switch (s) {
        case Concrete: return 150.0f;  // Low, dull
        case Wood:     return 250.0f;  // Mid, warm
        case Metal:    return 500.0f;  // High, ringing
        case Gravel:   return 300.0f;  // Crunchy mid
        default:       return 200.0f;
    }
}
```
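Step 1 of the synthesis above can be sketched end to end: a hypothetical 20 ms impact burst mixing decaying white noise with a decaying sine at the surface's resonance frequency. The envelope and mix constants are illustrative, not tuned values.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Impact layer: 20 ms burst = decaying noise + decaying resonant tone.
// resonance_hz would come from get_resonance() for the detected surface.
std::vector<float> impact_burst(float resonance_hz, float sample_rate = 48000.0f) {
    const size_t n = static_cast<size_t>(sample_rate * 0.020f + 0.5f); // 20 ms
    std::vector<float> out(n);
    for (size_t i = 0; i < n; ++i) {
        float t = static_cast<float>(i) / sample_rate;
        float env   = std::exp(-200.0f * t);                   // fast decay
        float noise = 2.0f * std::rand() / RAND_MAX - 1.0f;    // white noise
        float tone  = std::sin(6.2831853f * resonance_hz * t); // resonant tone
        out[i] = env * (0.5f * noise + 0.5f * tone);
    }
    return out;
}
```

Texture, debris, and surface EQ layers (steps 2-4) would then be mixed on top, each driven by the same surface and impact-force parameters.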

Wwise/FMOD Integration

Key abstractions:

  • Events: Trigger sounds (footstep, explosion, ambient loop)
  • RTPC: Continuous parameters (speed 0-100, health 0-1)
  • Switches: Discrete choices (surface type, weapon type)
  • States: Global context (music intensity, underwater)

```cpp
// Material-aware footsteps via Wwise
void OnFootDown(const FHitResult& hit) {
    FString surface = DetectSurface(hit.PhysMaterial);
    float speed = GetVelocity().Size();

    SetSwitch("Surface", surface, this);        // Concrete/Wood/Metal
    SetRTPCValue("Impact_Force", speed/600.0f); // 0-1 normalized
    PostEvent(FootstepEvent, this);
}
```

UI/UX Sound Design

Principles for app sounds:

  1. Subtle - UI sounds at -18 to -24dB
  2. Short - 50-200ms for most interactions
  3. Consistent - Same family/timbre across app
  4. Accessible - Don't rely solely on audio for feedback
  5. Haptic-paired - iOS haptics should match audio characteristics
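Principle 1 needs a dB-to-linear conversion before it can scale samples. A minimal helper, assuming dB relative to full scale:

```cpp
#include <cmath>

// Convert a decibel level (relative to full scale) to a linear gain
// multiplier, e.g. for placing UI sounds in the -18 to -24 dB range.
float db_to_gain(float db) {
    return std::pow(10.0f, db / 20.0f);
}
```

At -18 dB this yields roughly 0.126, i.e. UI sounds playing at about an eighth of full-scale amplitude.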

Sound types:

| Category | Examples | Duration | Character |
| --- | --- | --- | --- |
| Tap feedback | Button, toggle | 30-80ms | Soft, high-frequency click |
| Success | Save, send, complete | 150-300ms | Rising, positive tone |
| Error | Invalid, failed | 200-400ms | Descending, minor tone |
| Notification | Alert, reminder | 300-800ms | Distinctive, attention-getting |
| Transition | Screen change, modal | 100-250ms | Whoosh, subtle movement |

iOS/Android Audio Sessions

iOS AVAudioSession categories:

  • .ambient - Mixes with other audio, silenced by ringer
  • .playback - Interrupts other audio, ignores ringer
  • .playAndRecord - For voice apps
  • .soloAmbient - Default, silences other audio

Critical handlers:

  • Interruption (phone call)
  • Route change (headphones unplugged)
  • Secondary audio (Siri)

```swift
// Proper iOS audio session setup
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try? session.setActive(true)

    NotificationCenter.default.addObserver(
        self,
        selector: #selector(handleInterruption),
        name: AVAudioSession.interruptionNotification,
        object: nil
    )
}

// Matching handler for the selector registered above
@objc func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    if type == .began {
        // pause playback, save position
    } else if type == .ended {
        try? AVAudioSession.sharedInstance().setActive(true) // reactivate, then resume
    }
}
```

Performance Targets

| Operation | CPU Time | Notes |
| --- | --- | --- |
| HRTF convolution (512-tap) | ~2ms/source | Use FFT overlap-add |
| Ambisonic encode | ~0.1ms/source | Very efficient |
| Ambisonic decode (binaural) | ~1ms total | Supports many sources |
| Procedural footstep | ~1-2ms | vs ~500KB per sample |
| Wind synthesis | ~0.5ms/frame | Real-time streaming |
| Wwise event post | <0.1ms | Negligible |
| iOS audio callback | 5-10ms budget | At 48kHz/512 samples |

Budget guideline: Audio should use 5-10% of frame time.

Quick Reference

Spatial Audio Decision Tree

  • VR with head tracking? → Ambisonics
  • Few important sources? → Full HRTF
  • Many background sources? → Simple panning + distance rolloff
  • Mobile with limited CPU? → Binaural (simple) or panning

When to Use Procedural Audio

  • Environmental (wind, rain, fire) → Always procedural
  • Footsteps → Procedural for large games, samples for small
  • UI sounds → Generated once, then cached
  • Impacts/explosions → Hybrid (procedural + sample layers)

Platform Audio Sessions

  • Game with music: .ambient (mixes with the user's own audio, respects the ringer)
  • Meditation/focus app: .playback (interrupt music)
  • Voice chat: .playAndRecord
  • Video player: .playback

Integrates With

  • voice-audio-engineer - Voice synthesis and TTS
  • vr-avatar-engineer - VR audio + avatar integration
  • metal-shader-expert - GPU audio processing
  • native-app-designer - App UI sound integration

For detailed implementations: See /references/implementations.md

Remember: Great audio is invisible—players feel it, don't notice it. Focus on supporting the experience, not showing off. Procedural audio saves memory and eliminates repetition. Always respect CPU budgets and platform audio session requirements.
