filler-word-processing


Process filler word annotations to generate video edit lists. Use when working with timestamp annotations for removing speech disfluencies (um, uh, like, you know) from audio/video content.

Install

mkdir -p .claude/skills/filler-word-processing && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4409" && unzip -o skill.zip -d .claude/skills/filler-word-processing && rm skill.zip

Installs to .claude/skills/filler-word-processing

About this skill

Filler Word Processing

Annotation Format

Typical annotation JSON structure:

[
  {"word": "um", "timestamp": 12.5},
  {"word": "like", "timestamp": 25.3},
  {"word": "you know", "timestamp": 45.8}
]

Converting Annotations to Cut Segments

Each filler word annotation marks when the word starts. To remove it, use word-specific durations since different fillers have different lengths:

import json

# Word-specific durations (in seconds)
WORD_DURATIONS = {
    "uh": 0.3,
    "um": 0.4,
    "hum": 0.6,
    "hmm": 0.6,
    "mhm": 0.55,
    "like": 0.3,
    "yeah": 0.35,
    "so": 0.25,
    "well": 0.35,
    "okay": 0.4,
    "basically": 0.55,
    "you know": 0.55,
    "i mean": 0.5,
    "kind of": 0.5,
    "i guess": 0.5,
}
DEFAULT_DURATION = 0.4

def annotations_to_segments(annotations_file, buffer=0.05):
    """
    Convert filler word annotations to (start, end) cut segments.

    Args:
        annotations_file: Path to JSON annotations
        buffer: Small buffer before the word (seconds)

    Returns:
        List of (start, end) tuples representing segments to remove
    """
    with open(annotations_file) as f:
        annotations = json.load(f)

    segments = []
    for ann in annotations:
        word = ann.get('word', '').lower().strip()
        timestamp = ann['timestamp']
        # Use word-specific duration, fall back to default
        word_duration = WORD_DURATIONS.get(word, DEFAULT_DURATION)
        # Cut starts slightly before the word
        start = max(0, timestamp - buffer)
        # Cut ends after word duration
        end = timestamp + word_duration
        segments.append((start, end))

    return segments
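
For example, running the converter on the sample annotations shown earlier (written to a local file here purely for illustration, reusing the json import above) yields one cut segment per filler:

sample = [
    {"word": "um", "timestamp": 12.5},
    {"word": "like", "timestamp": 25.3},
    {"word": "you know", "timestamp": 45.8},
]
with open("annotations.json", "w") as f:
    json.dump(sample, f)

print(annotations_to_segments("annotations.json"))
# Roughly [(12.45, 12.9), (25.25, 25.6), (45.75, 46.35)];
# exact floats may differ by tiny rounding amounts.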

Merging Overlapping Segments

When filler words are close together, merge their cut segments:

def merge_overlapping_segments(segments, min_gap=0.1):
    """
    Merge segments that overlap or are very close together.

    Args:
        segments: List of (start, end) tuples
        min_gap: Minimum gap to keep segments separate

    Returns:
        Merged list of segments
    """
    if not segments:
        return []

    # Sort by start time
    sorted_segs = sorted(segments)
    merged = [sorted_segs[0]]

    for start, end in sorted_segs[1:]:
        prev_start, prev_end = merged[-1]

        # If this segment overlaps or is very close to previous
        if start <= prev_end + min_gap:
            # Extend the previous segment
            merged[-1] = (prev_start, max(prev_end, end))
        else:
            merged.append((start, end))

    return merged
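
For instance, two fillers whose cuts end and start within min_gap of each other collapse into a single segment:

print(merge_overlapping_segments([(10.0, 10.4), (10.45, 10.8)]))
# [(10.0, 10.8)] -- 10.45 <= 10.4 + 0.1, so the cuts merge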

Complete Processing Pipeline

def process_filler_annotations(annotations_file, buffer=0.05, min_gap=0.1):
    """Full pipeline: load annotations -> create segments -> merge overlaps."""

    # Load annotations and build one cut segment per filler word
    # (durations come from WORD_DURATIONS, not a parameter)
    segments = annotations_to_segments(annotations_file, buffer)

    # Merge overlapping or near-adjacent cuts
    merged = merge_overlapping_segments(segments, min_gap)

    return merged
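
Downstream editors usually want the spans to keep rather than the spans to cut. A minimal sketch of that inversion (the segments_to_keep_list name and the total_duration argument are illustrative, not part of the skill itself):

def segments_to_keep_list(cut_segments, total_duration):
    """Invert sorted, merged cut segments into (start, end) spans to keep."""
    keep = []
    cursor = 0.0
    for start, end in cut_segments:
        if start > cursor:
            keep.append((cursor, start))
        cursor = max(cursor, end)
    # Keep whatever remains after the last cut
    if cursor < total_duration:
        keep.append((cursor, total_duration))
    return keep

# total_duration would come from the media file (e.g., via ffprobe)
keep = segments_to_keep_list(process_filler_annotations("annotations.json"), 120.0)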

Tuning Parameters

Parameter | Typical Value | Notes
WORD_DURATIONS | varies by word | Quick hesitations (um, uh) ~0.3-0.4 s; quick single words (like, yeah) ~0.25-0.35 s; phrases (you know, i mean) ~0.5-0.55 s
buffer | 0.05 s | Small buffer captures the word onset
min_gap | 0.1 s | Prevents micro-segments between close fillers

Word Duration Guidelines

Category | Words | Duration
Quick hesitations | uh, um | 0.3-0.4 s
Sustained hums (drawn out while thinking) | hum, hmm, mhm | 0.55-0.6 s
Quick single words | like, yeah, so, well | 0.25-0.35 s
Longer single words | okay, basically | 0.4-0.55 s
Multi-word phrases | you know, i mean, kind of, i guess | 0.5-0.55 s

Quality Considerations

  • Too aggressive: Cuts into adjacent words, sounds choppy
  • Too conservative: Filler words partially audible
  • Sweet spot: Clean cuts with natural-sounding result

Test with a few samples before processing full video.
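
One way to spot-check is to render a preview with ffmpeg's select/aselect filters, dropping the cut segments. A sketch, assuming ffmpeg is installed; the ffmpeg_remove_filter helper and the file names are illustrative:

def ffmpeg_remove_filter(cut_segments):
    """Build select/aselect filter expressions that drop the cut segments."""
    expr = "+".join(f"between(t,{s:.3f},{e:.3f})" for s, e in cut_segments)
    vf = f"select='not({expr})',setpts=N/FRAME_RATE/TB"
    af = f"aselect='not({expr})',asetpts=N/SR/TB"
    return vf, af

vf, af = ffmpeg_remove_filter(process_filler_annotations("annotations.json"))
print(f'ffmpeg -i input.mp4 -vf "{vf}" -af "{af}" preview.mp4')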
