whisper-transcription

Transcribe audio/video to text with word-level timestamps using OpenAI Whisper. Use when you need speech-to-text with accurate timing information for each word.

Install

mkdir -p .claude/skills/whisper-transcription && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3254" && unzip -o skill.zip -d .claude/skills/whisper-transcription && rm skill.zip

Installs to .claude/skills/whisper-transcription

About this skill

Whisper Transcription

OpenAI Whisper provides accurate speech-to-text with word-level timestamps.

Installation

pip install openai-whisper
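
openai-whisper relies on ffmpeg to decode audio, so make sure ffmpeg is installed and on your PATH. A quick sanity check that the package is importable (a minimal sketch, nothing here is specific to your project):

import whisper

# Model names bundled with this release of openai-whisper
# ("tiny", "base", "small", "medium", "large", and the ".en" variants).
print(whisper.available_models())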

Model Selection

Use the tiny model when speed matters: it is sufficient for most tasks and runs much faster than the larger models:

Model  | Size   | Speed   | Accuracy
tiny   | 39 MB  | Fastest | Good for clear speech
base   | 74 MB  | Fast    | Better accuracy
small  | 244 MB | Medium  | High accuracy

Recommendation: Start with tiny - it handles clear interview/podcast audio well.
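
Swapping models later is a one-line change. A minimal sketch (the model name is the only assumption; pick any name from the table above):

import whisper

# Switch to "base" or "small" if tiny's output is too rough for your audio.
MODEL_NAME = "tiny"
model = whisper.load_model(MODEL_NAME)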

Basic Usage with Word Timestamps

import whisper
import json

def transcribe_with_timestamps(audio_path, output_path):
    """
    Transcribe audio and get word-level timestamps.

    Args:
        audio_path: Path to audio/video file
        output_path: Path to save JSON output
    """
    # Use tiny model for speed
    model = whisper.load_model("tiny")

    # Transcribe with word timestamps
    result = model.transcribe(
        audio_path,
        word_timestamps=True,
        language="en"  # Specify language for better accuracy
    )

    # Extract words with timestamps
    words = []
    for segment in result["segments"]:
        if "words" in segment:
            for word_info in segment["words"]:
                words.append({
                    "word": word_info["word"].strip(),
                    "start": word_info["start"],
                    "end": word_info["end"]
                })

    with open(output_path, "w") as f:
        json.dump(words, f, indent=2)

    return words
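
A short usage example (the file paths are placeholders for your own files):

# Transcribe an episode and save the word timings to JSON.
words = transcribe_with_timestamps("podcast.mp3", "words.json")
print(f"Transcribed {len(words)} words")
if words:
    print(f"First word '{words[0]['word']}' starts at {words[0]['start']:.2f}s")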

Detecting Specific Words

def find_words(transcription, target_words):
    """
    Find specific words in transcription with their timestamps.

    Args:
        transcription: List of word dicts with 'word', 'start', 'end'
        target_words: Set of words to find (lowercase)

    Returns:
        List of matches with word and timestamp
    """
    matches = []
    target_lower = {w.lower() for w in target_words}

    for item in transcription:
        word = item["word"].lower().strip()
        # Remove punctuation for matching
        clean_word = ''.join(c for c in word if c.isalnum())

        if clean_word in target_lower:
            matches.append({
                "word": clean_word,
                "timestamp": item["start"]
            })

    return matches
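
Combined with the transcription helper above, a sketch of locating a few keywords (the audio path and keyword set are just examples):

# Locate every mention of a few keywords in an interview.
transcription = transcribe_with_timestamps("interview.mp3", "words.json")
hits = find_words(transcription, {"budget", "deadline", "launch"})
for hit in hits:
    print(f"{hit['word']} at {hit['timestamp']:.2f}s")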

Complete Example: Find Filler Words

import whisper
import json

# Filler words to detect
FILLER_WORDS = {
    "um", "uh", "hum", "hmm", "mhm",
    "like", "so", "well", "yeah", "okay",
    "basically", "actually", "literally"
}

def detect_fillers(audio_path, output_path):
    # Load tiny model (fast!)
    model = whisper.load_model("tiny")

    # Transcribe
    result = model.transcribe(audio_path, word_timestamps=True, language="en")

    # Find fillers
    fillers = []
    for segment in result["segments"]:
        for word_info in segment.get("words", []):
            word = word_info["word"].lower().strip()
            clean = ''.join(c for c in word if c.isalnum())

            if clean in FILLER_WORDS:
                fillers.append({
                    "word": clean,
                    "timestamp": round(word_info["start"], 2)
                })

    with open(output_path, "w") as f:
        json.dump(fillers, f, indent=2)

    return fillers

# Usage
detect_fillers("/root/input.mp4", "/root/annotations.json")

Audio Extraction (if needed)

Whisper can process video files directly, but extracting the audio first often gives cleaner results:

# Extract audio as 16kHz mono WAV
ffmpeg -i input.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 1 audio.wav
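
If you prefer to script that step, a minimal Python wrapper around the same ffmpeg command (assumes ffmpeg is on your PATH; paths are placeholders):

import subprocess

def extract_audio(video_path, wav_path="audio.wav"):
    """Extract 16 kHz mono WAV audio from a video file with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1",
         wav_path],
        check=True,
    )
    return wav_path

# Then pass the extracted WAV to Whisper instead of the original video.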

Multi-Word Phrases

For detecting phrases like "you know" or "I mean":

def find_phrases(transcription, phrases):
    """Find multi-word phrases in transcription."""
    matches = []
    # Strip punctuation so tokens like "know," still match "know"
    words = [''.join(c for c in w["word"].lower() if c.isalnum()) for w in transcription]

    for phrase in phrases:
        phrase_words = phrase.lower().split()
        phrase_len = len(phrase_words)

        for i in range(len(words) - phrase_len + 1):
            if words[i:i+phrase_len] == phrase_words:
                matches.append({
                    "word": phrase,
                    "timestamp": transcription[i]["start"]
                })

    return matches
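
Usage follows the same pattern (the audio path and phrase list are placeholders):

# Find common two-word verbal tics and where they occur.
transcription = transcribe_with_timestamps("talk.mp3", "words.json")
phrase_hits = find_phrases(transcription, ["you know", "i mean", "sort of"])
for hit in phrase_hits:
    print(f"'{hit['word']}' at {hit['timestamp']:.2f}s")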

latex-writing

benchflow-ai

Guide LaTeX document authoring following best practices and proper semantic markup. Use proactively when: (1) writing or editing .tex files, (2) writing or editing .nw literate programming files, (3) literate-programming skill is active and working with .nw files, (4) user mentions LaTeX, BibTeX, or document formatting, (5) reviewing LaTeX code quality. Ensures proper use of semantic environments (description vs itemize), csquotes (\enquote{} not ``...''), and cleveref (\cref{} not \S\ref{}).

4935

geospatial-analysis

benchflow-ai

Analyze geospatial data using geopandas with proper coordinate projections. Use when calculating distances between geographic features, performing spatial filtering, or working with plate boundaries and earthquake data.

287

pytorch

benchflow-ai

Building and training neural networks with PyTorch. Use when implementing deep learning models, training loops, data pipelines, model optimization with torch.compile, distributed training, or deploying PyTorch models.

305

search-flights

benchflow-ai

Search flights by origin, destination, and departure date using the bundled flights dataset. Use this skill when proposing flight options or checking whether a route/date combination exists.

214

d3js-visualization

benchflow-ai

Build deterministic, verifiable data visualizations with D3.js (v6). Generate standalone HTML/SVG (and optional PNG) from local data files without external network dependencies. Use when tasks require charts, plots, axes/scales, legends, tooltips, or data-driven SVG output.

174

deep-learning

benchflow-ai

PyTorch, TensorFlow, neural networks, CNNs, transformers, and deep learning for production

83

You might also like

flutter-development

aj-geddes

Build beautiful cross-platform mobile apps with Flutter and Dart. Covers widgets, state management with Provider/BLoC, navigation, API integration, and material design.

643969

drawio-diagrams-enhanced

jgtolentino

Create professional draw.io (diagrams.net) diagrams in XML format (.drawio files) with integrated PMP/PMBOK methodologies, extensive visual asset libraries, and industry-standard professional templates. Use this skill when users ask to create flowcharts, swimlane diagrams, cross-functional flowcharts, org charts, network diagrams, UML diagrams, BPMN, project management diagrams (WBS, Gantt, PERT, RACI), risk matrices, stakeholder maps, or any other visual diagram in draw.io format. This skill includes access to custom shape libraries for icons, clipart, and professional symbols.

591705

ui-ux-pro-max

nextlevelbuilder

"UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 8 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwind). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app, .html, .tsx, .vue, .svelte. Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient."

318398

godot

bfollington

This skill should be used when working on Godot Engine projects. It provides specialized knowledge of Godot's file formats (.gd, .tscn, .tres), architecture patterns (component-based, signal-driven, resource-based), common pitfalls, validation tools, code templates, and CLI workflows. The `godot` command is available for running the game, validating scripts, importing resources, and exporting builds. Use this skill for tasks involving Godot game development, debugging scene/resource files, implementing game systems, or creating new Godot components.

339397

nano-banana-pro

garg-aayush

Generate and edit images using Google's Nano Banana Pro (Gemini 3 Pro Image) API. Use when the user asks to generate, create, edit, modify, change, alter, or update images. Also use when user references an existing image file and asks to modify it in any way (e.g., "modify this image", "change the background", "replace X with Y"). Supports both text-to-image generation and image-to-image editing with configurable resolution (1K default, 2K, or 4K for high resolution). DO NOT read the image file first - use this skill directly with the --input-image parameter.

451339

fastapi-templates

wshobson

Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applications or setting up backend API projects.

304231
