deepgram-migration-deep-dive


Deep dive into complex Deepgram migrations and provider transitions. Use when migrating from other transcription providers, planning large-scale migrations, or implementing phased rollout strategies. Trigger with phrases like "deepgram migration", "switch to deepgram", "migrate transcription", "deepgram from AWS", "deepgram from Google".

Install

mkdir -p .claude/skills/deepgram-migration-deep-dive && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7738" && unzip -o skill.zip -d .claude/skills/deepgram-migration-deep-dive && rm skill.zip

Installs to .claude/skills/deepgram-migration-deep-dive

About this skill

Deepgram Migration Deep Dive

Current State

!npm list @deepgram/sdk 2>/dev/null | grep deepgram || echo 'Not installed'
!npm list @aws-sdk/client-transcribe 2>/dev/null | grep transcribe || echo 'AWS Transcribe SDK not found'
!pip show google-cloud-speech 2>/dev/null | grep Version || echo 'Google STT not found'

Overview

Migrate to Deepgram from AWS Transcribe, Google Cloud Speech-to-Text, Azure Cognitive Services, or OpenAI Whisper. Uses an adapter pattern with a unified interface, parallel running for quality validation, percentage-based traffic shifting, and automated rollback.

Feature Mapping

AWS Transcribe -> Deepgram

| AWS Transcribe | Deepgram | Notes |
|---|---|---|
| LanguageCode: 'en-US' | language: 'en' | ISO 639-1 (2-letter) |
| ShowSpeakerLabels: true | diarize: true | Same feature, different param |
| VocabularyName: 'custom' | keywords: ['term:1.5'] | Inline boosting, no pre-upload |
| ContentRedactionType: 'PII' | redact: ['pci', 'ssn'] | Granular PII categories |
| OutputBucketName | callback: 'https://...' | Callback URL, not S3 |
| Job polling model | Sync response or callback | No polling needed |
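If legacy call sites pass AWS-style option objects, a small translation helper makes the table above mechanical. A minimal sketch, assuming a simplified AWSTranscribeOptions shape (the shape and helper name are illustrative, not part of the skill):

// Hypothetical helper: maps the AWS Transcribe settings above onto Deepgram request options
interface AWSTranscribeOptions {
  LanguageCode?: string;          // e.g. 'en-US'
  ShowSpeakerLabels?: boolean;
  VocabularyTerms?: string[];     // stand-in for terms behind a pre-uploaded VocabularyName
  ContentRedactionType?: 'PII';
}

function toDeepgramOptions(aws: AWSTranscribeOptions) {
  return {
    language: aws.LanguageCode?.split('-')[0] ?? 'en',       // 'en-US' -> 'en'
    diarize: aws.ShowSpeakerLabels ?? false,
    keywords: aws.VocabularyTerms?.map(t => `${t}:1.5`),     // inline boosting replaces vocabularies
    redact: aws.ContentRedactionType === 'PII' ? ['pci', 'ssn'] : undefined,
  };
}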

Google Cloud STT -> Deepgram

| Google STT | Deepgram | Notes |
|---|---|---|
| RecognitionConfig.encoding | Auto-detected | Deepgram auto-detects format |
| RecognitionConfig.sampleRateHertz | sample_rate (live only) | REST auto-detects |
| RecognitionConfig.model: 'latest_long' | model: 'nova-3' | Direct mapping |
| SpeakerDiarizationConfig | diarize: true | Simpler configuration |
| StreamingRecognize | listen.live() | WebSocket vs gRPC |
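The StreamingRecognize row is the biggest code change: a gRPC stream becomes a WebSocket connection through listen.live(). A minimal sketch, assuming the @deepgram/sdk v3 live API and an existing audio chunk source (audioSource is a placeholder for your own stream):

import { createClient, LiveTranscriptionEvents } from '@deepgram/sdk';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);
// Replaces a Google StreamingRecognize call with a Deepgram WebSocket connection
const connection = deepgram.listen.live({ model: 'nova-3', smart_format: true, language: 'en' });

connection.on(LiveTranscriptionEvents.Open, () => {
  // forward audio chunks as they arrive (microphone, RTP pipeline, etc.)
  audioSource.on('data', (chunk: Buffer) => connection.send(chunk));
});

connection.on(LiveTranscriptionEvents.Transcript, (data) => {
  console.log(data.channel.alternatives[0].transcript);
});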

OpenAI Whisper -> Deepgram

| Whisper | Deepgram | Notes |
|---|---|---|
| Local GPU processing | API call | No GPU needed |
| whisper.transcribe(audio) | listen.prerecorded.transcribeFile() | Similar interface |
| model='large-v3' | model: 'nova-3' | 10-100x faster |
| language='en' | language: 'en' | Same format |
| No diarization | diarize: true | Deepgram advantage |
| No streaming | listen.live() | Deepgram advantage |
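Whisper users often rely on automatic language identification; Deepgram exposes this as a request flag. A short sketch, assuming the prerecorded API's detect_language option (verify the flag and the detected_language field against current Deepgram docs; the filename is a placeholder):

import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);
const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
  readFileSync('meeting.m4a'),
  { model: 'nova-3', smart_format: true, detect_language: true }  // replaces Whisper's implicit language ID
);
if (!error) console.log(result?.results.channels[0].detected_language);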

Instructions

Step 1: Adapter Pattern

interface TranscriptionResult {
  transcript: string;
  confidence: number;
  words: Array<{ word: string; start: number; end: number; speaker?: number }>;
  duration: number;
  provider: string;
}

interface TranscriptionAdapter {
  transcribeUrl(url: string, options?: any): Promise<TranscriptionResult>;
  transcribeFile(path: string, options?: any): Promise<TranscriptionResult>;
  name: string;
}
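Call sites then depend only on the interface, so the provider can be swapped without touching them. A short illustrative example (processRecording is a hypothetical helper, not part of the skill):

// Works identically with DeepgramAdapter, AWSTranscribeAdapter, or any future provider
async function processRecording(adapter: TranscriptionAdapter, url: string): Promise<string> {
  const result = await adapter.transcribeUrl(url, { diarize: true });
  console.log(`[${result.provider}] ${result.words.length} words in ${result.duration}s`);
  return result.transcript;
}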

Step 2: Deepgram Adapter

import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

class DeepgramAdapter implements TranscriptionAdapter {
  name = 'deepgram';
  private client: ReturnType<typeof createClient>;

  constructor(apiKey: string) {
    this.client = createClient(apiKey);
  }

  async transcribeUrl(url: string, options: any = {}): Promise<TranscriptionResult> {
    const { result, error } = await this.client.listen.prerecorded.transcribeUrl(
      { url },
      {
        model: options.model ?? 'nova-3',
        smart_format: true,
        diarize: options.diarize ?? false,
        language: options.language ?? 'en',
        keywords: options.keywords,
        redact: options.redact,
      }
    );
    if (error) throw new Error(`Deepgram: ${error.message}`);
    return this.normalize(result);
  }

  async transcribeFile(path: string, options: any = {}): Promise<TranscriptionResult> {
    const audio = readFileSync(path);
    const { result, error } = await this.client.listen.prerecorded.transcribeFile(
      audio,
      {
        model: options.model ?? 'nova-3',
        smart_format: true,
        diarize: options.diarize ?? false,
      }
    );
    if (error) throw new Error(`Deepgram: ${error.message}`);
    return this.normalize(result);
  }

  private normalize(result: any): TranscriptionResult {
    const alt = result.results.channels[0].alternatives[0];
    return {
      transcript: alt.transcript,
      confidence: alt.confidence,
      words: (alt.words ?? []).map((w: any) => ({
        word: w.punctuated_word ?? w.word,
        start: w.start,
        end: w.end,
        speaker: w.speaker,
      })),
      duration: result.metadata.duration,
      provider: 'deepgram',
    };
  }
}
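Usage mirrors the legacy call sites; the keyword and redaction options correspond to the feature mapping above (the URL and keyword are placeholders):

const deepgram = new DeepgramAdapter(process.env.DEEPGRAM_API_KEY!);
const result = await deepgram.transcribeUrl('https://example.com/support-call.wav', {
  diarize: true,
  keywords: ['Deepgram:1.5'],   // inline boosting replaces VocabularyName
  redact: ['pci', 'ssn'],       // replaces ContentRedactionType: 'PII'
});
console.log(result.transcript);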

Step 3: AWS Transcribe Adapter (Legacy)

// Legacy adapter — wraps existing AWS Transcribe code for parallel running
import { TranscribeClient, StartTranscriptionJobCommand, GetTranscriptionJobCommand }
  from '@aws-sdk/client-transcribe';

class AWSTranscribeAdapter implements TranscriptionAdapter {
  name = 'aws-transcribe';
  private client: TranscribeClient;

  constructor() {
    this.client = new TranscribeClient({});
  }

  async transcribeUrl(url: string, options: any = {}): Promise<TranscriptionResult> {
    const jobName = `migration-${Date.now()}`;

    await this.client.send(new StartTranscriptionJobCommand({
      TranscriptionJobName: jobName,
      LanguageCode: options.language ?? 'en-US',
      Media: { MediaFileUri: url },
      Settings: {
        ShowSpeakerLabels: options.diarize ?? false,
        MaxSpeakerLabels: options.diarize ? 10 : undefined,
      },
    }));

    // Poll for completion (AWS is async-only)
    let job;
    do {
      await new Promise(r => setTimeout(r, 5000));
      const result = await this.client.send(new GetTranscriptionJobCommand({
        TranscriptionJobName: jobName,
      }));
      job = result.TranscriptionJob;
    } while (job?.TranscriptionJobStatus === 'QUEUED' || job?.TranscriptionJobStatus === 'IN_PROGRESS');

    if (job?.TranscriptionJobStatus !== 'COMPLETED') {
      throw new Error(`AWS Transcribe failed: ${job?.FailureReason}`);
    }

    // Fetch and normalize result
    const response = await fetch(job.Transcript!.TranscriptFileUri!);
    const data = await response.json();

    return {
      transcript: data.results.transcripts[0].transcript,
      confidence: 0, // AWS doesn't provide overall confidence
      words: data.results.items
        .filter((i: any) => i.type === 'pronunciation')
        .map((i: any) => ({
          word: i.alternatives[0].content,
          start: parseFloat(i.start_time),
          end: parseFloat(i.end_time),
          speaker: i.speaker_label ? parseInt(i.speaker_label.replace('spk_', '')) : undefined,
        })),
      duration: 0,
      provider: 'aws-transcribe',
    };
  }

  async transcribeFile(path: string): Promise<TranscriptionResult> {
    throw new Error('Upload to S3 first, then use transcribeUrl');
  }
}
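If local files still need the legacy path during parallel running, transcribeFile can be backed by an S3 staging upload before calling transcribeUrl. A sketch, assuming @aws-sdk/client-s3 and a staging bucket you control (the bucket name is a placeholder):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';
import { basename } from 'path';

// Hypothetical helper: stage a local file in S3 so AWSTranscribeAdapter.transcribeUrl can consume it
async function uploadForTranscription(path: string, bucket = 'my-transcribe-staging'): Promise<string> {
  const key = `migration/${Date.now()}-${basename(path)}`;
  await new S3Client({}).send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: readFileSync(path) }));
  return `s3://${bucket}/${key}`;  // Transcribe accepts s3:// URIs as MediaFileUri
}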

Step 4: Migration Router with Traffic Shifting

class MigrationRouter {
  private adapters: Map<string, TranscriptionAdapter> = new Map();
  private deepgramPercent: number;

  constructor(deepgramPercent = 0) {
    this.deepgramPercent = deepgramPercent;
  }

  register(adapter: TranscriptionAdapter) {
    this.adapters.set(adapter.name, adapter);
  }

  setDeepgramPercent(percent: number) {
    this.deepgramPercent = Math.max(0, Math.min(100, percent));
    console.log(`Traffic split: ${this.deepgramPercent}% Deepgram, ${100 - this.deepgramPercent}% legacy`);
  }

  async transcribe(url: string, options: any = {}): Promise<TranscriptionResult> {
    const useDeepgram = Math.random() * 100 < this.deepgramPercent;
    const primary = useDeepgram ? 'deepgram' : this.getLegacyName();
    const adapter = this.adapters.get(primary);

    if (!adapter) throw new Error(`Adapter not found: ${primary}`);

    const start = Date.now();
    const result = await adapter.transcribeUrl(url, options);
    const elapsed = Date.now() - start;

    console.log(`[${primary}] ${elapsed}ms, confidence: ${result.confidence.toFixed(3)}`);
    return result;
  }

  private getLegacyName(): string {
    for (const [name] of this.adapters) {
      if (name !== 'deepgram') return name;
    }
    throw new Error('No legacy adapter registered');
  }
}

// Migration rollout:
const router = new MigrationRouter(0);
router.register(new AWSTranscribeAdapter());
router.register(new DeepgramAdapter(process.env.DEEPGRAM_API_KEY!));

// Week 1: 5% to Deepgram
router.setDeepgramPercent(5);
// Week 2: 25%
router.setDeepgramPercent(25);
// Week 3: 50%
router.setDeepgramPercent(50);
// Week 4: 100% — migration complete
router.setDeepgramPercent(100);
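The overview also calls for automated rollback; the router's setDeepgramPercent is the lever. A minimal sketch of a guard that watches Deepgram's error rate (the 5% threshold and 50-request sample size are illustrative, not from the skill):

// Hypothetical rollback guard: call record() after every transcription attempt
class RollbackGuard {
  private errors = 0;
  private total = 0;

  constructor(
    private router: MigrationRouter,
    private maxErrorRate = 0.05,
    private minSample = 50,
  ) {}

  record(provider: string, failed: boolean) {
    if (provider !== 'deepgram') return;
    this.total++;
    if (failed) this.errors++;
    const rate = this.errors / this.total;
    if (this.total >= this.minSample && rate > this.maxErrorRate) {
      console.error(`Deepgram error rate ${(rate * 100).toFixed(1)}% exceeds threshold, rolling back to legacy`);
      this.router.setDeepgramPercent(0);
    }
  }
}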

Step 5: Parallel Running and Quality Validation

async function validateMigration(
  testAudioUrls: string[],
  legacyAdapter: TranscriptionAdapter,
  deepgramAdapter: TranscriptionAdapter,
  minSimilarity = 0.85
) {
  console.log(`Validating ${testAudioUrls.length} files (min similarity: ${minSimilarity})`);

  const results: Array<{
    url: string;
    similarity: number;
    legacyConfidence: number;
    deepgramConfidence: number;
    legacyTime: number;
    deepgramTime: number;
    pass: boolean;
  }> = [];

  for (const url of testAudioUrls) {
    const legacyStart = Date.now();
    const legacy = await legacyAdapter.transcribeUrl(url);
    const legacyTime = Date.now() - legacyStart;

    const dgStart = Date.now();
    const dg = await deepgramAdapter.transcribeUrl(url);
    const dgTime = Date.now() - dgStart;

    // Jaccard similarity
    const words1 = new Set(legacy.transcript.toLowerCase().split(/\s+/));
    const words2 = new Set(dg.transcript.toLowerCase().split(/\s+/));
    const intersection = new Set([...words1].filter(w => words2.has(w)));
    const union = new Set([...words1, ...words2]);
    const similarity = intersection.size / union.size;

    results.push({
      url: url.substring(url.lastIndexOf('/') + 1),
      similarity,
      legacyConfidence: legacy.confidence,
      deepgramConfidence: dg.confidence,
      legacyTime,
      deepgramTime,
      pass: similarity >= minSimilarity,
    });
  }

  // Report
  const passCount = results.filter(r => r.pass).length;
  console.log(`\n=== Validation Results ===`);
  for (

---

*Content truncated.*
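The report loop is cut off above; based on the fields collected in results, it would print per-file outcomes and an overall pass rate. A minimal sketch of one way it might conclude (the 90% gate is an assumed release criterion, not from the skill):

  for (const r of results) {
    const status = r.pass ? 'PASS' : 'FAIL';
    console.log(
      `${status} ${r.url}: similarity ${(r.similarity * 100).toFixed(1)}%, ` +
      `deepgram ${r.deepgramTime}ms vs legacy ${r.legacyTime}ms`
    );
  }
  console.log(`${passCount}/${results.length} files passed`);
  return passCount / results.length >= 0.9;  // assumed gate for advancing the rollout
}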
