deepgram-migration-deep-dive
Deep dive into complex Deepgram migrations and provider transitions. Use when migrating from other transcription providers, planning large-scale migrations, or implementing phased rollout strategies. Trigger with phrases like "deepgram migration", "switch to deepgram", "migrate transcription", "deepgram from AWS", "deepgram from Google".
Install
mkdir -p .claude/skills/deepgram-migration-deep-dive && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7738" && unzip -o skill.zip -d .claude/skills/deepgram-migration-deep-dive && rm skill.zip
Installs to .claude/skills/deepgram-migration-deep-dive
About this skill
Deepgram Migration Deep Dive
Current State
!npm list @deepgram/sdk 2>/dev/null | grep deepgram || echo 'Not installed'
!npm list @aws-sdk/client-transcribe 2>/dev/null | grep transcribe || echo 'AWS Transcribe SDK not found'
!pip show google-cloud-speech 2>/dev/null | grep Version || echo 'Google STT not found'
Overview
Migrate to Deepgram from AWS Transcribe, Google Cloud Speech-to-Text, Azure Cognitive Services, or OpenAI Whisper. Uses an adapter pattern with a unified interface, parallel running for quality validation, percentage-based traffic shifting, and automated rollback.
Feature Mapping
AWS Transcribe -> Deepgram
| AWS Transcribe | Deepgram | Notes |
|---|---|---|
| LanguageCode: 'en-US' | language: 'en' | ISO 639-1 (2-letter) |
| ShowSpeakerLabels: true | diarize: true | Same feature, different param |
| VocabularyName: 'custom' | keywords: ['term:1.5'] | Inline boosting, no pre-upload |
| ContentRedactionType: 'PII' | redact: ['pci', 'ssn'] | Granular PII categories |
| OutputBucketName | callback: 'https://...' | Callback URL, not S3 |
| Job polling model | Sync response or callback | No polling needed |
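As a concrete before/after, here is a minimal sketch of how a typical AWS Transcribe request maps onto the equivalent Deepgram options object. The parameter names follow the table above; the surrounding values are hypothetical.

// Hypothetical AWS Transcribe job settings (legacy side)
const awsParams = {
  LanguageCode: 'en-US',
  Settings: { ShowSpeakerLabels: true, MaxSpeakerLabels: 4 },
  ContentRedactionType: 'PII',
};

// Equivalent Deepgram options per the mapping table above
const deepgramOptions = {
  model: 'nova-3',
  language: 'en',          // ISO 639-1, not the AWS locale string
  diarize: true,           // replaces ShowSpeakerLabels
  redact: ['pci', 'ssn'],  // granular categories instead of a single PII flag
  keywords: ['acme:1.5'],  // inline boosting replaces a pre-uploaded vocabulary
};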
Google Cloud STT -> Deepgram
| Google STT | Deepgram | Notes |
|---|---|---|
| RecognitionConfig.encoding | Auto-detected | Deepgram auto-detects format |
| RecognitionConfig.sampleRateHertz | sample_rate (live only) | REST auto-detects |
| RecognitionConfig.model: 'latest_long' | model: 'nova-3' | Direct mapping |
| SpeakerDiarizationConfig | diarize: true | Simpler configuration |
| StreamingRecognize | listen.live() | WebSocket vs gRPC |
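The streaming row is the biggest structural change: a WebSocket connection instead of a gRPC stream. A minimal live-transcription sketch, assuming the v3 @deepgram/sdk event API; the audio buffer is a placeholder for real PCM chunks.

import { createClient, LiveTranscriptionEvents } from '@deepgram/sdk';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

// Replaces Google's StreamingRecognize: open a WebSocket, then push raw audio
const connection = deepgram.listen.live({
  model: 'nova-3',
  language: 'en',
  sample_rate: 16000,   // required for raw audio on live, unlike REST
  encoding: 'linear16',
});

const audioChunk = Buffer.alloc(0); // placeholder: real PCM audio goes here

connection.on(LiveTranscriptionEvents.Open, () => {
  connection.send(audioChunk);
});

connection.on(LiveTranscriptionEvents.Transcript, (data) => {
  console.log(data.channel.alternatives[0].transcript);
});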
OpenAI Whisper -> Deepgram
| Whisper | Deepgram | Notes |
|---|---|---|
| Local GPU processing | API call | No GPU needed |
| whisper.transcribe(audio) | listen.prerecorded.transcribeFile() | Similar interface |
| model='large-v3' | model: 'nova-3' | 10-100x faster |
| language='en' | language: 'en' | Same format |
| No diarization | diarize: true | Deepgram advantage |
| No streaming | listen.live() | Deepgram advantage |
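For Whisper users the translation is mostly dropping the model-size knob. A short sketch of the Deepgram call that replaces a local large-v3 run; the input file name is hypothetical.

import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

// Replaces whisper.transcribe(audio, { model: 'large-v3', language: 'en' })
const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
  readFileSync('meeting.wav'), // hypothetical input file
  {
    model: 'nova-3',
    language: 'en',
    diarize: true, // not available in Whisper
  }
);
if (error) throw error;
console.log(result?.results.channels[0].alternatives[0].transcript);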
Instructions
Step 1: Adapter Pattern
// Provider-agnostic result shape; every adapter normalizes into this
interface TranscriptionResult {
  transcript: string;
  confidence: number;
  words: Array<{ word: string; start: number; end: number; speaker?: number }>;
  duration: number;
  provider: string;
}

interface TranscriptionAdapter {
  transcribeUrl(url: string, options?: any): Promise<TranscriptionResult>;
  transcribeFile(path: string, options?: any): Promise<TranscriptionResult>;
  name: string;
}
Step 2: Deepgram Adapter
import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

class DeepgramAdapter implements TranscriptionAdapter {
  name = 'deepgram';
  private client: ReturnType<typeof createClient>;

  constructor(apiKey: string) {
    this.client = createClient(apiKey);
  }

  async transcribeUrl(url: string, options: any = {}): Promise<TranscriptionResult> {
    const { result, error } = await this.client.listen.prerecorded.transcribeUrl(
      { url },
      {
        model: options.model ?? 'nova-3',
        smart_format: true,
        diarize: options.diarize ?? false,
        language: options.language ?? 'en',
        keywords: options.keywords,
        redact: options.redact,
      }
    );
    if (error) throw new Error(`Deepgram: ${error.message}`);
    return this.normalize(result);
  }

  async transcribeFile(path: string, options: any = {}): Promise<TranscriptionResult> {
    const audio = readFileSync(path);
    const { result, error } = await this.client.listen.prerecorded.transcribeFile(
      audio,
      {
        model: options.model ?? 'nova-3',
        smart_format: true,
        diarize: options.diarize ?? false,
      }
    );
    if (error) throw new Error(`Deepgram: ${error.message}`);
    return this.normalize(result);
  }

  private normalize(result: any): TranscriptionResult {
    const alt = result.results.channels[0].alternatives[0];
    return {
      transcript: alt.transcript,
      confidence: alt.confidence,
      words: (alt.words ?? []).map((w: any) => ({
        word: w.punctuated_word ?? w.word,
        start: w.start,
        end: w.end,
        speaker: w.speaker,
      })),
      duration: result.metadata.duration,
      provider: 'deepgram',
    };
  }
}
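A quick usage check, assuming DEEPGRAM_API_KEY is set; the audio URL is a placeholder.

const adapter = new DeepgramAdapter(process.env.DEEPGRAM_API_KEY!);
const result = await adapter.transcribeUrl('https://example.com/call.wav', { diarize: true });
console.log(`${result.provider}: "${result.transcript}" (${result.confidence.toFixed(2)})`);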
Step 3: AWS Transcribe Adapter (Legacy)
// Legacy adapter — wraps existing AWS Transcribe code for parallel running
import {
  TranscribeClient,
  StartTranscriptionJobCommand,
  GetTranscriptionJobCommand,
} from '@aws-sdk/client-transcribe';

class AWSTranscribeAdapter implements TranscriptionAdapter {
  name = 'aws-transcribe';
  private client: TranscribeClient;

  constructor() {
    this.client = new TranscribeClient({});
  }

  async transcribeUrl(url: string, options: any = {}): Promise<TranscriptionResult> {
    const jobName = `migration-${Date.now()}`;
    await this.client.send(new StartTranscriptionJobCommand({
      TranscriptionJobName: jobName,
      LanguageCode: options.language ?? 'en-US',
      Media: { MediaFileUri: url },
      Settings: {
        ShowSpeakerLabels: options.diarize ?? false,
        MaxSpeakerLabels: options.diarize ? 10 : undefined,
      },
    }));

    // Poll for completion (AWS is async-only); jobs start as QUEUED before IN_PROGRESS
    let job;
    do {
      await new Promise(r => setTimeout(r, 5000));
      const result = await this.client.send(new GetTranscriptionJobCommand({
        TranscriptionJobName: jobName,
      }));
      job = result.TranscriptionJob;
    } while (
      job?.TranscriptionJobStatus === 'QUEUED' ||
      job?.TranscriptionJobStatus === 'IN_PROGRESS'
    );

    if (job?.TranscriptionJobStatus !== 'COMPLETED') {
      throw new Error(`AWS Transcribe failed: ${job?.FailureReason}`);
    }

    // Fetch and normalize the transcript JSON from the result URI
    const response = await fetch(job.Transcript!.TranscriptFileUri!);
    const data = await response.json();
    return {
      transcript: data.results.transcripts[0].transcript,
      confidence: 0, // AWS doesn't provide overall confidence
      words: data.results.items
        .filter((i: any) => i.type === 'pronunciation')
        .map((i: any) => ({
          word: i.alternatives[0].content,
          start: parseFloat(i.start_time),
          end: parseFloat(i.end_time),
          speaker: i.speaker_label ? parseInt(i.speaker_label.replace('spk_', '')) : undefined,
        })),
      duration: 0, // not reported in the transcript JSON; derive from word timings if needed
      provider: 'aws-transcribe',
    };
  }

  async transcribeFile(path: string): Promise<TranscriptionResult> {
    throw new Error('Upload to S3 first, then use transcribeUrl');
  }
}
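Because AWS Transcribe only reads from S3, one way to close the transcribeFile gap is to stage the file first. A hedged helper sketch, assuming @aws-sdk/client-s3 and an existing bucket; the bucket name is a placeholder.

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';
import { basename } from 'path';

// Upload local audio to S3, then reuse the URL-based path above
async function transcribeLocalFile(adapter: AWSTranscribeAdapter, path: string) {
  const bucket = 'my-transcribe-staging'; // placeholder bucket
  const key = `migration/${Date.now()}-${basename(path)}`;
  const s3 = new S3Client({});
  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: readFileSync(path) }));
  return adapter.transcribeUrl(`s3://${bucket}/${key}`);
}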
Step 4: Migration Router with Traffic Shifting
class MigrationRouter {
  private adapters: Map<string, TranscriptionAdapter> = new Map();
  private deepgramPercent: number;

  constructor(deepgramPercent = 0) {
    this.deepgramPercent = deepgramPercent;
  }

  register(adapter: TranscriptionAdapter) {
    this.adapters.set(adapter.name, adapter);
  }

  setDeepgramPercent(percent: number) {
    this.deepgramPercent = Math.max(0, Math.min(100, percent));
    console.log(`Traffic split: ${this.deepgramPercent}% Deepgram, ${100 - this.deepgramPercent}% legacy`);
  }

  async transcribe(url: string, options: any = {}): Promise<TranscriptionResult> {
    const useDeepgram = Math.random() * 100 < this.deepgramPercent;
    const primary = useDeepgram ? 'deepgram' : this.getLegacyName();
    const adapter = this.adapters.get(primary);
    if (!adapter) throw new Error(`Adapter not found: ${primary}`);

    const start = Date.now();
    const result = await adapter.transcribeUrl(url, options);
    const elapsed = Date.now() - start;
    console.log(`[${primary}] ${elapsed}ms, confidence: ${result.confidence.toFixed(3)}`);
    return result;
  }

  private getLegacyName(): string {
    for (const [name] of this.adapters) {
      if (name !== 'deepgram') return name;
    }
    throw new Error('No legacy adapter registered');
  }
}

// Migration rollout:
const router = new MigrationRouter(0);
router.register(new AWSTranscribeAdapter());
router.register(new DeepgramAdapter(process.env.DEEPGRAM_API_KEY!));

// Week 1: 5% to Deepgram
router.setDeepgramPercent(5);
// Week 2: 25%
router.setDeepgramPercent(25);
// Week 3: 50%
router.setDeepgramPercent(50);
// Week 4: 100% — migration complete
router.setDeepgramPercent(100);
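The overview promises automated rollback, but the router above never shifts traffic back on its own. Below is a minimal sketch of one approach: drop back to the legacy provider when the error rate over a sliding window crosses a threshold. The window size and 5% threshold are illustrative, not from the original; note it tracks failures for the whole split rather than per provider.

// Wraps MigrationRouter.transcribe with error-rate-based rollback
class RollbackGuard {
  private outcomes: boolean[] = []; // sliding window, true = success

  constructor(
    private router: MigrationRouter,
    private windowSize = 50,
    private maxErrorRate = 0.05,
  ) {}

  async transcribe(url: string, options: any = {}): Promise<TranscriptionResult> {
    try {
      const result = await this.router.transcribe(url, options);
      this.record(true);
      return result;
    } catch (err) {
      this.record(false);
      throw err;
    }
  }

  private record(ok: boolean) {
    this.outcomes.push(ok);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
    const errorRate = this.outcomes.filter(o => !o).length / this.outcomes.length;
    if (this.outcomes.length === this.windowSize && errorRate > this.maxErrorRate) {
      console.error(`Error rate ${(errorRate * 100).toFixed(1)}%, rolling back to legacy`);
      this.router.setDeepgramPercent(0); // automated rollback
      this.outcomes = [];
    }
  }
}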
Step 5: Parallel Running and Quality Validation
async function validateMigration(
  testAudioUrls: string[],
  legacyAdapter: TranscriptionAdapter,
  deepgramAdapter: TranscriptionAdapter,
  minSimilarity = 0.85
) {
  console.log(`Validating ${testAudioUrls.length} files (min similarity: ${minSimilarity})`);
  const results: Array<{
    url: string;
    similarity: number;
    legacyConfidence: number;
    deepgramConfidence: number;
    legacyTime: number;
    deepgramTime: number;
    pass: boolean;
  }> = [];

  for (const url of testAudioUrls) {
    const legacyStart = Date.now();
    const legacy = await legacyAdapter.transcribeUrl(url);
    const legacyTime = Date.now() - legacyStart;

    const dgStart = Date.now();
    const dg = await deepgramAdapter.transcribeUrl(url);
    const dgTime = Date.now() - dgStart;

    // Jaccard similarity
    const words1 = new Set(legacy.transcript.toLowerCase().split(/\s+/));
    const words2 = new Set(dg.transcript.toLowerCase().split(/\s+/));
    const intersection = new Set([...words1].filter(w => words2.has(w)));
    const union = new Set([...words1, ...words2]);
    const similarity = intersection.size / union.size;

    results.push({
      url: url.substring(url.lastIndexOf('/') + 1),
      similarity,
      legacyConfidence: legacy.confidence,
      deepgramConfidence: dg.confidence,
      legacyTime,
      deepgramTime,
      pass: similarity >= minSimilarity,
    });
  }

  // Report
  const passCount = results.filter(r => r.pass).length;
  console.log(`\n=== Validation Results ===`);
  for (const r of results) {
    const status = r.pass ? 'PASS' : 'FAIL';
    console.log(
      `[${status}] ${r.url}: similarity ${r.similarity.toFixed(3)}, ` +
      `legacy ${r.legacyTime}ms vs deepgram ${r.deepgramTime}ms`
    );
  }
  console.log(`${passCount}/${results.length} files passed`);
  return results;
}
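A sketch of wiring validation into the rollout decision; the sample URLs are placeholders.

const legacy = new AWSTranscribeAdapter();
const deepgram = new DeepgramAdapter(process.env.DEEPGRAM_API_KEY!);

// Run the parallel comparison on a held-out sample before shifting traffic
const report = await validateMigration(
  ['https://example.com/sample-1.wav', 'https://example.com/sample-2.wav'], // placeholders
  legacy,
  deepgram,
);

// Only begin the traffic shift if every file clears the similarity bar
if (report.every(r => r.pass)) {
  router.setDeepgramPercent(5);
}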