deepgram-sdk-patterns


Apply production-ready Deepgram SDK patterns for TypeScript and Python. Use when implementing Deepgram integrations, refactoring SDK usage, or establishing team coding standards for Deepgram. Trigger with phrases like "deepgram SDK patterns", "deepgram best practices", "deepgram code patterns", "idiomatic deepgram", "deepgram typescript".

Install

mkdir -p .claude/skills/deepgram-sdk-patterns && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5334" && unzip -o skill.zip -d .claude/skills/deepgram-sdk-patterns && rm skill.zip

Installs to .claude/skills/deepgram-sdk-patterns

About this skill

Deepgram SDK Patterns

Overview

Production patterns for @deepgram/sdk (TypeScript) and deepgram-sdk (Python). Covers singleton client, typed wrappers, text-to-speech with Aura, audio intelligence pipeline, error handling, and SDK v5 migration path.

Prerequisites

  • npm install @deepgram/sdk or pip install deepgram-sdk
  • DEEPGRAM_API_KEY environment variable configured

Instructions

Step 1: Singleton Client (TypeScript)

import { createClient, DeepgramClient } from '@deepgram/sdk';

class DeepgramService {
  private static instance: DeepgramService;
  private client: DeepgramClient;

  private constructor() {
    const apiKey = process.env.DEEPGRAM_API_KEY;
    if (!apiKey) throw new Error('DEEPGRAM_API_KEY is required');
    this.client = createClient(apiKey);
  }

  static getInstance(): DeepgramService {
    if (!this.instance) this.instance = new DeepgramService();
    return this.instance;
  }

  getClient(): DeepgramClient { return this.client; }
}

export const deepgram = DeepgramService.getInstance().getClient();
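
A minimal usage sketch, assuming the class above is exported from a local deepgram-service module (the import path and URL are illustrative); any file can import the shared client without re-reading the environment:

import { deepgram } from './deepgram-service';

const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
  { url: 'https://example.com/audio.wav' },  // placeholder URL
  { model: 'nova-3', smart_format: true }
);
if (error) throw error;
console.log(result.results.channels[0].alternatives[0].transcript);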

Step 2: Text-to-Speech with Aura

import { createClient } from '@deepgram/sdk';
import { writeFileSync } from 'fs';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

async function textToSpeech(text: string, outputPath: string) {
  const response = await deepgram.speak.request(
    { text },
    {
      model: 'aura-2-thalia-en',  // Female English voice
      encoding: 'linear16',
      container: 'wav',
      sample_rate: 24000,
    }
  );

  const stream = await response.getStream();
  if (!stream) throw new Error('No audio stream returned');

  // Collect stream into buffer
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }

  const buffer = Buffer.concat(chunks);
  writeFileSync(outputPath, buffer);
  console.log(`Audio saved: ${outputPath} (${buffer.length} bytes)`);
  return buffer;
}

// Aura-2 voice options:
// aura-2-thalia-en    — Female, warm
// aura-2-asteria-en   — Female, default
// aura-2-orion-en     — Male, deep
// aura-2-luna-en      — Female, soft
// aura-2-helios-en    — Male, authoritative
// aura-asteria-en     — Aura v1 fallback
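
A usage sketch for the helper above (the text and output path are placeholders); to change the voice, swap the model option for any ID in the list:

await textToSpeech('Welcome to the demo.', './welcome.wav');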

Step 3: Audio Intelligence Pipeline

async function analyzeConversation(audioUrl: string) {
  const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
    { url: audioUrl },
    {
      model: 'nova-3',
      smart_format: true,
      diarize: true,
      utterances: true,
      // Audio Intelligence features
      summarize: 'v2',       // Generates a short summary
      topics: true,          // Identifies key topics
      sentiment: true,       // Per-segment sentiment analysis
      intents: true,         // Identifies speaker intents
    }
  );
  if (error) throw error;

  return {
    transcript: result.results.channels[0].alternatives[0].transcript,
    summary: result.results.summary?.short,
    topics: result.results.topics?.segments?.map((s: any) => ({
      text: s.text,
      topics: s.topics.map((t: any) => t.topic),
    })),
    sentiments: result.results.sentiments?.segments?.map((s: any) => ({
      text: s.text,
      sentiment: s.sentiment,
      confidence: s.sentiment_score,
    })),
    intents: result.results.intents?.segments?.map((s: any) => ({
      text: s.text,
      intent: s.intents[0]?.intent,
      confidence: s.intents[0]?.confidence_score,
    })),
  };
}
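
Calling the pipeline looks like this (the URL is a placeholder); intelligence fields are absent when a feature is disabled, so consumers should guard accordingly:

const analysis = await analyzeConversation('https://example.com/call.wav');
console.log('Summary:', analysis.summary);
for (const s of analysis.sentiments ?? []) {
  console.log(`${s.sentiment} (${s.confidence}): ${s.text}`);
}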

Step 4: Python Production Patterns

from deepgram import DeepgramClient, PrerecordedOptions, SpeakOptions
import os

class DeepgramService:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.client = DeepgramClient(os.environ["DEEPGRAM_API_KEY"])
        return cls._instance

    def transcribe_url(self, url: str, **kwargs):
        options = PrerecordedOptions(
            model=kwargs.get("model", "nova-3"),
            smart_format=True,
            diarize=kwargs.get("diarize", False),
            summarize=kwargs.get("summarize", False),
        )
        source = {"url": url}
        return self.client.listen.rest.v("1").transcribe_url(source, options)

    def transcribe_file(self, path: str, **kwargs):
        with open(path, "rb") as f:
            source = {"buffer": f.read(), "mimetype": self._mimetype(path)}
        options = PrerecordedOptions(
            model=kwargs.get("model", "nova-3"),
            smart_format=True,
            diarize=kwargs.get("diarize", False),
        )
        return self.client.listen.rest.v("1").transcribe_file(source, options)

    def text_to_speech(self, text: str, output_path: str):
        options = SpeakOptions(model="aura-2-thalia-en", encoding="linear16")
        response = self.client.speak.rest.v("1").save(output_path, {"text": text}, options)
        return response

    @staticmethod
    def _mimetype(path: str) -> str:
        ext = path.rsplit(".", 1)[-1].lower()
        return {"wav": "audio/wav", "mp3": "audio/mpeg", "flac": "audio/flac",
                "ogg": "audio/ogg", "m4a": "audio/mp4"}.get(ext, "audio/wav")

Step 5: Typed Response Helpers

// Extract clean types from Deepgram responses
interface TranscriptWord {
  word: string;
  start: number;
  end: number;
  confidence: number;
  speaker?: number;
  punctuated_word?: string;
}

interface TranscriptResult {
  transcript: string;
  confidence: number;
  words: TranscriptWord[];
  duration: number;
  requestId: string;
}

function parseResult(result: any): TranscriptResult {
  const alt = result.results.channels[0].alternatives[0];
  return {
    transcript: alt.transcript,
    confidence: alt.confidence,
    words: alt.words ?? [],
    duration: result.metadata.duration,
    requestId: result.metadata.request_id,
  };
}
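
With the typed words in hand, a diarized transcript can be grouped by speaker; a sketch, assuming diarize: true was requested so the speaker field is populated:

const parsed = parseResult(result);

const bySpeaker = new Map<number, string[]>();
for (const w of parsed.words) {
  const speaker = w.speaker ?? 0;  // default to speaker 0 when diarization is off
  if (!bySpeaker.has(speaker)) bySpeaker.set(speaker, []);
  bySpeaker.get(speaker)!.push(w.punctuated_word ?? w.word);
}
for (const [speaker, words] of bySpeaker) {
  console.log(`Speaker ${speaker}: ${words.join(' ')}`);
}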

Step 6: SDK v5 Migration Notes

// v3/v4 (current stable):
import { createClient } from '@deepgram/sdk';
const dg = createClient(apiKey);
await dg.listen.prerecorded.transcribeUrl(source, options);
dg.listen.live(options);  // synchronous, returns a live connection (no await)
await dg.speak.request({ text }, options);

// v5 (auto-generated, Fern-based):
import { DeepgramClient } from '@deepgram/sdk';
const dg = new DeepgramClient({ apiKey });
await dg.listen.v1.media.transcribeUrl(source, options);
await dg.listen.v1.connect(options);  // async
await dg.speak.v1.audio.generate({ text }, options);

Output

  • Singleton client pattern with environment validation
  • Text-to-speech (Aura-2) with stream-to-file
  • Audio intelligence pipeline (summary, topics, sentiment, intents)
  • Python production service class
  • Typed response helpers
  • v5 migration reference

Error Handling

Error                           | Cause                | Solution
401 Unauthorized                | Invalid API key      | Check the DEEPGRAM_API_KEY value
400 Unsupported format          | Bad audio codec      | Convert to WAV/MP3/FLAC
speak.request is not a function | SDK version mismatch | Check the import; v5 uses speak.v1.audio.generate
Empty TTS response              | Empty text input     | Validate that text is non-empty before calling
summarize returns null          | Feature not enabled  | Pass summarize: 'v2' (a string, not a boolean)
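
For transient failures (rate limits, network blips), a small retry wrapper helps. A hedged sketch, assuming the thrown error exposes an HTTP status field (verify the error shape against your SDK version):

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err: any) {
      lastError = err;
      if (err?.status === 401) throw err;  // an invalid API key never recovers; fail fast
      await new Promise((r) => setTimeout(r, 2 ** i * 500));  // exponential backoff
    }
  }
  throw lastError;
}

// Usage:
const analysis = await withRetry(() => analyzeConversation('https://example.com/call.wav'));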

Next Steps

Proceed to deepgram-data-handling for transcript storage and processing patterns.
