deepgram-ci-integration
Configure Deepgram CI/CD integration for automated testing and deployment. Use when setting up continuous integration pipelines, automated testing, or deployment workflows for Deepgram integrations. Trigger with phrases like "deepgram CI", "deepgram CD", "deepgram pipeline", "deepgram github actions", "deepgram automated testing".
Install
```shell
mkdir -p .claude/skills/deepgram-ci-integration && \
  curl -L -o skill.zip "https://mcp.directory/api/skills/download/9297" && \
  unzip -o skill.zip -d .claude/skills/deepgram-ci-integration && \
  rm skill.zip
```
Installs to `.claude/skills/deepgram-ci-integration`.
About this skill
Deepgram CI Integration
Overview
Set up CI/CD pipelines for Deepgram integrations with GitHub Actions. Includes unit tests with mocked SDK, integration tests against the real API, smoke tests, automated key rotation, and deployment gates.
Prerequisites
- GitHub repository with Actions enabled
- `DEEPGRAM_API_KEY` stored as a repository secret
- `@deepgram/sdk` and `vitest` installed
- Test fixtures committed (or downloaded in CI)
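The repository secret can be created from the GitHub CLI, assuming `gh` is installed and authenticated against the repository (the key value shown is a placeholder):

```shell
# Store the Deepgram key as an Actions secret for this repository
gh secret set DEEPGRAM_API_KEY --body "dg_your_key_here"

# Confirm it exists; secret values are never displayed back
gh secret list
```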
Instructions
Step 1: GitHub Actions Workflow
```yaml
# .github/workflows/deepgram-ci.yml
name: Deepgram CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20'

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      # Unit tests use a mocked SDK; no API key needed
      - run: npm test -- --reporter=verbose

  integration-tests:
    runs-on: ubuntu-latest
    needs: unit-tests
    # Skip on fork PRs, which cannot access repository secrets
    if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci
      - run: npm run test:integration
        env:
          DEEPGRAM_API_KEY: ${{ secrets.DEEPGRAM_API_KEY }}
        timeout-minutes: 5

  smoke-test:
    runs-on: ubuntu-latest
    needs: integration-tests
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci && npm run build
      - name: Smoke test
        run: npx tsx scripts/smoke-test.ts
        env:
          DEEPGRAM_API_KEY: ${{ secrets.DEEPGRAM_API_KEY }}
        timeout-minutes: 2
```
Step 2: Integration Test Suite
```typescript
// tests/integration/deepgram.test.ts
import { describe, it, expect, beforeAll } from 'vitest';
import { createClient, DeepgramClient } from '@deepgram/sdk';

const SAMPLE_URL = 'https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav';

describe('Deepgram Integration', () => {
  let client: DeepgramClient;

  beforeAll(() => {
    const key = process.env.DEEPGRAM_API_KEY;
    if (!key) throw new Error('DEEPGRAM_API_KEY required for integration tests');
    client = createClient(key);
  });

  it('authenticates successfully', async () => {
    const { result, error } = await client.manage.getProjects();
    expect(error).toBeNull();
    expect(result.projects.length).toBeGreaterThan(0);
  });

  it('transcribes pre-recorded audio with Nova-3', async () => {
    const { result, error } = await client.listen.prerecorded.transcribeUrl(
      { url: SAMPLE_URL },
      { model: 'nova-3', smart_format: true }
    );
    expect(error).toBeNull();
    const alt = result.results.channels[0].alternatives[0];
    expect(alt.transcript).toContain('Life');
    expect(alt.confidence).toBeGreaterThan(0.85);
  }, 30000);

  it('returns word-level timing', async () => {
    const { result } = await client.listen.prerecorded.transcribeUrl(
      { url: SAMPLE_URL },
      { model: 'nova-3' }
    );
    const words = result.results.channels[0].alternatives[0].words;
    expect(words).toBeDefined();
    expect(words!.length).toBeGreaterThan(0);
    expect(words![0]).toHaveProperty('start');
    expect(words![0]).toHaveProperty('end');
    expect(words![0]).toHaveProperty('confidence');
  }, 30000);

  it('speaker diarization identifies speakers', async () => {
    const { result } = await client.listen.prerecorded.transcribeUrl(
      { url: SAMPLE_URL },
      { model: 'nova-3', diarize: true }
    );
    const words = result.results.channels[0].alternatives[0].words;
    expect(words?.some((w: any) => w.speaker !== undefined)).toBe(true);
  }, 30000);

  it('TTS generates audio stream', async () => {
    const response = await client.speak.request(
      { text: 'CI test.' },
      { model: 'aura-2-thalia-en', encoding: 'linear16', container: 'wav' }
    );
    const stream = await response.getStream();
    expect(stream).toBeTruthy();
  }, 15000);
});
```
Step 3: Smoke Test Script
```typescript
// scripts/smoke-test.ts
import { createClient } from '@deepgram/sdk';

const SAMPLE_URL = 'https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav';

async function smokeTest() {
  console.log('Deepgram Smoke Test');
  console.log('='.repeat(40));
  const client = createClient(process.env.DEEPGRAM_API_KEY!);
  let passed = 0;
  let failed = 0;

  // Test 1: Authentication
  try {
    const { error } = await client.manage.getProjects();
    if (error) throw error;
    console.log('[PASS] Authentication');
    passed++;
  } catch (err: any) {
    console.error(`[FAIL] Authentication: ${err.message}`);
    failed++;
  }

  // Test 2: Pre-recorded transcription
  try {
    const { result, error } = await client.listen.prerecorded.transcribeUrl(
      { url: SAMPLE_URL },
      { model: 'nova-3', smart_format: true }
    );
    if (error) throw error;
    if (!result.results.channels[0].alternatives[0].transcript) {
      throw new Error('Empty transcript');
    }
    console.log('[PASS] Pre-recorded transcription');
    passed++;
  } catch (err: any) {
    console.error(`[FAIL] Pre-recorded transcription: ${err.message}`);
    failed++;
  }

  // Test 3: TTS
  try {
    const response = await client.speak.request(
      { text: 'Smoke test.' },
      { model: 'aura-2-thalia-en' }
    );
    const stream = await response.getStream();
    if (!stream) throw new Error('No audio stream');
    console.log('[PASS] Text-to-speech');
    passed++;
  } catch (err: any) {
    console.error(`[FAIL] Text-to-speech: ${err.message}`);
    failed++;
  }

  console.log(`\nResults: ${passed} passed, ${failed} failed`);
  process.exit(failed > 0 ? 1 : 0);
}

smokeTest();
```
Step 4: Package.json Scripts
```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest --watch",
    "test:integration": "VITEST_INTEGRATION=1 vitest run tests/integration/",
    "test:smoke": "tsx scripts/smoke-test.ts",
    "lint": "eslint src/ tests/",
    "typecheck": "tsc --noEmit"
  }
}
```

Note: `test:integration` sets `VITEST_INTEGRATION=1` because vitest applies the config's `exclude` even when a path filter is passed on the command line; the env var lets the config lift the exclusion for integration runs (see Step 5).
Step 5: Vitest Configuration
```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['tests/**/*.test.ts'],
    // Integration tests are excluded by default. A CLI path filter alone does
    // not override `exclude`, so `test:integration` sets VITEST_INTEGRATION=1
    // to lift the exclusion.
    exclude: process.env.VITEST_INTEGRATION ? [] : ['tests/integration/**'],
    coverage: {
      include: ['src/**'],
      reporter: ['text', 'lcov'],
    },
  },
});
```
Step 6: Automated Key Rotation (Scheduled)
```yaml
# .github/workflows/rotate-deepgram-key.yml
name: Rotate Deepgram API Key

on:
  schedule:
    - cron: '0 0 1 */3 *' # Quarterly (1st of every 3rd month)
  workflow_dispatch:

jobs:
  rotate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Rotate key
        run: |
          NEW_KEY=$(npx tsx scripts/rotate-key.ts)
          gh secret set DEEPGRAM_API_KEY --body "$NEW_KEY"
          echo "Key rotated successfully"
        env:
          DEEPGRAM_ADMIN_KEY: ${{ secrets.DEEPGRAM_ADMIN_KEY }}
          DEEPGRAM_PROJECT_ID: ${{ secrets.DEEPGRAM_PROJECT_ID }}
          # The default GITHUB_TOKEN cannot write Actions secrets; use a PAT
          # with repo admin scope stored as a separate secret.
          GH_TOKEN: ${{ secrets.GH_ADMIN_TOKEN }}
```
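The workflow calls `scripts/rotate-key.ts`, which is not shown above. A minimal sketch follows, using `fetch` against Deepgram's management REST endpoint for creating project keys; the endpoint path, `Token` auth scheme, and `key` response field follow Deepgram's documented Manage API, but verify them against current docs before relying on this. The `member` scope is an assumption here; narrow it to what your CI actually needs. Cleanup of previously rotated keys (findable via the comment prefix) is left out.

```typescript
// scripts/rotate-key.ts (sketch): create a fresh project key and print it
// so the workflow can capture it with `gh secret set`.

function keyComment(now: Date): string {
  // Tag rotated keys so stale ones are easy to find and delete later
  return `ci-rotated-${now.toISOString().slice(0, 10)}`;
}

async function rotate(): Promise<string> {
  const adminKey = process.env.DEEPGRAM_ADMIN_KEY;
  const projectId = process.env.DEEPGRAM_PROJECT_ID;
  if (!adminKey || !projectId) {
    throw new Error('DEEPGRAM_ADMIN_KEY and DEEPGRAM_PROJECT_ID required');
  }

  const res = await fetch(`https://api.deepgram.com/v1/projects/${projectId}/keys`, {
    method: 'POST',
    headers: { Authorization: `Token ${adminKey}`, 'Content-Type': 'application/json' },
    // 'member' is a broad non-admin scope; narrow as appropriate
    body: JSON.stringify({ comment: keyComment(new Date()), scopes: ['member'] }),
  });
  if (!res.ok) throw new Error(`Key creation failed: ${res.status}`);
  const body = (await res.json()) as { key: string };
  return body.key; // the secret value is only returned once, at creation time
}

// Only execute when credentials are present, so importing this module is safe
if (process.env.DEEPGRAM_ADMIN_KEY) {
  rotate()
    .then((key) => process.stdout.write(key))
    .catch((err) => {
      console.error(err.message);
      process.exit(1);
    });
}
```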
Output
- GitHub Actions workflow (unit -> integration -> smoke)
- Integration test suite covering STT, diarization, TTS
- Smoke test script with pass/fail exit codes
- Vitest configuration with integration test separation
- Quarterly key rotation workflow
Error Handling
| Issue | Cause | Resolution |
|---|---|---|
| Integration tests 401 | Secret not set or expired | Rotate DEEPGRAM_API_KEY secret |
| Smoke test timeout | API latency | Increase timeout-minutes |
| Tests pass locally, fail in CI | Missing env var | Check secrets are set in repo settings |
| Fork PRs can't access secrets | GitHub security | Skip integration tests on fork PRs |
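For the API-latency row, raising `timeout-minutes` is the blunt fix; a retry wrapper around individual smoke-test calls often resolves transient failures faster. A self-contained sketch (`withRetry` is a hypothetical helper, not part of the SDK):

```typescript
// Retry a flaky async call with exponential backoff. For simplicity every
// error is retried; a production version would retry only transient classes
// (timeouts, 429, 5xx).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff: 500ms, 1s, 2s, ...
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```

In the smoke test this would wrap each call, e.g. `await withRetry(() => client.manage.getProjects())`.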