fireflies-performance-tuning
Optimize Fireflies.ai API performance with caching, batching, and connection pooling. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Fireflies.ai integrations. Trigger with phrases like "fireflies performance", "optimize fireflies", "fireflies latency", "fireflies caching", "fireflies slow", "fireflies batch".
Install
mkdir -p .claude/skills/fireflies-performance-tuning && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7429" && unzip -o skill.zip -d .claude/skills/fireflies-performance-tuning && rm skill.zip
Installs to .claude/skills/fireflies-performance-tuning
Fireflies.ai Performance Tuning
Overview
Optimize Fireflies.ai GraphQL API performance. The biggest wins: request only needed fields (transcripts with sentences can be very large), cache immutable transcripts, and batch operations within rate limits.
Prerequisites
- FIREFLIES_API_KEY configured
- Understanding of your access pattern (list vs detail, frequency)
- Optional: Redis or an LRU cache library
Instructions
Step 1: Field Selection -- The Biggest Win
Transcript responses with sentences can be enormous. Always request the minimum fields needed.
```typescript
// BAD: fetching everything when you only need titles
const HEAVY = `{ transcripts(limit: 50) {
  id title date duration sentences { text speaker_name start_time end_time }
  summary { overview action_items keywords outline bullet_gist }
  analytics { speakers { name duration word_count } }
} }`;

// GOOD: light query for listing
const LIGHT = `{ transcripts(limit: 50) {
  id title date duration organizer_email
} }`;

// GOOD: full query only when drilling into a specific transcript
const DETAIL = `query($id: String!) { transcript(id: $id) {
  id title
  sentences { speaker_name text start_time end_time }
  summary { overview action_items keywords }
} }`;
```
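The later snippets call a `firefliesQuery` helper that is not shown and is not part of any official SDK. Here is a minimal sketch of what it could look like, assuming the Fireflies GraphQL endpoint `https://api.fireflies.ai/graphql` and a bearer token read from `FIREFLIES_API_KEY`; adapt it to your own HTTP client and error handling.

```typescript
// Minimal GraphQL POST helper (a sketch, not an official client)
const FIREFLIES_ENDPOINT = "https://api.fireflies.ai/graphql";

function buildHeaders(apiKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
}

async function firefliesQuery(
  query: string,
  variables: Record<string, unknown> = {}
) {
  const res = await fetch(FIREFLIES_ENDPOINT, {
    method: "POST",
    headers: buildHeaders(process.env.FIREFLIES_API_KEY!),
    body: JSON.stringify({ query, variables }),
  });
  if (!res.ok) throw new Error(`Fireflies API error: HTTP ${res.status}`);
  const json: any = await res.json();
  // GraphQL reports errors in the body even on HTTP 200
  if (json.errors?.length) throw new Error(json.errors[0].message);
  return json.data;
}
```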
Step 2: Cache Transcripts (They Are Immutable)
Once a transcript is processed, its content never changes. Cache aggressively.
```typescript
import { LRUCache } from "lru-cache";

const transcriptCache = new LRUCache<string, any>({
  max: 500,
  ttl: 1000 * 60 * 60, // 1 hour -- transcripts are immutable
});

async function getCachedTranscript(id: string) {
  const cached = transcriptCache.get(id);
  if (cached) return cached;
  const data = await firefliesQuery(`
    query($id: String!) {
      transcript(id: $id) {
        id title date duration
        speakers { name }
        sentences { speaker_name text start_time end_time }
        summary { overview action_items keywords }
      }
    }
  `, { id });
  transcriptCache.set(id, data.transcript);
  return data.transcript;
}
```
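To confirm the cache is actually paying off, it helps to track the hit rate. A minimal sketch (the counter names are illustrative, not part of lru-cache):

```typescript
// Simple cache hit-rate counter; call recordLookup on every cache check
const cacheStats = { hits: 0, misses: 0 };

function recordLookup(hit: boolean): void {
  if (hit) cacheStats.hits++;
  else cacheStats.misses++;
}

function hitRate(): number {
  const total = cacheStats.hits + cacheStats.misses;
  return total === 0 ? 0 : cacheStats.hits / total;
}
```

A hit rate well below ~50% on detail views usually means the TTL is too short or the working set exceeds `max`.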
Step 3: Redis Cache for Multi-Instance Deployments
```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);
const CACHE_TTL = 3600; // 1 hour in seconds

async function getTranscriptCached(id: string) {
  const cacheKey = `fireflies:transcript:${id}`;
  // Check cache first
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);
  // Fetch from API
  const data = await firefliesQuery(`
    query($id: String!) {
      transcript(id: $id) {
        id title date duration
        sentences { speaker_name text start_time end_time }
        summary { overview action_items keywords }
      }
    }
  `, { id });
  // Cache the result
  await redis.set(cacheKey, JSON.stringify(data.transcript), "EX", CACHE_TTL);
  return data.transcript;
}
```
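The LRU and Redis versions repeat the same read-through pattern. It can be factored into a generic helper; this is a sketch where `RedisLike` mirrors only the two ioredis calls used above (`get`, and `set` with the `"EX"` TTL argument):

```typescript
// Generic read-through cache over a minimal Redis-shaped interface
interface RedisLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, mode: "EX", ttl: number): Promise<unknown>;
}

async function getOrSet<T>(
  cache: RedisLike,
  key: string,
  ttlSeconds: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit) as T;
  const value = await fetcher();
  await cache.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}
```

With this, `getTranscriptCached` reduces to one call: `getOrSet(redis, key, CACHE_TTL, () => fetchTranscript(id))`.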
Step 4: Batch Processing with Rate Limit Awareness
```typescript
import PQueue from "p-queue";

// Business plan: 60 req/min. Safe rate: ~1 req/sec with headroom.
const queue = new PQueue({
  concurrency: 1,
  interval: 1100,
  intervalCap: 1,
});

async function batchFetchTranscripts(ids: string[]) {
  // Count hits before fetching -- afterwards every id would be in the cache
  const cacheHits = ids.filter(id => transcriptCache.has(id)).length;
  console.log(`Fetching ${ids.length} transcripts (${cacheHits} cached, rate-limited)...`);
  const results = await Promise.all(
    ids.map(id => queue.add(() => getCachedTranscript(id)))
  );
  console.log(`Done: ${results.length} transcripts.`);
  return results;
}
```
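The queue keeps you under the limit, but a 429 can still slip through (for example, another process sharing the same API key). A retry wrapper with exponential backoff is a common complement; this is a sketch, and the error shape it inspects is an assumption, not a documented Fireflies error format:

```typescript
// Exponential backoff delay: 1s, 2s, 4s, 8s... capped at 30s
function backoffMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Heuristic: treat an error mentioning 429 as a rate limit
      const isRateLimit =
        err?.status === 429 || /429/.test(String(err?.message ?? ""));
      if (!isRateLimit || attempt >= maxRetries) throw err;
      await new Promise(r => setTimeout(r, backoffMs(attempt)));
    }
  }
}
```

Usage: `queue.add(() => withRetry(() => getCachedTranscript(id)))`.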
Step 5: Warm Cache on Webhook Events
```typescript
// When a transcript finishes processing, pre-cache it immediately
async function onWebhookEvent(event: { meetingId: string; eventType: string }) {
  if (event.eventType === "Transcription completed") {
    // Pre-warm the cache so future reads are instant
    await getCachedTranscript(event.meetingId);
    console.log(`Pre-cached transcript: ${event.meetingId}`);
  }
}
```
Step 6: Pagination for Large Result Sets
```typescript
async function getAllTranscripts(batchSize = 50) {
  const allTranscripts: any[] = [];
  let hasMore = true;
  let skip = 0;
  while (hasMore) {
    const data = await firefliesQuery(`
      query($limit: Int, $skip: Int) {
        transcripts(limit: $limit, skip: $skip) {
          id title date duration
        }
      }
    `, { limit: batchSize, skip });
    allTranscripts.push(...data.transcripts);
    if (data.transcripts.length < batchSize) {
      hasMore = false;
    } else {
      skip += batchSize;
      // Rate limit: wait between pages
      await new Promise(r => setTimeout(r, 1100));
    }
  }
  return allTranscripts;
}
```
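For very long meetings, the `sentences` array alone can be large enough to strain memory if you post-process it (embedding, summarizing, exporting). Rather than handling it all at once, process it in fixed-size chunks; a generic helper:

```typescript
// Split a long array (e.g. a transcript's sentences) into fixed-size chunks
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Processing 200 sentences at a time keeps peak memory bounded regardless of meeting length.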
Performance Benchmarks
| Optimization | Before | After | Improvement |
|---|---|---|---|
| Field selection (list) | ~2s (with sentences) | ~200ms (metadata only) | 10x |
| LRU cache (detail view) | ~500ms (API call) | <1ms (cache hit) | 500x |
| Batch with queue | Rate limited/errors | Smooth throughput | Reliable |
| Webhook pre-cache | Cold fetch on user visit | Instant from cache | UX improvement |
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Slow list queries | Requesting sentences in list | Use light query without sentences |
| Rate limit 429 | Burst requests | Use PQueue with 1.1s interval |
| Large response OOM | Transcript with 2+ hour meeting | Stream/paginate sentences |
| Stale cache | (Not a real issue -- transcripts are immutable) | N/A |
Output
- Field-optimized GraphQL queries (light list, full detail)
- LRU and Redis caching for immutable transcripts
- Rate-limited batch processor
- Webhook-driven cache warming
Next Steps
For cost optimization, see fireflies-cost-tuning.