langchain-performance-tuning
Optimize LangChain application performance and latency. Use when reducing response times, optimizing throughput, or improving the efficiency of LangChain pipelines. Trigger with phrases like "langchain performance", "langchain optimization", "langchain latency", "langchain slow", "speed up langchain".
Install
mkdir -p .claude/skills/langchain-performance-tuning && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8043" && unzip -o skill.zip -d .claude/skills/langchain-performance-tuning && rm skill.zip
Installs to .claude/skills/langchain-performance-tuning
About this skill
LangChain Performance Tuning
Overview
Optimize LangChain apps for production: measure baseline latency, implement caching, batch with concurrency control, stream for perceived speed, optimize prompts for fewer tokens, and select the right model for each task.
Step 1: Benchmark Baseline
// Measure latency distribution for any runnable exposing invoke()
async function benchmark(
  chain: { invoke: (input: any) => Promise<any> },
  input: any,
  iterations = 5,
) {
  const times: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await chain.invoke(input);
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return {
    mean: (times.reduce((a, b) => a + b, 0) / times.length).toFixed(0) + "ms",
    median: times[Math.floor(times.length / 2)].toFixed(0) + "ms",
    // nearest-rank approximation; with few iterations this lands on the max
    p95: times[Math.floor(times.length * 0.95)].toFixed(0) + "ms",
    min: times[0].toFixed(0) + "ms",
    max: times[times.length - 1].toFixed(0) + "ms",
  };
}
// Usage
const results = await benchmark(chain, { input: "test" }, 10);
console.table(results);
Step 2: Streaming (Perceived Performance)
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
const chain = ChatPromptTemplate.fromTemplate("{input}")
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini", streaming: true }))
  .pipe(new StringOutputParser());

// Non-streaming: user waits 2-3s for the full response
// Streaming: first token in ~200ms, user sees progress immediately
const stream = await chain.stream({ input: "Explain LCEL" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
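To check the time-to-first-token numbers quoted above against your own chain, here is a minimal sketch that times the first chunk separately from the full stream. It reuses the streaming chain defined in this step; benchmarkStream is a hypothetical helper, not a LangChain API.

// Sketch: measure time-to-first-token (TTFT) vs. total stream time
async function benchmarkStream(input: { input: string }) {
  const start = performance.now();
  let firstToken: number | null = null;
  const stream = await chain.stream(input);
  for await (const _chunk of stream) {
    // record the moment the first chunk arrives, then keep draining
    if (firstToken === null) firstToken = performance.now() - start;
  }
  return {
    ttft: `${firstToken?.toFixed(0)}ms`,
    total: `${(performance.now() - start).toFixed(0)}ms`,
  };
}
console.log(await benchmarkStream({ input: "Explain LCEL" }));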
// Express SSE endpoint for web apps
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/chat/stream", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  const stream = await chain.stream({ input: req.body.input });
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify({ text: chunk })}\n\n`);
  }
  res.write("data: [DONE]\n\n");
  res.end();
});
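Streams that hang after the browser disconnects are a common production issue (see Error Handling below). A minimal variant that stops consuming tokens once the connection closes — the /api/chat/stream-safe route name and the closed flag are illustrative, not LangChain APIs:

app.post("/api/chat/stream-safe", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  let closed = false;
  req.on("close", () => { closed = true; }); // fires when the client goes away
  const stream = await chain.stream({ input: req.body.input });
  for await (const chunk of stream) {
    if (closed) break; // stop generating for a client that is gone
    res.write(`data: ${JSON.stringify({ text: chunk })}\n\n`);
  }
  if (!closed) {
    res.write("data: [DONE]\n\n");
    res.end();
  }
});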
Step 3: Batch Processing with Concurrency
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
const chain = ChatPromptTemplate.fromTemplate("Summarize: {text}")
.pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
.pipe(new StringOutputParser());
const inputs = articles.map((text) => ({ text }));
// Sequential: ~10s for 10 items (1s each)
// const results = [];
// for (const input of inputs) results.push(await chain.invoke(input));
// Batch: ~2s for 10 items (parallel API calls)
const results = await chain.batch(inputs, {
maxConcurrency: 10,
});
// Benchmark comparison
console.time("sequential");
for (const i of inputs.slice(0, 5)) await chain.invoke(i);
console.timeEnd("sequential");
console.time("batch");
await chain.batch(inputs.slice(0, 5), { maxConcurrency: 5 });
console.timeEnd("batch");
Step 4: Caching
// In-memory cache (single process, resets on restart)
const cache = new Map<string, string>();

async function cachedInvoke(
  chain: any,
  input: Record<string, any>,
): Promise<string> {
  const key = JSON.stringify(input);
  const cached = cache.get(key);
  if (cached) return cached;
  const result = await chain.invoke(input);
  cache.set(key, result);
  return result;
}
// Cache hit: ~0ms (vs ~500-2000ms for API call)
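The Map above grows without bound and never invalidates, which causes the stale-data and memory issues listed under Error Handling. A minimal TTL variant — the TtlCache name and one-hour default are illustrative choices:

// Sketch: same pattern with expiry, so stale entries age out
class TtlCache {
  private store = new Map<string, { value: string; expires: number }>();
  constructor(private ttlMs = 60 * 60 * 1000) {} // 1 hour; tune per use case

  get(key: string): string | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // expired: evict and treat as a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: string) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}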
# Python — built-in caching
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache, InMemoryCache
# Option 1: In-memory (single process)
set_llm_cache(InMemoryCache())
# Option 2: SQLite (persistent, survives restarts)
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))
# Option 3: Redis (distributed, production)
from langchain_community.cache import RedisCache
import redis
set_llm_cache(RedisCache(redis.Redis.from_url("redis://localhost:6379")))
Step 5: Model Selection by Task
import { ChatOpenAI } from "@langchain/openai";

// Fast + cheap: simple tasks, classification, extraction
const fast = new ChatOpenAI({
  model: "gpt-4o-mini", // ~200ms TTFT, $0.15/1M input
  temperature: 0,
});

// Powerful + slower: complex reasoning, code generation
const powerful = new ChatOpenAI({
  model: "gpt-4o", // ~400ms TTFT, $2.50/1M input
  temperature: 0,
});
// Route based on task
import { RunnableBranch } from "@langchain/core/runnables";

// classifyChain, reasoningChain, defaultChain are sketched below
const router = RunnableBranch.from([
  [(input: any) => input.task === "classify", classifyChain],
  [(input: any) => input.task === "reason", reasoningChain],
  defaultChain,
]);
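The router assumes three task-specific chains already exist (declare them before the RunnableBranch). A minimal sketch reusing the fast and powerful models above — the prompt texts are placeholders:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const parser = new StringOutputParser();
// cheap model is enough for classification
const classifyChain = ChatPromptTemplate.fromTemplate("Classify: {input}")
  .pipe(fast)
  .pipe(parser);
// complex reasoning gets the stronger model
const reasoningChain = ChatPromptTemplate.fromTemplate("Think step by step: {input}")
  .pipe(powerful)
  .pipe(parser);
const defaultChain = ChatPromptTemplate.fromTemplate("{input}")
  .pipe(fast)
  .pipe(parser);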
Step 6: Prompt Optimization
// Shorter prompts = fewer input tokens = lower latency + cost
// BEFORE (150+ tokens):
const verbose = `You are an expert AI assistant specialized in software
engineering. Your task is to carefully analyze the following code and
provide a comprehensive review covering all aspects including...`;
// AFTER (~20 tokens, typically comparable quality):
const concise = "Review this code. List issues and fixes:\n\n{code}";
# Python — token counting
import tiktoken
enc = tiktoken.encoding_for_model("gpt-4o-mini")
print(len(enc.encode(prompt)))  # check before deploying
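For TypeScript, the same check can be done with the js-tiktoken package — an assumption, since it is not bundled with LangChain (npm install js-tiktoken) and needs a version that ships the o200k_base encoding:

import { getEncoding } from "js-tiktoken";

// o200k_base is the encoding used by the gpt-4o model family
const enc = getEncoding("o200k_base");
console.log(enc.encode(concise).length); // small
console.log(enc.encode(verbose).length); // much larger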
Performance Impact Summary
| Optimization | Latency Improvement | Cost Impact |
|---|---|---|
| Streaming | First token 80% faster | Neutral |
| Caching | 99% on cache hit | Major savings |
| Batch processing | 50-80% for bulk ops | Neutral |
| gpt-4o-mini vs gpt-4o | ~2x faster TTFT | ~17x cheaper |
| Shorter prompts | 10-30% | 10-50% cheaper |
| maxConcurrency tuning | Near-linear speedup until rate limits | Neutral |
Error Handling
| Error | Cause | Fix |
|---|---|---|
| Batch partially fails | Rate limit on some items | Lower maxConcurrency, add maxRetries |
| Stream hangs | Network timeout | Set timeout on model, handle disconnect |
| Cache stale data | Content changed upstream | Add TTL or version key to cache |
| High memory usage | Large cache | Use LRU eviction or Redis |
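For the timeout and retry fixes above, both are constructor options on ChatOpenAI in LangChain JS; a sketch — verify the option names against your installed version:

import { ChatOpenAI } from "@langchain/openai";

const resilient = new ChatOpenAI({
  model: "gpt-4o-mini",
  timeout: 30_000, // abort requests that hang (milliseconds, per the underlying OpenAI SDK)
  maxRetries: 2,   // retry transient failures such as rate limits
});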
Next Steps
Use langchain-cost-tuning for cost optimization alongside performance.