langchain-cost-tuning
Optimize LangChain API costs and token usage. Use when reducing LLM API expenses, implementing cost controls, or optimizing token consumption in production. Trigger with phrases like "langchain cost", "langchain tokens", "reduce langchain cost", "langchain billing", "langchain budget".
Install
mkdir -p .claude/skills/langchain-cost-tuning && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6358" && unzip -o skill.zip -d .claude/skills/langchain-cost-tuning && rm skill.zip
Installs to .claude/skills/langchain-cost-tuning
About this skill
LangChain Cost Tuning
Overview
Reduce LLM API costs while maintaining quality: token tracking callbacks, model tiering (route simple tasks to cheap models), caching for duplicate queries, prompt compression, and budget enforcement.
Current Pricing Reference (2026)
| Provider | Model | Input $/1M | Output $/1M |
|---|---|---|---|
| OpenAI | gpt-4o | $2.50 | $10.00 |
| OpenAI | gpt-4o-mini | $0.15 | $0.60 |
| Anthropic | claude-sonnet | $3.00 | $15.00 |
| Anthropic | claude-haiku | $0.25 | $1.25 |
| OpenAI | text-embedding-3-small | $0.02 | - |
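To turn the table into dollar amounts, here is a minimal helper sketch (prices copied from the table above; the example token counts are made up):
const PRICES = {
  "gpt-4o": { input: 2.5, output: 10.0 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
} as const;
function estimateCost(model: keyof typeof PRICES, inputTokens: number, outputTokens: number) {
  const p = PRICES[model];
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}
// A 1,200-token prompt with a 300-token answer:
console.log(estimateCost("gpt-4o", 1_200, 300).toFixed(4)); // "0.0060"
console.log(estimateCost("gpt-4o-mini", 1_200, 300).toFixed(4)); // "0.0004"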
Strategy 1: Token Usage Tracking
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import { ChatOpenAI } from "@langchain/openai";
// $ per 1M tokens (see pricing table above)
const MODEL_PRICING: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10.0 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
};
class CostTracker extends BaseCallbackHandler {
  name = "CostTracker";
  totalCost = 0;
  totalTokens = 0;
  calls = 0;
  // Pass the model you bill against; provider metadata doesn't always expose it
  constructor(private modelName: string = "gpt-4o-mini") {
    super();
  }
  handleLLMEnd(output: any) {
    this.calls++;
    const usage = output.llmOutput?.tokenUsage;
    if (!usage) return;
    const pricing = MODEL_PRICING[this.modelName] ?? MODEL_PRICING["gpt-4o-mini"];
    const inputCost = (usage.promptTokens / 1_000_000) * pricing.input;
    const outputCost = (usage.completionTokens / 1_000_000) * pricing.output;
    this.totalTokens += usage.totalTokens;
    this.totalCost += inputCost + outputCost;
  }
  report() {
    return {
      calls: this.calls,
      totalTokens: this.totalTokens,
      totalCost: `$${this.totalCost.toFixed(4)}`,
      avgCostPerCall: `$${(this.totalCost / Math.max(this.calls, 1)).toFixed(4)}`,
    };
  }
}
const tracker = new CostTracker();
const model = new ChatOpenAI({
model: "gpt-4o-mini",
callbacks: [tracker],
});
// After operations:
console.table(tracker.report());
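The tracker can also be attached per request instead of per model instance, via the invoke config (a sketch; the prompt string is just an example):
// Per-request callbacks via the runnable config
await model.invoke("Summarize LCEL in one sentence", { callbacks: [tracker] });
console.table(tracker.report());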
Strategy 2: Model Tiering (Route by Complexity)
import { ChatOpenAI } from "@langchain/openai";
import { RunnableBranch } from "@langchain/core/runnables";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
const cheapModel = new ChatOpenAI({ model: "gpt-4o-mini" }); // $0.15/1M in
const powerModel = new ChatOpenAI({ model: "gpt-4o" }); // $2.50/1M in
const simplePrompt = ChatPromptTemplate.fromTemplate("{input}");
const complexPrompt = ChatPromptTemplate.fromTemplate(
  "Think step by step. {input}"
);
function isComplex(input: { input: string }): boolean {
  const text = input.input;
  // Heuristic: long input, requires reasoning, or multi-step
  return (
    text.length > 500 ||
    /\b(analyze|compare|evaluate|design|architect)\b/i.test(text)
  );
}
const router = RunnableBranch.from([
  [isComplex, complexPrompt.pipe(powerModel).pipe(new StringOutputParser())],
  simplePrompt.pipe(cheapModel).pipe(new StringOutputParser()),
]);
// Simple question -> gpt-4o-mini ($0.15/1M)
await router.invoke({ input: "What is 2+2?" });
// Complex question -> gpt-4o ($2.50/1M)
await router.invoke({ input: "Analyze the trade-offs between microservices..." });
Strategy 3: Caching (Eliminate Duplicate Calls)
# Python — LangChain has built-in caching
from langchain_openai import ChatOpenAI
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
# Persistent cache — identical prompts skip the API entirely
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))
llm = ChatOpenAI(model="gpt-4o-mini")
# First call: API hit (~500ms, costs tokens)
llm.invoke("What is LCEL?")
# Second identical call: cache hit (~0ms, $0.00)
llm.invoke("What is LCEL?")
// TypeScript — manual cache with Map
const cache = new Map<string, string>();
async function cachedInvoke(chain: any, input: Record<string, any>) {
  const key = JSON.stringify(input);
  if (cache.has(key)) return cache.get(key)!;
  const result = await chain.invoke(input);
  cache.set(key, result);
  return result;
}
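Cache hit rates drop when inputs differ only in whitespace or casing (see Error Handling below). A small key-normalization sketch; the exact rules should fit your domain:
function cacheKey(input: Record<string, any>): string {
  const normalized = Object.fromEntries(
    Object.entries(input).map(([k, v]) => [
      k,
      typeof v === "string" ? v.trim().toLowerCase().replace(/\s+/g, " ") : v,
    ])
  );
  return JSON.stringify(normalized);
}
// Use cacheKey(input) in place of JSON.stringify(input) inside cachedInvoke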
Strategy 4: Prompt Compression
// Shorter prompts = fewer input tokens = lower cost
// Before: 150 tokens
const verbose = ChatPromptTemplate.fromTemplate(`
You are an expert AI assistant specialized in software engineering.
Your task is to carefully analyze the following text and provide
a comprehensive summary that captures all the key points and
important details. Please ensure your summary is accurate and well-structured.
Text to summarize: {text}
Please provide your summary below:
`);
// After: 25 tokens (same quality with good models)
const concise = ChatPromptTemplate.fromTemplate(
"Summarize the key points:\n\n{text}"
);
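To confirm a rewrite actually shrinks the prompt, count tokens before and after. A sketch using getNumTokens, which LangChain chat models expose (sampleText is a placeholder):
const counter = new ChatOpenAI({ model: "gpt-4o-mini" });
const sampleText = "LCEL composes runnables into chains with streaming and batching.";
const beforeTokens = await counter.getNumTokens(await verbose.format({ text: sampleText }));
const afterTokens = await counter.getNumTokens(await concise.format({ text: sampleText }));
console.log({ beforeTokens, afterTokens, saved: beforeTokens - afterTokens });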
Strategy 5: Budget Enforcement
class BudgetEnforcer extends BaseCallbackHandler {
  name = "BudgetEnforcer";
  // Let the thrown error propagate instead of being logged and swallowed by the callback manager
  raiseError = true;
  private spent = 0;
  constructor(private budgetUSD: number) {
    super();
  }
  handleLLMStart() {
    if (this.spent >= this.budgetUSD) {
      throw new Error(
        `Budget exceeded: $${this.spent.toFixed(2)} / $${this.budgetUSD}`
      );
    }
  }
  handleLLMEnd(output: any) {
    const usage = output.llmOutput?.tokenUsage;
    if (usage) {
      // Rough upper-bound estimate: bill every token at gpt-4o-mini's output rate; adjust per model
      this.spent += (usage.totalTokens / 1_000_000) * 0.60;
    }
  }
  remaining() {
    return `$${(this.budgetUSD - this.spent).toFixed(2)} remaining`;
  }
}
const budget = new BudgetEnforcer(10.0); // $10 daily budget
const model = new ChatOpenAI({
model: "gpt-4o-mini",
callbacks: [budget],
});
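With raiseError set, the budget error surfaces at the call site. A sketch of degrading gracefully instead of failing the whole request (the fallback choice is yours):
try {
  await model.invoke("Summarize today's support tickets");
} catch (err) {
  if (err instanceof Error && err.message.startsWith("Budget exceeded")) {
    console.warn(budget.remaining());
    // Fall back: serve a cached answer, drop to a cheaper model, or queue for later
  } else {
    throw err;
  }
}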
Cost Optimization Checklist
| Optimization | Savings | Effort |
|---|---|---|
| Use gpt-4o-mini instead of gpt-4o | ~17x cheaper | Low |
| Cache identical requests | 100% on cache hits | Low |
| Shorten prompts | 10-50% | Medium |
| Model tiering (route by complexity) | 50-80% | Medium |
| Batch processing (fewer round-trips; sketch below) | 10-20% | Low |
| Budget enforcement | Prevents surprises | Low |
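The batch-processing row refers to packing several small items into one request so instruction tokens are sent once instead of N times. A sketch (the classification task is just an example):
const items = ["reset password flow", "billing page bug", "export to CSV"];
const batchPrompt = ChatPromptTemplate.fromTemplate(
  "Classify each item as BUG or FEATURE, one per line:\n\n{items}"
);
const labels = await batchPrompt
  .pipe(cheapModel)
  .pipe(new StringOutputParser())
  .invoke({ items: items.map((x, i) => `${i + 1}. ${x}`).join("\n") });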
Error Handling
| Issue | Cause | Fix |
|---|---|---|
| Budget exceeded error | Daily limit hit | Increase budget or optimize usage |
| Cache misses | Input varies slightly | Normalize inputs before caching |
| Wrong model selected | Routing logic too simple | Improve complexity classifier |
Next Steps
Use langchain-performance-tuning to optimize latency alongside cost.