langchain-rate-limits
Implement LangChain rate limiting and backoff strategies. Use when handling API quotas, implementing retry logic, or optimizing request throughput for LLM providers. Trigger with phrases like "langchain rate limit", "langchain throttling", "langchain backoff", "langchain retry", "API quota".
Install
mkdir -p .claude/skills/langchain-rate-limits && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8793" && unzip -o skill.zip -d .claude/skills/langchain-rate-limits && rm skill.zip
Installs to .claude/skills/langchain-rate-limits
About this skill
LangChain Rate Limits
Overview
Handle API rate limits gracefully with built-in retries, exponential backoff, concurrency control, provider fallbacks, and custom rate limiters.
Provider Rate Limits (2026)
| Provider | Model | RPM | TPM |
|---|---|---|---|
| OpenAI | gpt-4o | 10,000 | 800,000 |
| OpenAI | gpt-4o-mini | 10,000 | 4,000,000 |
| Anthropic | claude-sonnet | 4,000 | 400,000 |
| Anthropic | claude-haiku | 4,000 | 400,000 |
| Google | gemini-1.5-pro | 360 | 4,000,000 |
RPM = requests/minute, TPM = tokens/minute. Actual limits depend on your tier.
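To sanity-check a batch job against these budgets before picking a concurrency setting, you can estimate the minimum wall-clock time it will take. A minimal sketch; the request count and per-request token estimate below are illustrative assumptions, not real limits:
// Rough lower bound on batch duration given RPM/TPM budgets.
// Workload numbers are assumptions for illustration only.
function minBatchMinutes(requests: number, tokensPerRequest: number, rpm: number, tpm: number): number {
  const byRequests = requests / rpm; // minutes needed under the RPM cap
  const byTokens = (requests * tokensPerRequest) / tpm; // minutes needed under the TPM cap
  return Math.max(byRequests, byTokens);
}

// e.g. 5,000 summaries at ~1,200 tokens each against the gpt-4o-mini limits above
console.log(minBatchMinutes(5000, 1200, 10_000, 4_000_000)); // ~1.5 minutes, token-bound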
Strategy 1: Built-in Retry (Simplest)
import { ChatOpenAI } from "@langchain/openai";

// Built-in exponential backoff on 429/500/503
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxRetries: 5, // retries with exponential backoff
  timeout: 30000, // 30s timeout per request
});

// This automatically retries on rate limit errors
const response = await model.invoke("Hello");
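To see what maxRetries is doing for you, here is a hand-rolled equivalent. This is a sketch, not LangChain's internal implementation, and the error shape (err.status) varies by provider SDK, so treat that field as an assumption:
// Sketch of exponential backoff with jitter around any invoke call.
// Not LangChain internals; the status-code extraction is SDK-dependent.
async function invokeWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.status ?? err?.response?.status;
      const retryable = status === 429 || status === 500 || status === 503;
      if (!retryable || attempt >= maxRetries) throw err;
      // 500ms, 1s, 2s, 4s, ... plus jitter to avoid synchronized retries
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

// Usage
const reply = await invokeWithBackoff(() => model.invoke("Hello"));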
Strategy 2: Concurrency-Controlled Batch
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromTemplate("Summarize: {text}")
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini", maxRetries: 3 }))
  .pipe(new StringOutputParser());

// `articles` is your own array of strings to summarize
const inputs = articles.map((text) => ({ text }));

// batch() with maxConcurrency prevents flooding the API
const results = await chain.batch(inputs, {
  maxConcurrency: 5, // max 5 parallel requests
});
Strategy 3: Provider Fallback on Rate Limit
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const primary = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxRetries: 2,
  timeout: 10000,
});

const fallback = new ChatAnthropic({
  model: "claude-sonnet-4-20250514",
  maxRetries: 2,
});

// Automatically switches to Anthropic if OpenAI rate-limits
const resilientModel = primary.withFallbacks({
  fallbacks: [fallback],
});

const prompt = ChatPromptTemplate.fromTemplate("Summarize: {text}");
const chain = prompt.pipe(resilientModel).pipe(new StringOutputParser());
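Usage is unchanged from a single-provider chain; the fallback only fires once the primary model's own retries are exhausted. A brief illustration using the summarize prompt defined above:
// Falls back to Claude only after the OpenAI call (and its 2 retries) fails.
const summary = await chain.invoke({
  text: "LangChain offers built-in retries, fallbacks, and batch concurrency controls.",
});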
Strategy 4: Custom Rate Limiter
class TokenBucketLimiter {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private maxTokens: number, // bucket size
    private refillRate: number, // tokens per second
  ) {
    this.tokens = maxTokens;
    this.lastRefill = Date.now();
  }

  async acquire(): Promise<void> {
    this.refill();
    while (this.tokens < 1) {
      const waitMs = (1 / this.refillRate) * 1000;
      await new Promise((r) => setTimeout(r, waitMs));
      this.refill();
    }
    this.tokens -= 1;
  }

  private refill() {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;
  }
}

// Usage: 100 requests per minute
const limiter = new TokenBucketLimiter(100, 100 / 60);

async function rateLimitedInvoke(chain: any, input: any) {
  await limiter.acquire();
  return chain.invoke(input);
}
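Because acquire() waits until a token is available, you can still fan work out with Promise.all and request starts will be paced to roughly the configured rate. A short usage sketch, reusing the chain and inputs names from the batch examples above:
// Promise.all issues everything "at once", but each call passes through the
// limiter first, so starts are paced to ~100/minute.
const pacedResults = await Promise.all(
  inputs.map((input) => rateLimitedInvoke(chain, input)),
);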
Strategy 5: Async Batch with Semaphore
async function batchWithSemaphore<T>(
  chain: { invoke: (input: any) => Promise<T> },
  inputs: any[],
  maxConcurrent = 5,
): Promise<T[]> {
  if (inputs.length === 0) return []; // avoid hanging on an empty batch
  let active = 0;
  const results: T[] = [];
  const queue = [...inputs.entries()];

  return new Promise((resolve, reject) => {
    function next() {
      while (active < maxConcurrent && queue.length > 0) {
        const [index, input] = queue.shift()!;
        active++;
        chain.invoke(input)
          .then((result) => {
            results[index] = result;
            active--;
            if (queue.length === 0 && active === 0) resolve(results);
            else next();
          })
          .catch(reject);
      }
    }
    next();
  });
}

// Process 100 items, 5 at a time
const results = await batchWithSemaphore(chain, inputs, 5);
Python Equivalent
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import RunnableConfig

# Built-in retry
llm = ChatOpenAI(model="gpt-4o-mini", max_retries=5, request_timeout=30)

# Fallback
primary = ChatOpenAI(model="gpt-4o-mini", max_retries=2)
fallback = ChatAnthropic(model="claude-sonnet-4-20250514")
robust = primary.with_fallbacks([fallback])

# Batch with concurrency control
results = chain.batch(
    [{"text": t} for t in texts],
    config=RunnableConfig(max_concurrency=10),
)
Error Handling
| Error | Cause | Fix |
|---|---|---|
| 429 Too Many Requests | Rate limit hit | Increase maxRetries, reduce maxConcurrency |
| Timeout | Response too slow | Increase timeout, check network |
| QuotaExceeded | Monthly limit hit | Upgrade tier or switch provider |
| Batch partially fails | Some items rate limited | Use .batch() with returnExceptions: true |
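For the partial-failure row, returnExceptions keeps one rate-limited item from rejecting the whole batch. A hedged sketch reusing the chain and inputs names from the strategies above; check your @langchain/core version for the exact batch() signature:
// Sketch: collect per-item errors instead of failing the whole batch.
// The third batch() argument and its options may differ across versions.
const settled = await chain.batch(
  inputs,
  { maxConcurrency: 5 },
  { returnExceptions: true },
);
const failed = settled.filter((r) => r instanceof Error);
console.log(`retry ${failed.length} of ${inputs.length} items later`);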
Next Steps
Proceed to langchain-security-basics for security best practices.