openrouter-performance-tuning
Optimize OpenRouter performance and latency. Use when reducing response times or improving throughput. Trigger with phrases like 'openrouter performance', 'openrouter latency', 'speed up openrouter', 'openrouter optimization'.
Install
mkdir -p .claude/skills/openrouter-performance-tuning && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4791" && unzip -o skill.zip -d .claude/skills/openrouter-performance-tuning && rm skill.zip
Installs to .claude/skills/openrouter-performance-tuning
About this skill
OpenRouter Performance Tuning
Overview
OpenRouter adds minimal overhead (~50-100ms) to direct provider calls. Most latency comes from the upstream model. Key levers: model selection (smaller = faster), streaming (lower TTFT), parallel requests, prompt size reduction, and provider routing to faster infrastructure. This skill covers benchmarking, streaming optimization, concurrent processing, and connection tuning.
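Of the levers above, prompt size reduction is the only one without a worked example later in this skill. A minimal sketch of history trimming; the helper name and the 8,000-character budget are illustrative assumptions, not part of the OpenRouter API:

def trim_history(messages: list[dict], max_chars: int = 8000) -> list[dict]:
    """Keep the system message plus the most recent turns that fit a character budget."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    kept, used = [], 0
    for msg in reversed(turns):  # walk backwards from the newest turn
        used += len(msg["content"])
        if used > max_chars:
            break
        kept.append(msg)
    return system + list(reversed(kept))

Fewer prompt tokens means less prefill work upstream, so latency tends to drop roughly in proportion to the reduction.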
Benchmark Latency
import os, time, statistics
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
)

def benchmark_model(model: str, prompt: str = "Say hello", n: int = 5) -> dict:
    """Benchmark a model's latency over N requests."""
    latencies = []
    for _ in range(n):
        start = time.monotonic()
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=50,
        )
        latencies.append((time.monotonic() - start) * 1000)
    return {
        "model": model,
        "p50_ms": round(statistics.median(latencies)),
        "p95_ms": round(sorted(latencies)[int(len(latencies) * 0.95)]),
        "avg_ms": round(statistics.mean(latencies)),
        "min_ms": round(min(latencies)),
        "max_ms": round(max(latencies)),
    }

# Compare fast vs slow models
for model in ["openai/gpt-4o-mini", "anthropic/claude-3-haiku", "anthropic/claude-3.5-sonnet"]:
    result = benchmark_model(model)
    print(f"{result['model']}: p50={result['p50_ms']}ms p95={result['p95_ms']}ms")
Streaming for Lower TTFT
def stream_completion(messages, model="openai/gpt-4o-mini", **kwargs):
    """Stream response for lower time-to-first-token."""
    start = time.monotonic()
    first_token_time = None
    full_content = []
    stream = client.chat.completions.create(
        model=model, messages=messages, stream=True,
        stream_options={"include_usage": True},  # Get token counts at end
        **kwargs,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_time is None:
                first_token_time = (time.monotonic() - start) * 1000
            full_content.append(chunk.choices[0].delta.content)
    total_time = (time.monotonic() - start) * 1000
    return {
        "content": "".join(full_content),
        "ttft_ms": round(first_token_time or 0),
        "total_ms": round(total_time),
    }
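A quick smoke test of the helper; the prompt and max_tokens value are illustrative:

result = stream_completion(
    [{"role": "user", "content": "Explain connection pooling in one paragraph."}],
    max_tokens=150,
)
print(f"TTFT: {result['ttft_ms']}ms, total: {result['total_ms']}ms")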
Parallel Request Processing
import asyncio
from openai import AsyncOpenAI

async def parallel_completions(prompts: list[str], model="openai/gpt-4o-mini",
                               max_concurrent=10, **kwargs):
    """Process multiple prompts concurrently."""
    semaphore = asyncio.Semaphore(max_concurrent)
    client = AsyncOpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
        default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
    )

    async def process(prompt):
        async with semaphore:
            response = await client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                **kwargs,
            )
            return response.choices[0].message.content

    return await asyncio.gather(*[process(p) for p in prompts])

# 10 requests in parallel instead of sequential
results = asyncio.run(parallel_completions(
    ["Summarize: " + text for text in documents],
    max_concurrent=5,
    max_tokens=200,
))
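One caveat with the pattern above: if any single request raises, asyncio.gather propagates that exception and the other results are lost. For batch jobs, a drop-in change to the final line of parallel_completions keeps partial results (return_exceptions is a standard gather flag):

# drop-in replacement for the final line of parallel_completions
return await asyncio.gather(
    *[process(p) for p in prompts],
    return_exceptions=True,  # failures are returned in-place instead of raised
)

# callers can then separate successes from failures:
# ok = [r for r in results if not isinstance(r, Exception)]
# bad = [r for r in results if isinstance(r, Exception)]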
Performance Optimization Checklist
| Optimization | Impact | Effort |
|---|---|---|
| Use streaming | TTFT drops 2-10x | Low |
| Use smaller models for simple tasks | 2-5x faster | Low |
| Reduce prompt size | Proportional to reduction | Medium |
| Set max_tokens | Caps response time | Low |
| Parallel requests | N requests in ~1 request time | Medium |
| Use :nitro variant | Faster inference (where available) | Low |
| Provider routing to fastest | 10-30% latency reduction | Low |
| Connection keep-alive | Saves TCP/TLS handshake | Low |
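The :nitro suffix is appended directly to the model slug and asks OpenRouter to prioritize higher-throughput providers. It is not available for every model, so treat the slug below as illustrative:

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct:nitro",  # illustrative slug; :nitro availability varies by model
    messages=[{"role": "user", "content": "Classify this ticket as bug or feature: ..."}],
    max_tokens=20,
)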
Model Speed Tiers
| Speed | Models | Typical TTFT |
|---|---|---|
| Fastest | openai/gpt-4o-mini, anthropic/claude-3-haiku | 200-500ms |
| Fast | openai/gpt-4o, google/gemini-2.0-flash-001 | 500ms-1s |
| Standard | anthropic/claude-3.5-sonnet | 1-3s |
| Slow | openai/o1, reasoning models | 5-30s |
Connection Optimization
# Reuse client instance (connection pooling)

# BAD: creating new client per request
for prompt in prompts:
    c = OpenAI(base_url="https://openrouter.ai/api/v1", ...)  # New TCP connection each time
    c.chat.completions.create(...)

# GOOD: reuse single client
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    timeout=30.0,  # Set appropriate timeout
    max_retries=2,  # Built-in retry with backoff
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
)
for prompt in prompts:
    client.chat.completions.create(...)  # Reuses HTTP connection
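For finer control over pooling, the OpenAI SDK accepts a custom httpx client. A sketch with explicit connection limits; the numbers are illustrative starting points to tune, not OpenRouter recommendations:

import httpx
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    http_client=httpx.Client(
        limits=httpx.Limits(
            max_connections=100,           # total concurrent connections
            max_keepalive_connections=20,  # idle connections kept warm
        ),
        timeout=30.0,
    ),
)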
Error Handling
| Error | Cause | Fix |
|---|---|---|
| High TTFT (>5s) | Model cold-starting or overloaded | Switch to :nitro variant or different provider |
| Timeout errors | max_tokens too high or model too slow | Reduce max_tokens; use streaming; increase timeout |
| Throughput bottleneck | Sequential processing | Use async + semaphore for concurrent requests |
| Inconsistent latency | Provider load varies | Use provider.order to pin to fastest provider |
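To act on the last row, OpenRouter accepts a provider object in the request body; with the OpenAI SDK it is passed via extra_body. The provider names here are illustrative -- check the model's provider list on openrouter.ai:

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "provider": {
            "order": ["Groq", "Fireworks"],  # try these providers in order
            "allow_fallbacks": False,        # fail rather than fall back to slower providers
        }
    },
)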
Enterprise Considerations
- Benchmark models in your infrastructure, not just locally -- network path matters
- Use streaming for all user-facing requests to minimize perceived latency
- Set max_tokens on every request to bound response time and cost
- Reuse client instances to benefit from HTTP connection pooling
- Use asyncio.Semaphore to control concurrency and avoid overwhelming the API
- Monitor P95 latency, not just average -- tail latencies indicate provider issues (see the sketch below)
- Consider :nitro model variants for latency-critical paths
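A minimal sketch of the P95 monitoring point, using only the standard library; the 500-sample window and 20-sample minimum are arbitrary assumptions:

import statistics
from collections import deque

latency_window: deque[float] = deque(maxlen=500)  # rolling window of recent request latencies (ms)

def record_latency(ms: float) -> None:
    latency_window.append(ms)

def p95_latency() -> float | None:
    if len(latency_window) < 20:
        return None  # too few samples for a meaningful tail estimate
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile
    return statistics.quantiles(latency_window, n=20)[18]

If P95 drifts upward while the average stays flat, a provider is likely degrading -- that is the signal to revisit provider routing.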