openrouter-caching-strategy
Implement response caching for OpenRouter efficiency. Use when optimizing costs or reducing latency for repeated queries. Trigger with phrases like 'openrouter cache', 'cache llm responses', 'openrouter redis', 'semantic caching'.
Install
```bash
mkdir -p .claude/skills/openrouter-caching-strategy && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7583" && unzip -o skill.zip -d .claude/skills/openrouter-caching-strategy && rm skill.zip
```
Installs to .claude/skills/openrouter-caching-strategy
About this skill
OpenRouter Caching Strategy
Overview
OpenRouter charges per token, so caching identical or similar requests can dramatically cut costs. Deterministic requests (temperature=0) with the same model and messages produce identical outputs -- these are safe to cache. This skill covers in-memory caching, persistent caching with TTL, and Anthropic prompt caching via OpenRouter.
In-Memory Cache
```python
import os, hashlib, json, time
from typing import Optional

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
)

class LLMCache:
    def __init__(self, ttl_seconds: int = 3600):
        self._cache: dict[str, tuple[dict, float]] = {}
        self._ttl = ttl_seconds
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, messages: list, **kwargs) -> str:
        blob = json.dumps({"model": model, "messages": messages, **kwargs}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, model: str, messages: list, **kwargs) -> Optional[dict]:
        k = self._key(model, messages, **kwargs)
        if k in self._cache:
            data, ts = self._cache[k]
            if time.time() - ts < self._ttl:
                self.hits += 1
                return data
            del self._cache[k]  # expired entry: evict and fall through to a miss
        self.misses += 1
        return None

    def set(self, model: str, messages: list, response: dict, **kwargs):
        k = self._key(model, messages, **kwargs)
        self._cache[k] = (response, time.time())

cache = LLMCache(ttl_seconds=1800)

def cached_completion(messages, model="anthropic/claude-3.5-sonnet", **kwargs):
    """Only cache deterministic requests (temperature=0)."""
    kwargs.setdefault("temperature", 0)
    kwargs.setdefault("max_tokens", 1024)
    cached = cache.get(model, messages, **kwargs)
    if cached:
        return cached
    response = client.chat.completions.create(model=model, messages=messages, **kwargs)
    result = {
        "content": response.choices[0].message.content,
        "model": response.model,
        "usage": {"prompt": response.usage.prompt_tokens, "completion": response.usage.completion_tokens},
    }
    cache.set(model, messages, result, **kwargs)
    return result
```
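A quick usage sketch (the prompt is illustrative): the second identical call never reaches OpenRouter, and the hits/misses counters give a running hit rate.

```python
msgs = [{"role": "user", "content": "Summarize PEP 8 in one sentence."}]
first = cached_completion(msgs)   # miss -> billed API call
second = cached_completion(msgs)  # hit -> served from memory, no tokens billed
print(cache.hits, cache.misses)   # 1 1
print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")  # 50%
```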
Persistent Cache with Redis
```python
import redis, json, hashlib

r = redis.Redis(host="localhost", port=6379, db=0)

def redis_cached_completion(messages, model="openai/gpt-4o-mini", ttl=3600, **kwargs):
    """Cache in Redis with automatic TTL expiry."""
    kwargs["temperature"] = 0  # must be deterministic
    key = "or:" + hashlib.sha256(
        json.dumps({"m": model, "msgs": messages, **kwargs}, sort_keys=True).encode()
    ).hexdigest()
    cached = r.get(key)
    if cached:
        return json.loads(cached)
    response = client.chat.completions.create(model=model, messages=messages, **kwargs)
    result = {
        "content": response.choices[0].message.content,
        "model": response.model,
        "tokens": response.usage.prompt_tokens + response.usage.completion_tokens,
    }
    r.setex(key, ttl, json.dumps(result))
    return result
```
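If Redis goes down, the cache layer should degrade to a direct API call rather than fail the request (see the error-handling table below). A minimal sketch, with safe_cached_completion as an illustrative wrapper:

```python
def safe_cached_completion(messages, model="openai/gpt-4o-mini", ttl=3600, **kwargs):
    """Illustrative wrapper: fall through to a direct call if Redis is unreachable."""
    try:
        return redis_cached_completion(messages, model=model, ttl=ttl, **kwargs)
    except redis.RedisError:
        kwargs["temperature"] = 0  # match the cached path's settings
        response = client.chat.completions.create(model=model, messages=messages, **kwargs)
        return {
            "content": response.choices[0].message.content,
            "model": response.model,
            "tokens": response.usage.prompt_tokens + response.usage.completion_tokens,
        }
```

Note that the r.setex write can also raise after a successful API call; production code may want to guard that write separately so a cache failure never double-bills the request.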
Anthropic Prompt Caching via OpenRouter
Anthropic models on OpenRouter support prompt caching -- large system prompts are cached server-side, reducing input cost by 90% on cache hits.
```python
# Mark large static content blocks with cache_control.
# large_context is a placeholder for your big static document or source file.
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are an expert. Here is the full source:\n" + large_context,
                    "cache_control": {"type": "ephemeral"},  # cache this block
                }
            ],
        },
        {"role": "user", "content": "What does the main() function do?"},
    ],
    max_tokens=1024,
)
# First call: cache_creation_input_tokens charged at 1.25x
# Subsequent: cache_read_input_tokens charged at 0.1x (90% savings)
```
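The discount only materializes on reuse: keep the cached block byte-identical across calls, and keep calls close together (Anthropic's ephemeral cache expires after a few minutes of inactivity, refreshed on each hit). A sketch, where ask is an illustrative helper over the same large_context:

```python
def ask(question: str) -> str:
    """Illustrative helper: every call shares the same cached system prefix."""
    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",
        messages=[
            {"role": "system", "content": [{
                "type": "text",
                "text": "You are an expert. Here is the full source:\n" + large_context,
                "cache_control": {"type": "ephemeral"},
            }]},
            {"role": "user", "content": question},
        ],
        max_tokens=1024,
    )
    return response.choices[0].message.content

ask("What does the main() function do?")  # writes the prefix cache (1.25x)
ask("Which modules does it import?")      # reads it (0.1x on the prefix)
```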
Cache Key Design
```python
def cache_key(model: str, messages: list, **params) -> str:
    """Deterministic cache key. Include everything that affects output.

    Include: model ID (with variant like :floor), messages, temperature,
    max_tokens, top_p, transforms, provider routing.
    Exclude: stream (doesn't affect content), HTTP-Referer, X-Title.
    """
    canonical = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": params.get("temperature", 0),
        "max_tokens": params.get("max_tokens"),
        "top_p": params.get("top_p"),
        "transforms": params.get("transforms"),  # OpenRouter prompt transforms
        "provider": params.get("provider"),      # provider routing preferences
    }, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```
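When a system prompt or model version changes, versioning the key is cheaper than an explicit flush (see the table below). A sketch with a hypothetical PROMPT_VERSION constant -- bumping it orphans old entries, which then age out via TTL:

```python
PROMPT_VERSION = "v3"  # hypothetical constant; bump on prompt or model changes

def versioned_cache_key(model: str, messages: list, **params) -> str:
    return f"{PROMPT_VERSION}:{cache_key(model, messages, **params)}"
```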
Cache Invalidation
| Trigger | Action | Why |
|---|---|---|
| Model version update | Flush keys for that model | New version may give different outputs |
| System prompt change | Flush all keys | Output semantics changed |
| TTL expiry | Automatic eviction | Prevents stale data |
| Manual purge | r.delete(key) or clear by prefix | Debugging or policy change |
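For the clear-by-prefix case, one approach in redis-py is SCAN-based iteration, which avoids blocking the server the way KEYS would on a large keyspace. A sketch, assuming the "or:" key scheme used above:

```python
def flush_prefix(prefix: str = "or:") -> int:
    """Delete all cache entries under a key prefix (e.g. after a model update)."""
    deleted = 0
    for key in r.scan_iter(match=f"{prefix}*", count=500):
        r.delete(key)
        deleted += 1
    return deleted
```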
Error Handling
| Error | Cause | Fix |
|---|---|---|
| Stale cache response | TTL too long | Reduce TTL or version cache keys |
| Cache miss storm | Cold start or invalidation | Warm cache with common queries at deploy |
| Redis connection error | Redis down | Fall through to direct API call |
| Non-deterministic cache | temperature > 0 cached | Only cache when temperature=0 |
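To blunt a miss storm, warm the cache at deploy with the queries you expect to see; a minimal sketch, where COMMON_QUERIES stands in for whatever your request logs show:

```python
COMMON_QUERIES = [  # illustrative; mine these from production logs
    "What are your support hours?",
    "How do I reset my password?",
]

def warm_cache():
    for q in COMMON_QUERIES:
        redis_cached_completion([{"role": "user", "content": q}])
```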
Enterprise Considerations
- Only cache deterministic requests (temperature=0) -- non-zero temperatures produce different outputs each time
- Use Anthropic prompt caching for large system prompts (RAG context) -- 90% cost reduction on cache hits
- Set TTL based on content freshness needs (30 min for dynamic, 24h for reference data)
- Track cache hit rate to justify caching infrastructure cost
- Use Redis or Memcached for multi-instance deployments; in-memory only works for single-process
- Version cache keys when updating system prompts or switching model versions
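To put a number on justifying the infrastructure cost, the hits/misses counters from LLMCache translate directly into avoided spend. A rough sketch -- both the average prompt size and the $3/M input price are illustrative assumptions, so substitute your model's actual rate:

```python
def cache_report(cache: LLMCache, avg_prompt_tokens: int = 500,
                 input_price_per_mtok: float = 3.00) -> None:
    """Rough savings estimate; both defaults are illustrative assumptions."""
    total = cache.hits + cache.misses
    if total == 0:
        return
    saved = cache.hits * avg_prompt_tokens / 1_000_000 * input_price_per_mtok
    print(f"hit rate {cache.hits / total:.0%}, ~${saved:.2f} input spend avoided")
```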