openrouter-multi-provider
Execute work with multiple providers through OpenRouter. Use when comparing providers or building provider-agnostic systems. Trigger with phrases like 'openrouter providers', 'openrouter multi-model', 'compare models', 'provider selection'.
Install
mkdir -p .claude/skills/openrouter-multi-provider && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5501" && unzip -o skill.zip -d .claude/skills/openrouter-multi-provider && rm skill.zip
Installs to .claude/skills/openrouter-multi-provider
About this skill
OpenRouter Multi-Provider
Overview
OpenRouter's unified API lets you access models from OpenAI, Anthropic, Google, Meta, Mistral, and others with a single API key and endpoint. Model IDs use provider/model-name format. The same OpenAI SDK code works for any provider by simply changing the model ID. This skill covers provider comparison, cross-provider routing, feature normalization, and BYOK (Bring Your Own Key).
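As a minimal sketch of that idea (the two model IDs below are just examples; any model listed on OpenRouter works the same way, and OPENROUTER_API_KEY is assumed to be set in the environment):
import os
from openai import OpenAI

# One key, one endpoint; only the provider/model-name string changes per call.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

for model in ["anthropic/claude-3.5-sonnet", "openai/gpt-4o-mini"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=50,
    )
    print(model, "->", resp.choices[0].message.content)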
Provider Landscape
# List all providers and their model counts
curl -s https://openrouter.ai/api/v1/models | jq '
[.data[].id | split("/")[0]] |
group_by(.) | map({provider: .[0], models: length}) |
sort_by(-.models)'
Cross-Provider Comparison
import os, time, json
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
)

def compare_models(prompt: str, models: list[str], max_tokens: int = 500) -> list[dict]:
    """Run the same prompt across multiple models and compare results."""
    results = []
    for model in models:
        start = time.monotonic()
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=max_tokens,
                temperature=0,
            )
            latency = (time.monotonic() - start) * 1000
            results.append({
                "model": model,
                "served_by": response.model,
                "content": response.choices[0].message.content[:200] + "...",
                "tokens": response.usage.prompt_tokens + response.usage.completion_tokens,
                "latency_ms": round(latency, 1),
                "status": "ok",
            })
        except Exception as e:
            results.append({"model": model, "status": "error", "error": str(e)})
    return results

# Compare top-tier models on the same task
results = compare_models(
    "Explain the CAP theorem in distributed systems",
    models=[
        "anthropic/claude-3.5-sonnet",          # Anthropic
        "openai/gpt-4o",                        # OpenAI
        "google/gemini-2.0-flash-001",          # Google
        "meta-llama/llama-3.1-70b-instruct",    # Meta (open-source)
    ],
)
for r in results:
    print(f"{r['model']}: {r.get('latency_ms', 'N/A')}ms, {r.get('tokens', 'N/A')} tokens")
Provider Strength Matrix
| Provider | Best For | Example Models | Price Range |
|---|---|---|---|
| Anthropic | Analysis, safety, long context | claude-3.5-sonnet, claude-3-haiku | $0.25-$15/1M |
| OpenAI | Code generation, tool calling | gpt-4o, gpt-4o-mini, o1 | $0.15-$60/1M |
| Google | Multimodal, huge context (1M) | gemini-2.0-flash-001, gemini-pro | $0.075-$7/1M |
| Meta | Budget tasks, self-hosting | llama-3.1-8b-instruct, llama-3.1-70b-instruct | $0.06-$0.90/1M |
| Mistral | European data residency, code | mistral-large, mixtral-8x7b | $0.24-$8/1M |
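One way to act on this matrix is a small task-to-model routing table. The sketch below is illustrative: the task categories and default picks are assumptions for this example, not OpenRouter features.
# Illustrative task-based routing built on the strength matrix above.
# Task names and default models are assumptions; tune them to your own evals.
TASK_MODEL_MAP = {
    "analysis": "anthropic/claude-3.5-sonnet",
    "code": "openai/gpt-4o",
    "multimodal": "google/gemini-2.0-flash-001",
    "budget": "meta-llama/llama-3.1-8b-instruct",
}

def model_for_task(task: str) -> str:
    """Return a default model for a task category, falling back to a cheap model."""
    return TASK_MODEL_MAP.get(task, "meta-llama/llama-3.1-8b-instruct")

print(model_for_task("analysis"))  # anthropic/claude-3.5-sonnet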
Provider-Specific Routing
# Force specific provider for a model
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=200,
    extra_body={
        "provider": {
            "order": ["Anthropic"],       # Direct to Anthropic
            "allow_fallbacks": False,     # Don't fall back to other providers
        },
    },
)

# Cross-provider fallback: if Anthropic is down, try via AWS Bedrock
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=200,
    extra_body={
        "provider": {
            "order": ["Anthropic", "AWS Bedrock"],
            "allow_fallbacks": True,
        },
    },
)
BYOK (Bring Your Own Key)
# Use your own provider API key through OpenRouter
# Configure BYOK in the OpenRouter dashboard:
# Settings > Integrations > Add Provider Key
# Benefits:
# - First 1M requests/month free via OpenRouter
# - After that, a 5% fee on top of the provider's normal cost (vs OpenRouter's standard markup)
# - Data flows directly to provider under your account
# - Useful for high-volume production workloads
# With BYOK configured, requests automatically use your provider key
response = client.chat.completions.create(
model="openai/gpt-4o", # Uses YOUR OpenAI key, routed through OpenRouter
messages=[{"role": "user", "content": "Hello"}],
max_tokens=200,
)
Feature Normalization
def normalized_completion(messages, model, **kwargs):
    """Handle provider-specific feature differences."""
    # JSON mode: OpenAI native, others via system prompt
    if kwargs.pop("json_mode", False):
        if model.startswith("openai/"):
            kwargs["response_format"] = {"type": "json_object"}
        else:
            # Prepend a JSON instruction for non-OpenAI models, keeping the
            # existing messages (including any system message) in their original order.
            messages = [{"role": "system", "content": "Respond in valid JSON only."}] + list(messages)
    return client.chat.completions.create(model=model, messages=messages, **kwargs)
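A quick usage sketch (the prompt and model choice are arbitrary examples; models without native JSON mode may still return invalid JSON, so validate before use):
import json

resp = normalized_completion(
    messages=[{"role": "user", "content": "List three distributed databases as JSON."}],
    model="meta-llama/llama-3.1-70b-instruct",
    json_mode=True,
    max_tokens=300,
)
raw = resp.choices[0].message.content
try:
    print(json.loads(raw))
except json.JSONDecodeError:
    # The system-prompt fallback is best-effort; retry or re-prompt on failure.
    print("Model returned non-JSON output:", raw[:200])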
Error Handling
| Error | Cause | Fix |
|---|---|---|
| Feature not supported | Provider lacks capability (e.g., tools on Llama) | Check model capabilities via /models; use fallback |
| Different response quality | Providers trained differently | Test critical prompts per model; adjust system prompts |
| Provider outage | Single provider down | Use provider.order with fallbacks across providers |
| BYOK auth failure | Provider key expired or invalid | Update provider key in OpenRouter dashboard |
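For the "feature not supported" case, a capability check against /models can gate requests before they fail. This sketch uses the requests library and assumes each model object exposes a supported_parameters list, as OpenRouter's model metadata documents; verify the field name against current docs.
import requests

def supports_parameter(model_id: str, parameter: str) -> bool:
    """Check whether a model advertises support for a parameter (e.g. 'tools')."""
    data = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()["data"]
    for m in data:
        if m["id"] == model_id:
            # supported_parameters is assumed per OpenRouter's model metadata.
            return parameter in m.get("supported_parameters", [])
    return False

if not supports_parameter("meta-llama/llama-3.1-70b-instruct", "tools"):
    print("Model does not advertise tool calling; route to a fallback model.")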
Enterprise Considerations
- OpenRouter normalizes the API, but models differ in output quality, feature support, and data policies
- Use provider.order plus allow_fallbacks: true for cross-provider resilience
- Test the same prompts across providers during evaluation; don't assume equal quality
- BYOK replaces OpenRouter's standard markup with a 5% fee, which pays off for high-volume workloads
- Route regulated data only to approved providers using allow_fallbacks: false
- Monitor which provider actually serves each request (response.model) for attribution; see the sketch below
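A minimal attribution sketch, reusing the client defined earlier; the helper name and log fields are illustrative, not part of OpenRouter's API:
import logging, time

logger = logging.getLogger("openrouter.attribution")

def tracked_completion(model: str, messages: list[dict], **kwargs):
    """Call OpenRouter and log which model/provider actually served the request."""
    start = time.monotonic()
    response = client.chat.completions.create(model=model, messages=messages, **kwargs)
    logger.info(
        "requested=%s served=%s latency_ms=%.1f total_tokens=%s",
        model,
        response.model,  # the model string OpenRouter echoes back for the serving provider
        (time.monotonic() - start) * 1000,
        response.usage.total_tokens if response.usage else None,
    )
    return response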