# openrouter-model-availability

Check model availability and implement fallback chains. Use when building resilient systems or handling model outages. Trigger with phrases like 'openrouter availability', 'openrouter fallback', 'openrouter model down', 'openrouter health check'.
## Install

```sh
mkdir -p .claude/skills/openrouter-model-availability && \
  curl -L -o skill.zip "https://mcp.directory/api/skills/download/8568" && \
  unzip -o skill.zip -d .claude/skills/openrouter-model-availability && \
  rm skill.zip
```

Installs to `.claude/skills/openrouter-model-availability`.
## Overview

OpenRouter's `/api/v1/models` endpoint is the source of truth for model availability. Models can be temporarily unavailable, have degraded performance, or be permanently removed. This skill covers querying model status, building health probes, tracking availability over time, and automating failover.
## Query Model Status

```sh
# Check if specific models exist and their status
curl -s https://openrouter.ai/api/v1/models | jq '[.data[] | select(
  .id == "anthropic/claude-3.5-sonnet" or
  .id == "openai/gpt-4o" or
  .id == "openai/gpt-4o-mini"
) | {
  id,
  context_length,
  prompt_per_M: ((.pricing.prompt | tonumber) * 1000000),
  completion_per_M: ((.pricing.completion | tonumber) * 1000000)
}]'

# List all available models (just IDs)
curl -s https://openrouter.ai/api/v1/models | jq '[.data[].id] | sort'

# Count models by provider
curl -s https://openrouter.ai/api/v1/models | jq '[.data[].id | split("/")[0]] | group_by(.) | map({provider: .[0], count: length}) | sort_by(-.count)'
```
## Health Check Service

```python
import os, time, logging
from datetime import datetime, timezone
from dataclasses import dataclass

import requests
from openai import OpenAI, APIError, APITimeoutError

log = logging.getLogger("openrouter.health")

@dataclass
class HealthStatus:
    model: str
    available: bool
    latency_ms: float
    checked_at: str
    error: str = ""

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    timeout=15.0,
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "health-check"},
)

def probe_model(model_id: str) -> HealthStatus:
    """Send a minimal request to test model availability."""
    start = time.monotonic()
    try:
        client.chat.completions.create(
            model=model_id,
            messages=[{"role": "user", "content": "hi"}],
            max_tokens=1,  # Minimal cost
        )
        latency = (time.monotonic() - start) * 1000
        return HealthStatus(
            model=model_id, available=True, latency_ms=round(latency, 1),
            checked_at=datetime.now(timezone.utc).isoformat(),
        )
    except (APIError, APITimeoutError) as e:
        latency = (time.monotonic() - start) * 1000
        return HealthStatus(
            model=model_id, available=False, latency_ms=round(latency, 1),
            checked_at=datetime.now(timezone.utc).isoformat(),
            error=str(e),
        )

def check_critical_models() -> list[HealthStatus]:
    """Probe all critical models."""
    CRITICAL_MODELS = [
        "anthropic/claude-3.5-sonnet",
        "openai/gpt-4o",
        "openai/gpt-4o-mini",
        "google/gemini-2.0-flash-001",
    ]
    results = []
    for model in CRITICAL_MODELS:
        status = probe_model(model)
        log.info(f"{'OK' if status.available else 'FAIL'} {model} ({status.latency_ms}ms)")
        results.append(status)
    return results
```
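Probe results are most useful when they feed an actual fallback chain. Below is a minimal client-side sketch (not from the original skill): try each model in preference order and return the first success. The `call_model` callable is an injected stand-in for `client.chat.completions.create`, and the chain contents are illustrative; in production you would catch `APIError`/`APITimeoutError` specifically rather than bare `Exception`.

```python
FALLBACK_CHAIN = [
    "anthropic/claude-3.5-sonnet",  # primary
    "openai/gpt-4o",                # first fallback
    "openai/gpt-4o-mini",           # cheap last resort
]

class AllModelsDownError(Exception):
    """Raised when every model in the chain fails."""

def complete_with_fallback(call_model, messages, chain=FALLBACK_CHAIN):
    """Try each model in order; return (model_id, response) from the first success."""
    errors = {}
    for model_id in chain:
        try:
            return model_id, call_model(model_id, messages)
        except Exception as e:  # narrow to APIError/APITimeoutError in real code
            errors[model_id] = str(e)
    raise AllModelsDownError(f"all models failed: {errors}")
```

Injecting `call_model` keeps the chain logic testable without network access; wire it to the OpenAI client in production.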
## Catalog-Based Availability Check

```python
def check_model_exists(model_id: str) -> dict:
    """Check if a model exists in the catalog (no inference cost)."""
    resp = requests.get("https://openrouter.ai/api/v1/models", timeout=10)
    resp.raise_for_status()
    models = {m["id"]: m for m in resp.json()["data"]}
    if model_id in models:
        m = models[model_id]
        return {
            "exists": True,
            "context_length": m["context_length"],
            "pricing": m["pricing"],
        }
    return {"exists": False, "suggestion": find_similar(model_id, models)}

def find_similar(model_id: str, models: dict) -> list[str]:
    """Find models from the same provider (for migration when a model is removed)."""
    prefix = model_id.split("/")[0]
    return [m for m in models if m.startswith(prefix)][:5]
```
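The Enterprise Considerations below recommend caching the catalog instead of fetching it per request. Here is one way that could look, as a sketch: a small TTL cache with an injected `fetch` callable (an assumption made so the cache is testable without network access) that serves stale data if a refresh fails. The 300-second default matches the 5-minute refresh suggested later.

```python
import time

class CatalogCache:
    """TTL cache for the /api/v1/models catalog."""

    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch          # callable returning {model_id: model_dict}
        self._ttl = ttl_seconds
        self._data: dict = {}
        self._fetched_at = float("-inf")

    def models(self) -> dict:
        now = time.monotonic()
        if now - self._fetched_at > self._ttl:
            try:
                self._data = self._fetch()
                self._fetched_at = now
            except Exception:
                # Serve stale data rather than failing hard on a refresh error
                if not self._data:
                    raise
        return self._data

    def exists(self, model_id: str) -> bool:
        return model_id in self.models()
```

In production, `fetch` would wrap the `requests.get(".../api/v1/models")` call from `check_model_exists` above.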
## Availability Monitoring Script

```sh
#!/bin/bash
# Run as a cron job: */5 * * * * /path/to/check_models.sh
MODELS=("anthropic/claude-3.5-sonnet" "openai/gpt-4o" "openai/gpt-4o-mini")
LOG_FILE="/var/log/openrouter-health.log"

for MODEL in "${MODELS[@]}"; do
  START=$(date +%s%N)
  HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
    https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\":\"$MODEL\",\"messages\":[{\"role\":\"user\",\"content\":\"ping\"}],\"max_tokens\":1}" \
    --max-time 15)
  END=$(date +%s%N)
  LATENCY=$(( (END - START) / 1000000 ))
  STATUS="OK"
  [ "$HTTP_CODE" != "200" ] && STATUS="FAIL"
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $STATUS $MODEL $HTTP_CODE ${LATENCY}ms" >> "$LOG_FILE"
done
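The log lines written by this script have a fixed shape (`<timestamp> <OK|FAIL> <model> <http_code> <latency>ms`), so summarizing them is straightforward. A hedged sketch, assuming that exact five-field format:

```python
from collections import Counter

def summarize(log_lines):
    """Count OK/FAIL probes per model from health-log lines."""
    ok, fail = Counter(), Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 5:
            continue  # skip malformed lines
        _, status, model, _, _ = parts
        (ok if status == "OK" else fail)[model] += 1
    return {m: {"ok": ok[m], "fail": fail[m]} for m in set(ok) | set(fail)}
```

Feed it `open("/var/log/openrouter-health.log")` to get per-model availability counts for a dashboard or alert threshold.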
## Error Handling

| Error | Cause | Fix |
|---|---|---|
| Model not in catalog | Model renamed or removed | Use `find_similar()` to find a replacement |
| Health check timeout (>15s) | Model overloaded or cold-starting | Distinguish slow vs. down; increase timeout for probes |
| False positive "down" | Transient network issue | Require 2-3 consecutive failures before alerting |
| 402 on health check | Credits exhausted | Health checks cost ~$0.0001 each; ensure adequate credits |
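The "require 2-3 consecutive failures" rule from the table above can be sketched as a small debouncer. The threshold of 3 is an assumption; tune it to your probe interval.

```python
from collections import defaultdict

class FailureDebouncer:
    """Mark a model down only after N consecutive failed probes."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self._streaks = defaultdict(int)  # model_id -> consecutive failures

    def record(self, model_id: str, available: bool) -> bool:
        """Record a probe result; return True only when the model should be marked down."""
        if available:
            self._streaks[model_id] = 0  # any success resets the streak
            return False
        self._streaks[model_id] += 1
        return self._streaks[model_id] >= self.threshold
```

Call `record()` after each `probe_model()` result and alert only when it returns `True`; a lone transient failure then never pages anyone.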
## Enterprise Considerations

- Health probes cost tokens (~$0.0001 or less per probe with `max_tokens: 1`); budget for monitoring
- Require 2-3 consecutive failures before marking a model as down to avoid false positives
- Cache the models list and refresh every 5 minutes; don't hit `/api/v1/models` on every request
- Subscribe to OpenRouter announcements for model deprecations and new additions
- Maintain a model alias map so your code uses logical names (e.g., "primary-chat") that you can remap
- Alert when critical models disappear from the catalog, not just when they fail probes
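The alias-map idea above can be as simple as one dictionary: application code asks for a logical role, and a single mapping decides which concrete OpenRouter model that resolves to. The role names and mapping below are illustrative assumptions.

```python
MODEL_ALIASES = {
    "primary-chat": "anthropic/claude-3.5-sonnet",
    "fast-chat": "openai/gpt-4o-mini",
    "long-context": "google/gemini-2.0-flash-001",
}

def resolve_model(role: str) -> str:
    """Map a logical role to a concrete model id; fail loudly on unknown roles."""
    try:
        return MODEL_ALIASES[role]
    except KeyError:
        raise KeyError(f"unknown model role: {role!r}; known: {sorted(MODEL_ALIASES)}")
```

When a model is deprecated, you edit `MODEL_ALIASES` (or load it from config) in one place instead of hunting model ids across the codebase.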