perplexity-observability
Set up comprehensive observability for Perplexity integrations with metrics, traces, and alerts. Use when implementing monitoring for Perplexity operations, setting up dashboards, or configuring alerting for Perplexity integration health. Trigger with phrases like "perplexity monitoring", "perplexity metrics", "perplexity observability", "monitor perplexity", "perplexity alerts", "perplexity tracing".
Install
mkdir -p .claude/skills/perplexity-observability && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4784" && unzip -o skill.zip -d .claude/skills/perplexity-observability && rm skill.zip

Installs to .claude/skills/perplexity-observability
About this skill
Perplexity Observability
Overview
Monitor Perplexity Sonar API performance, cost, and quality. Key signals unique to Perplexity: citation count per response (quality indicator), search latency variability (web search is non-deterministic), and per-model cost differences.
Key Metrics
| Metric | sonar (typical) | sonar-pro (typical) | Alert Threshold |
|---|---|---|---|
| Latency p50 | 1-2s | 3-5s | p95 > 15s |
| Citations/response | 3-5 | 5-10 | 0 for 10min |
| Error rate | <1% | <1% | >5% |
| Cost/query | $0.005 | $0.02 | >$0.10 |
Prerequisites
- Perplexity API integration running
- Metrics backend (Prometheus, Datadog, or custom)
- Alerting system configured
Instructions
Step 1: Instrument the Perplexity Client
import OpenAI from "openai";
interface SearchMetrics {
model: string;
latencyMs: number;
status: "success" | "error";
citationCount: number;
totalTokens: number;
cached: boolean;
errorCode?: number;
}
const metrics: SearchMetrics[] = [];
async function instrumentedSearch(
client: OpenAI,
query: string,
model: string = "sonar",
cached: boolean = false
): Promise<{ response: any; metrics: SearchMetrics }> {
const start = performance.now();
let searchMetrics: SearchMetrics;
try {
const response = await client.chat.completions.create({
model,
messages: [{ role: "user", content: query }],
});
searchMetrics = {
model,
latencyMs: performance.now() - start,
status: "success",
citationCount: (response as any).citations?.length || 0,
totalTokens: response.usage?.total_tokens || 0,
cached,
};
metrics.push(searchMetrics);
return { response, metrics: searchMetrics };
} catch (err: any) {
searchMetrics = {
model,
latencyMs: performance.now() - start,
status: "error",
citationCount: 0,
totalTokens: 0,
cached,
errorCode: err.status,
};
metrics.push(searchMetrics);
throw err;
}
}
Step 2: Prometheus Metrics Export
// Export metrics in Prometheus text exposition format
function prometheusMetrics(): string {
  const lines: string[] = [];

  // Latency histogram: emit cumulative buckets so the
  // perplexity_latency_ms_bucket series used by the alert rules exists
  lines.push("# HELP perplexity_latency_ms Search response latency in milliseconds");
  lines.push("# TYPE perplexity_latency_ms histogram");
  const bucketBounds = [500, 1000, 2000, 5000, 10000, 15000, 30000];
  for (const le of bucketBounds) {
    const count = metrics.filter((m) => m.latencyMs <= le).length;
    lines.push(`perplexity_latency_ms_bucket{le="${le}"} ${count}`);
  }
  lines.push(`perplexity_latency_ms_bucket{le="+Inf"} ${metrics.length}`);
  lines.push(`perplexity_latency_ms_sum ${metrics.reduce((s, m) => s + m.latencyMs, 0)}`);
  lines.push(`perplexity_latency_ms_count ${metrics.length}`);

  // Query counter by model and status ("|" separator can never collide
  // with a model name, unlike "_")
  const byModel = metrics.reduce((acc, m) => {
    const key = `${m.model}|${m.status}`;
    acc[key] = (acc[key] || 0) + 1;
    return acc;
  }, {} as Record<string, number>);
  for (const [key, count] of Object.entries(byModel)) {
    const [model, status] = key.split("|");
    lines.push(`perplexity_queries_total{model="${model}",status="${status}"} ${count}`);
  }

  // Citation gauge over the last 100 successful queries
  const recentCitations = metrics.slice(-100).filter((m) => m.status === "success");
  const avgCitations = recentCitations.reduce((s, m) => s + m.citationCount, 0) / Math.max(recentCitations.length, 1);
  lines.push(`perplexity_avg_citations ${avgCitations.toFixed(1)}`);

  // Token counter
  const totalTokens = metrics.reduce((s, m) => s + m.totalTokens, 0);
  lines.push(`perplexity_tokens_total ${totalTokens}`);

  return lines.join("\n");
}
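To make these metrics scrapeable, serve the output of `prometheusMetrics()` from an HTTP endpoint and point Prometheus at it. A minimal scrape-config sketch, assuming the endpoint is exposed at `/metrics` on port 3000 (both the path and the port are assumptions, not part of the code above):

```yaml
scrape_configs:
  - job_name: "perplexity"
    scrape_interval: 30s
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:3000"]
```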
Step 3: Citation Quality Scoring
function evaluateCitationQuality(citations: string[]): {
total: number;
authoritative: number;
qualityScore: number;
} {
const authoritativeTLDs = [".gov", ".edu"];
const authoritativeDomains = ["wikipedia.org", "arxiv.org", "nature.com", "science.org"];
let authoritative = 0;
for (const url of citations) {
const isAuth = authoritativeTLDs.some((tld) => url.includes(tld)) ||
authoritativeDomains.some((d) => url.includes(d));
if (isAuth) authoritative++;
}
return {
total: citations.length,
authoritative,
qualityScore: citations.length > 0 ? authoritative / citations.length : 0,
};
}
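A quick check of the scorer's behavior (the function is repeated in condensed form so this snippet runs standalone):

```typescript
// Condensed copy of evaluateCitationQuality from Step 3
function evaluateCitationQuality(citations: string[]) {
  const authoritativeTLDs = [".gov", ".edu"];
  const authoritativeDomains = ["wikipedia.org", "arxiv.org", "nature.com", "science.org"];
  const authoritative = citations.filter(
    (url) =>
      authoritativeTLDs.some((tld) => url.includes(tld)) ||
      authoritativeDomains.some((d) => url.includes(d))
  ).length;
  return {
    total: citations.length,
    authoritative,
    qualityScore: citations.length > 0 ? authoritative / citations.length : 0,
  };
}

const result = evaluateCitationQuality([
  "https://en.wikipedia.org/wiki/Observability", // matches wikipedia.org
  "https://www.nsf.gov/news/some-release",       // matches .gov
  "https://example.com/blog/post",               // not authoritative
]);
console.log(result); // { total: 3, authoritative: 2, qualityScore: 0.666... }
```

Note that the substring matching is deliberately loose: a URL under the `.education` gTLD would false-positive on `".edu"`. If that matters, parse the hostname with `new URL(url).hostname` and match against domain suffixes instead.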
Step 4: Cost Tracking
const COST_PER_MILLION_TOKENS: Record<string, { input: number; output: number }> = {
"sonar": { input: 1, output: 1 },
"sonar-pro": { input: 3, output: 15 },
"sonar-reasoning-pro": { input: 3, output: 15 },
"sonar-deep-research": { input: 2, output: 8 },
};
function estimateCost(model: string, usage: { prompt_tokens: number; completion_tokens: number }): number {
const rates = COST_PER_MILLION_TOKENS[model] || COST_PER_MILLION_TOKENS["sonar"];
return (usage.prompt_tokens * rates.input + usage.completion_tokens * rates.output) / 1_000_000;
}
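A sanity check of the cost math (rate table and function repeated so the snippet is standalone; the per-million rates are the illustrative figures from the table above and should be verified against current Perplexity pricing):

```typescript
// Illustrative rates in USD per million tokens (verify against current pricing)
const COST_PER_MILLION_TOKENS: Record<string, { input: number; output: number }> = {
  "sonar": { input: 1, output: 1 },
  "sonar-pro": { input: 3, output: 15 },
};

function estimateCost(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number }
): number {
  // Unknown models fall back to the cheapest rate, so this may underestimate
  const rates = COST_PER_MILLION_TOKENS[model] || COST_PER_MILLION_TOKENS["sonar"];
  return (usage.prompt_tokens * rates.input + usage.completion_tokens * rates.output) / 1_000_000;
}

// 1,000 prompt + 500 completion tokens on sonar-pro:
// (1000 * 3 + 500 * 15) / 1e6 = 0.0105
console.log(estimateCost("sonar-pro", { prompt_tokens: 1000, completion_tokens: 500 })); // 0.0105
```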
Step 5: Alert Rules (Prometheus/Alertmanager)
groups:
  - name: perplexity
    rules:
      - alert: PerplexityHighLatency
        expr: histogram_quantile(0.95, rate(perplexity_latency_ms_bucket[5m])) > 15000
        for: 5m
        annotations:
          summary: "Perplexity P95 latency exceeds 15 seconds"
      - alert: PerplexityNoCitations
        expr: perplexity_avg_citations == 0
        for: 10m
        annotations:
          summary: "Perplexity returning responses with zero citations"
      - alert: PerplexityHighErrorRate
        expr: rate(perplexity_queries_total{status="error"}[5m]) / rate(perplexity_queries_total[5m]) > 0.05
        for: 5m
        annotations:
          summary: "Perplexity API error rate exceeds 5%"
      - alert: PerplexityCostSpike
        expr: increase(perplexity_tokens_total[1h]) > 1000000
        annotations:
          summary: "Perplexity token usage spike (>1M tokens/hour)"
Dashboard Panels
Track these metrics on your dashboard:
- Query latency by model (sonar vs sonar-pro histogram)
- Citations per response distribution
- Query volume over time (by model)
- Cost per query trend
- Error rate by status code (429 vs 500)
- Cache hit rate
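The cache hit rate panel can be fed from the same in-memory metrics array built in Step 1. A minimal sketch (the window of 100 recent queries is an arbitrary choice):

```typescript
interface CacheSample {
  cached: boolean;
}

// Hit rate over the most recent `windowSize` queries; 0 when there is no data
function cacheHitRate(samples: CacheSample[], windowSize = 100): number {
  const recent = samples.slice(-windowSize);
  if (recent.length === 0) return 0;
  return recent.filter((s) => s.cached).length / recent.length;
}

const samples = [
  { cached: true },
  { cached: true },
  { cached: false },
  { cached: true },
];
console.log(cacheHitRate(samples)); // 0.75
```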
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| High latency on sonar-pro | Complex multi-source search | Expected; use sonar for simple queries |
| Zero citations alert | Vague queries or API issue | Review query patterns |
| Cost spike | Burst of sonar-pro queries | Check for runaway batch jobs |
| Error rate elevated | Rate limiting or API issue | Check for 429s in error breakdown |
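To distinguish rate limiting (429) from server errors (5xx) in the error-rate panel, group failed queries by `errorCode`. A sketch against the `SearchMetrics` shape from Step 1:

```typescript
interface ErrorSample {
  status: "success" | "error";
  errorCode?: number;
}

// Count errors per HTTP status code; errors without a code fall under "unknown"
function errorBreakdown(samples: ErrorSample[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const s of samples) {
    if (s.status !== "error") continue;
    const key = s.errorCode != null ? String(s.errorCode) : "unknown";
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

const breakdown = errorBreakdown([
  { status: "success" },
  { status: "error", errorCode: 429 },
  { status: "error", errorCode: 429 },
  { status: "error", errorCode: 500 },
  { status: "error" },
]);
console.log(breakdown); // { "429": 2, "500": 1, unknown: 1 }
```

A spike dominated by 429s points at rate limiting (back off or shed load); 5xx dominance points at an upstream API issue.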
Output
- Instrumented Perplexity client with latency/error/citation tracking
- Prometheus metrics export endpoint
- Citation quality scoring
- Cost estimation per query
- Alert rules for latency, errors, and cost
Next Steps
For incident response, see perplexity-incident-runbook.