Claude Perplexity skill: 10 search-augmented LLM recipes
Ten search-augmented recipes — daily news brief, competitor scan, ticker fundamentals, claim verification, auto-citation report, cite-as-you-write, content moderation triage, streaming CLI, domain-filtered search, multi-step query chain — each as a single Claude prompt against the Perplexity Sonar API.
Already know what skills are? Skip to the cookbook. First time? Read the explainer then come back. Need the install? It’s on the /skills/perplexity page.

On this page · 20 sections
- What this skill does
- The cookbook
- Install + README
- 01 · Daily news brief with citations
- 02 · Competitor research filtered to last 30 days
- 03 · Stock-ticker fundamentals lookup
- 04 · Verify a claim against multiple sources
- 05 · Build a research report with auto-citations
- 06 · Cite-as-you-write (RAG-light alternative)
- 07 · Quick fact-check pipeline for content moderation
- 08 · Streaming answer in a CLI tool
- 09 · Domain-filtered search (only *.gov or *.edu)
- 10 · Multi-step query chain (broad → narrow → verify)
- Community signal
- The contrarian take
- Real pipelines shipped
- Gotchas
- Pairs well with
- FAQ
- Sources
What this skill actually does
Sixty seconds of context before the cookbook — what the Perplexity skill is, what Claude returns when you invoke it, and the one thing it does NOT do for you.
“Web search and research using Perplexity AI. Use when user says 'search', 'find', 'look up', 'ask', 'research', or 'what's the latest' for generic queries.”
— davila7 · perplexity SKILL.md · /skills/perplexity
What Claude returns
Wraps the Perplexity Sonar API (POST https://api.perplexity.ai/chat/completions) and exposes two routes inside Claude: perplexity_search returns ranked URLs with snippets (max_results defaults to 3, max_tokens_per_page to 512 to keep context lean), and perplexity_ask returns a synthesised conversational answer. Both surface the upstream choices[].message.content plus the citations[] URL array, so every claim is anchored. Models available: sonar, sonar-pro, sonar-reasoning-pro, sonar-deep-research. Filters supported: search_recency_filter (hour/day/week/month/year), search_domain_filter (allow- or deny-list), reasoning_effort, return_images, return_related_questions, and stream.
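Concretely, the whole surface reduces to one request shape. Here is a minimal sketch of a request body that exercises the filters listed above (the endpoint and field names come from the skill manifest; the values are illustrative, not recommendations):

import os, requests

# One illustrative body covering every filter the skill exposes.
body = {
    'model': 'sonar-pro',
    'messages': [{'role': 'user',
                  'content': 'What changed for agentic browsers this month?'}],
    'search_recency_filter': 'month',                  # hour / day / week / month / year
    'search_domain_filter': ['*.gov', '-reddit.com'],  # allow-list; '-' prefix denies
    'return_images': False,
    'return_related_questions': True,
    'stream': False,
}
resp = requests.post('https://api.perplexity.ai/chat/completions',
    headers={'Authorization': f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=body).json()
print(resp['choices'][0]['message']['content'])
print(resp['citations'])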
What it does NOT do
It does not provision a Perplexity API key — you still need PERPLEXITY_API_KEY in your shell and credit on the Sonar tier you call against ($5–$14 per 1K queries plus per-token costs) before any prompt will run; the skill is the routing layer, not the auth layer.
How you trigger it
- search for the latest on …
- find primary sources for …
- look up what's happened with … this week
Cost when idle
About 110 tokens of skill metadata stay loaded per turn. The full SKILL.md routing chain (Context7 → Graphite MCP → Nx MCP → Perplexity) only fires on a search-shaped trigger, so day-to-day chat cost is unchanged.
One naming note. The Perplexity skill in this catalog is the routing layer: it decides whether to call mcp__perplexity__perplexity_search (URL list), mcp__perplexity__perplexity_ask (synthesised answer), or hand off to a different tool entirely (Context7 for library docs, Graphite for gt, Nx for workspace questions). The cookbook below assumes you have a PERPLEXITY_API_KEY on hand and shows the underlying Sonar API surface that both routes wrap, so the recipes work whether you call them through Claude or directly from a script.
The cookbook
Each entry below is one Sonar pipeline you could ship today. They run in the order I’d teach them — the early ones (news brief, competitor scan, ticker lookup) lean on the base Sonar model, the middle ones (verification, deep research, cite-as-you-write) reach for sonar-pro and sonar-reasoning-pro, and the later ones (domain-filtered, CLI streaming, multi-step) compose the API primitives the way production teams actually do. Every entry pairs with one or two skills or MCP servers you already have on mcp.directory.
Install + README
If the skill isn’t on your machine yet, here’s the one-liner. The full install panel (Codex, Copilot, Antigravity variants) is on the skill page. You also need a PERPLEXITY_API_KEY from perplexity.ai/settings/api with credit on the Sonar billing tier before any of the cookbook prompts will run.
One-line install · by davila7
mkdir -p .claude/skills/perplexity && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2091" && unzip -o skill.zip -d .claude/skills/perplexity && rm skill.zip
Installs to .claude/skills/perplexity
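Before running any recipe, a five-second smoke test confirms the key is live and the billing tier has credit (a minimal sketch; any Sonar model works for the ping):

import os, requests

r = requests.post('https://api.perplexity.ai/chat/completions',
    headers={'Authorization': f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={'model': 'sonar', 'messages': [{'role': 'user', 'content': 'ping'}]})
print(r.status_code)  # 200 means the key and billing tier are good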
01 · Daily news brief with citations
A 7-bullet morning brief on a beat (e.g. 'AI agents this week') with every claim citation-anchored to a public URL.
For: Operators and analysts who open the same five tabs every morning at 8am.
The prompt
Use the perplexity skill. Call sonar-pro with search_recency_filter="day" and ask for a 7-bullet brief on the topic 'AI agents shipping this week'. Each bullet must end with a [n] marker tied to the citations[] array the API returns. Save the rendered markdown to ~/notes/daily-ai-agents-{today}.md and print the citation list at the bottom. Skip anything older than 24 hours.
What the call looks like
$ curl -s https://api.perplexity.ai/chat/completions \
-H "Authorization: Bearer $PERPLEXITY_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"sonar-pro",
"messages":[{"role":"user","content":"7-bullet brief on AI agents shipping this week, each bullet ends [n] tied to citations"}],
"search_recency_filter":"day"}' | jq '.choices[0].message.content, .citations'
→ choices[0].message.content: 7 bullets, each with a [1]–[7] marker
→ citations: ["https://...", ...] (one URL per marker)
One-line tweak
Switch search_recency_filter to "week" on Mondays and ask for 14 bullets — the same prompt produces a weekend digest with one round-trip.
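To make the Monday switch automatic, two lines suffice (assuming Python's convention that weekday() == 0 is Monday):

import datetime

monday = datetime.date.today().weekday() == 0
recency, bullets = ('week', 14) if monday else ('day', 7)  # weekend digest on Mondays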
02 · Competitor research filtered to last 30 days
A structured competitor scan — pricing changes, new features, hiring signals — over a fixed 30-day window.
For: PMMs and founders running monthly competitive standups instead of weekly Slack scrolls.
The prompt
Use the perplexity skill. For each competitor in @competitors.txt, call sonar-pro with search_recency_filter="month" and a structured prompt: 'pricing changes, new features shipped, hiring signals (open roles), funding events.' Return one row per competitor with explicit None markers when a category is empty. Save to docs/competitive-2026-04.md.
What the call looks like
import os, requests

KEY = os.environ['PERPLEXITY_API_KEY']
for c in open('competitors.txt').read().splitlines():
    r = requests.post('https://api.perplexity.ai/chat/completions',
        headers={'Authorization': f'Bearer {KEY}'},
        json={'model': 'sonar-pro',
              'messages': [{'role': 'user', 'content':
                  f'For {c}: pricing changes, new features, hiring, funding — last 30d. Mark None when empty.'}],
              'search_recency_filter': 'month'}).json()
    print(c, '|', r['choices'][0]['message']['content'])
One-line tweak
Add return_related_questions=true and you also get a 'questions a buyer would ask about this competitor' list — useful for the next competitive battlecard.
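Here is that tweak wired into the loop above. The request flag is documented; treat the related_questions response field name as an assumption to verify against your API version:

# Same request as above, plus the flag; then read the buyer questions back.
r = requests.post('https://api.perplexity.ai/chat/completions',
    headers={'Authorization': f'Bearer {KEY}'},
    json={'model': 'sonar-pro',
          'messages': [{'role': 'user', 'content':
              f'For {c}: pricing, features, hiring, funding — last 30d.'}],
          'search_recency_filter': 'month',
          'return_related_questions': True}).json()
for q in r.get('related_questions', []):  # field name is an assumption; check your response
    print('  ?', q)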
03 · Stock-ticker fundamentals lookup
Pull current price, last-quarter EPS, and three recent material headlines for a ticker, with sources.
For: Retail investors and finance interns who used to bounce between Yahoo Finance and a news tab.
The prompt
Use the perplexity skill. Call sonar with model='sonar' (not Pro — fundamentals are stable enough), prompt: 'For ticker {SYMBOL}: current price, last-quarter EPS surprise vs consensus, three material headlines from the past 7 days. Cite each fact.' Add search_domain_filter=["reuters.com","bloomberg.com","sec.gov","wsj.com"] so we only quote primary sources.
What the call looks like
$ curl -s https://api.perplexity.ai/chat/completions \
-H "Authorization: Bearer $PERPLEXITY_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"sonar",
"messages":[{"role":"user","content":
"For NVDA: current price, last-quarter EPS surprise, 3 material headlines past 7d. Cite each fact."}],
"search_domain_filter":["reuters.com","bloomberg.com","sec.gov","wsj.com"],
"search_recency_filter":"week"}'
→ message.content: price + EPS line + 3 headlines, [1]–[5] markers
→ citations: 5 URLs, all from the four allow-listed domains
One-line tweak
Drop the domain filter for crypto tickers — the SEC won't carry the news, and you'll get nothing back.
04 · Verify a claim against multiple sources
Take a single contested sentence (a headline, a tweet, a customer assertion) and produce a verdict — confirmed, contradicted, mixed — with the citation list behind the verdict.
For: Newsroom fact-checkers and content moderators tired of opening five tabs to confirm one sentence.
The prompt
Use the perplexity skill. Given the claim '{CLAIM}', call sonar-reasoning-pro with reasoning_effort='medium' and ask: 'Verify this claim. Return a JSON object with verdict (CONFIRMED, CONTRADICTED, MIXED, UNVERIFIABLE), confidence (0–1), and an evidence array of {source_url, supports_or_contradicts, quote} entries.' Gather at least three sources before returning a non-UNVERIFIABLE verdict.
What the call looks like
import os, json, requests

KEY = os.environ['PERPLEXITY_API_KEY']
claim = "Perplexity cut Sonar prices by 50% in March 2026."
r = requests.post('https://api.perplexity.ai/chat/completions',
    headers={'Authorization': f'Bearer {KEY}'},
    json={'model': 'sonar-reasoning-pro',
          'reasoning_effort': 'medium',
          'messages': [{'role': 'user', 'content':
              f'Verify: {claim!r}. Return JSON: verdict, confidence, evidence[].'}],
          'search_recency_filter': 'month'}).json()
print(json.loads(r['choices'][0]['message']['content']))
One-line tweak
Set reasoning_effort='high' on adversarial claims — the extra tokens are 4–6x cheaper than a wrong verdict downstream.
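One wrinkle with the reasoning models: the content string often arrives wrapped in a <think> trace or markdown fences, so a bare json.loads can throw. A best-effort extractor (a sketch, assuming your prompt asks for a single JSON object):

import json, re

def extract_json(content):
    # Strip a <think>...</think> trace, then grab the outermost {...} span.
    content = re.sub(r'<think>.*?</think>', '', content, flags=re.S)
    m = re.search(r'\{.*\}', content, flags=re.S)
    return json.loads(m.group(0)) if m else None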
05 · Build a research report with auto-citations
A single 1500-word report on a topic, with inline [n] markers and a working bibliography appended — produced by one prompt.
For: Analysts, students, and writers who currently spend more time formatting citations than writing.
The prompt
Use the perplexity skill. Call sonar-deep-research with the question 'Write a 1500-word report on the state of agentic browsers in 2026. Cover security, market entrants, real adoption, regulatory gestures.' Save the markdown to reports/agentic-browsers-2026.md and append the citations[] array as a bibliography. Verify every [n] marker in the body resolves to a URL.
What the call looks like
$ curl -s https://api.perplexity.ai/chat/completions \
-H "Authorization: Bearer $PERPLEXITY_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"sonar-deep-research",
"messages":[{"role":"user","content":
"1500-word report: state of agentic browsers 2026. Security, entrants, adoption, regulation."}]}'
→ choices[0].message.content: 1500w with [1]…[18] markers
→ citations: 18 URLs (1 book + 6 vendor blogs + 11 news pieces)
# cost: ~$2 per deep-research call (token-heavy + per-search fees)
One-line tweak
If the bibliography duplicates a domain three times, post-process with a one-shot 'consolidate the [n] markers' prompt — Sonar Deep Research over-cites by design.
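Verifying the markers is mechanical enough to script. A minimal checker, assuming the standard mapping where [n] in the prose points at citations[n-1]:

import re

def dangling_markers(body, citations):
    # Collect every [n] in the report; flag any without a citations[] entry.
    markers = {int(n) for n in re.findall(r'\[(\d+)\]', body)}
    return sorted(n for n in markers if not 1 <= n <= len(citations))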
06 · Cite-as-you-write (RAG-light alternative)
While drafting a longform piece, intercept every empirical claim with a one-line Sonar lookup and inject the citation inline — a lightweight alternative to a full RAG stack.
For: Writers and researchers whose 'lookup tax' kills momentum every paragraph.
The prompt
Use the perplexity skill. As I write, when I type a sentence ending with `??`, replace the `??` with the closest source URL the sonar model returns. Use model='sonar' (low latency), search_recency_filter='year', and only inject when confidence is high. Keep the original sentence intact; append the citation in markdown footnote syntax [^n].
What the call looks like
# Streaming hook around your editor (Neovim, Cursor, etc.)
# Trigger on save: scan the buffer for sentences ending '??'
import os, re, requests

KEY = os.environ['PERPLEXITY_API_KEY']
# `buffer` is the editor buffer text your save hook hands you
for i, sent in enumerate(re.findall(r'[^.]+\?\?', buffer), start=1):
    r = requests.post('https://api.perplexity.ai/chat/completions',
        headers={'Authorization': f'Bearer {KEY}'},
        json={'model': 'sonar',
              'messages': [{'role': 'user', 'content':
                  f'Closest authoritative source URL for this claim: {sent!r}'}]})
    cite = r.json()['citations'][0]
    buffer = buffer.replace(sent, sent.replace('??', f'[^{i}]') + f'\n[^{i}]: {cite}')
One-line tweak
Cap the lookup at one Sonar call per paragraph — sentence-level lookups burn $5 per 1K queries on a long draft.
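The cap is a few lines on top of the hook above (a sketch; lookup() is a hypothetical wrapper around the sonar call):

# One lookup per paragraph: cite the first flagged sentence, skip the rest.
for para in buffer.split('\n\n'):
    flagged = re.findall(r'[^.]+\?\?', para)
    if flagged:
        lookup(flagged[0])  # hypothetical wrapper around the sonar call above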
07 · Quick fact-check pipeline for content moderation
Triage a queue of user-generated assertions (forum posts, support tickets) and tag each with a Sonar-derived risk score before a human reviews.
For: Trust & safety teams handling more queue than review bandwidth.
The prompt
Use the perplexity skill. For each row in queue.csv, call sonar with: 'Is the following statement factually consistent with current public sources? Return one of {LIKELY_TRUE, LIKELY_FALSE, UNVERIFIABLE} and a confidence 0–1.' Append the verdict and the top citation URL back to the CSV. Rate-limit at 3 req/s (Sonar's free-tier ceiling).
What the call looks like
import csv, os, requests, time

KEY = os.environ['PERPLEXITY_API_KEY']
with open('queue.csv') as f, open('queue.out.csv', 'w') as g:
    rows = list(csv.DictReader(f))
    for r in rows:
        resp = requests.post('https://api.perplexity.ai/chat/completions',
            headers={'Authorization': f'Bearer {KEY}'},
            json={'model': 'sonar',
                  'messages': [{'role': 'user', 'content':
                      f'Factually consistent? {r["text"]!r} → {{verdict, confidence}}'}]}).json()
        r['verdict'] = resp['choices'][0]['message']['content']
        r['top_cite'] = (resp.get('citations') or [None])[0]
        time.sleep(0.34)  # ~3 req/s
    w = csv.DictWriter(g, fieldnames=rows[0].keys())
    w.writeheader()
    w.writerows(rows)
One-line tweak
Push UNVERIFIABLE and LIKELY_FALSE rows to the human queue first; auto-approve LIKELY_TRUE with confidence > 0.85 if your policy allows.
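A sketch of that routing over the pipeline's output rows, using plain string matching since the verdict column holds free text:

# Risky rows first: LIKELY_FALSE, then UNVERIFIABLE, then LIKELY_TRUE.
def risk(row):
    v = row['verdict']
    return 0 if 'LIKELY_FALSE' in v else 1 if 'UNVERIFIABLE' in v else 2

review_queue = sorted(rows, key=risk)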
08 · Streaming answer in a CLI tool
A `pq <question>` CLI that streams a Sonar-Pro answer to stdout token-by-token, then prints citations at the bottom — under 50 lines of Python.
For: Terminal-native users who want Perplexity without a browser tab.
The prompt
Use the perplexity skill. Build a `pq` CLI: takes a single question argument, calls sonar-pro with stream=true, prints content tokens to stdout as they arrive, then prints the citations[] array under a 'Sources:' header. Handle Ctrl-C cleanly. Save as ~/bin/pq and chmod +x.
What the call looks like
#!/usr/bin/env python3
import os, sys, json, requests

KEY = os.environ['PERPLEXITY_API_KEY']
q = ' '.join(sys.argv[1:])
r = requests.post('https://api.perplexity.ai/chat/completions',
    headers={'Authorization': f'Bearer {KEY}'},
    json={'model': 'sonar-pro', 'stream': True,
          'messages': [{'role': 'user', 'content': q}]}, stream=True)
cites = []
try:
    for line in r.iter_lines():
        if not line.startswith(b'data: ') or line == b'data: [DONE]':
            continue
        chunk = json.loads(line[6:])
        sys.stdout.write(chunk['choices'][0]['delta'].get('content', ''))
        sys.stdout.flush()
        cites = chunk.get('citations') or cites
except KeyboardInterrupt:
    pass  # Ctrl-C: stop streaming but still print the sources collected so far
print('\n\nSources:')
for c in cites:
    print(f'  - {c}')
One-line tweak
Pipe `pq` into `glow` or `bat -l md` and you have a live-rendered Markdown answer in the terminal — same single round-trip.
09 · Domain-filtered search (only *.gov or *.edu)
Force Sonar to only quote primary, authoritative sources — useful for legal, medical, or academic answers where Reddit and Medium are noise.
For: Lawyers, clinicians, researchers, and journalists who need defensible citations.
The prompt
Use the perplexity skill. Call sonar-pro with search_domain_filter=["*.gov","*.edu","who.int"] for medical questions, or ["*.gov","law.cornell.edu","justia.com","sec.gov"] for legal. If the resulting citations[] array has fewer than 3 entries, retry once without the filter and tag the answer LOW_AUTHORITY in the output.
What the call looks like
$ curl -s https://api.perplexity.ai/chat/completions \
-H "Authorization: Bearer $PERPLEXITY_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"sonar-pro",
"messages":[{"role":"user","content":
"Current FDA stance on at-home semaglutide compounding. Cite primary sources."}],
"search_domain_filter":["*.gov","*.edu","who.int"]}'
→ citations: 4 URLs, all on fda.gov / nih.gov / cdc.gov
→ message.content: synthesised answer, [1]–[4] markers tied to citations
One-line tweak
Prefix a domain with `-` to exclude it: `["-reddit.com", "-medium.com"]` keeps everything else in scope but drops the noise floor.
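The fewer-than-3 fallback from the prompt, as code. This assumes an ask_json() wrapper (hypothetical) that POSTs to chat/completions and returns the parsed response:

def authoritative_ask(question, domains):
    # ask_json(): hypothetical wrapper around the chat/completions POST above.
    resp = ask_json(question, search_domain_filter=domains)
    if len(resp.get('citations') or []) >= 3:
        return resp, 'OK'
    # Filter starved the citation list: retry open, but mark it.
    return ask_json(question), 'LOW_AUTHORITY'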
10 · Multi-step query chain (broad → narrow → verify)
Three Sonar calls in sequence: a broad scan, a focused follow-up on the strongest hit, and a verification pass that cross-checks the focused answer against a different model.
For: Anyone whose 'one-shot' Sonar answer keeps coming back surface-level.
The prompt
Use the perplexity skill. Step 1: call sonar with the broad question, return the top 5 themes. Step 2: pick the highest-confidence theme, call sonar-pro with a focused follow-up that quotes the theme verbatim. Step 3: call sonar-reasoning-pro with the focused answer plus 'Identify any factual claims and verify each independently. Flag disagreements.' Save the three transcripts as steps-1.md, steps-2.md, steps-3.md.
What the call looks like
# Step 1 — broad scan (model: sonar)
themes = ask('sonar', 'Top 5 themes in agentic browser security 2026. One per line.').splitlines()
# Step 2 — focused follow-up on themes[0] (model: sonar-pro)
deep = ask('sonar-pro', f'Focused on theme {themes[0]!r}: who has shipped, what broke?')
# Step 3 — verification pass (model: sonar-reasoning-pro, reasoning_effort=high)
verify = ask('sonar-reasoning-pro', f'Identify and verify each factual claim: {deep!r}',
             reasoning_effort='high')
open('steps-3.md', 'w').write(verify)
One-line tweak
If Step 3 surfaces a disagreement, rerun Step 2 with search_domain_filter narrowed to the disagreeing source's competitors — you usually find the missing context in one extra call.
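For completeness, the ask() helper the chain assumes. A minimal sketch, not the skill's internal implementation:

import os, requests

KEY = os.environ['PERPLEXITY_API_KEY']

def ask(model, prompt, **extra):
    # One user message in, answer text out; extras (reasoning_effort,
    # search_domain_filter, …) merge straight into the request body.
    r = requests.post('https://api.perplexity.ai/chat/completions',
        headers={'Authorization': f'Bearer {KEY}'},
        json={'model': model,
              'messages': [{'role': 'user', 'content': prompt}], **extra})
    r.raise_for_status()
    return r.json()['choices'][0]['message']['content']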
Community signal
Three voices from people running Sonar against real workloads. The first is the daily-driver endorsement, the second is the head-to-head pricing comparison every team eventually runs, and the third is the citations-as-verification framing that makes Perplexity defensible in regulated contexts.
“I'm increasingly impressed with Perplexity.ai—I'm using it on a daily basis now. It's by far the best implementation I've seen of LLM-assisted search.”
Simon Willison · Blog
Willison's January 2024 endorsement positioning Perplexity as the reference implementation for search-augmented LLMs. Subsequent products (OpenAI search, Gemini grounding) get benchmarked against this baseline.
“At $30 per 1k search queries, the OpenAI search API seems very expensive. Perplexity's Sonar model charges just $5 per thousand searches.”
tiniuclx · Hacker News
Top-voted pricing-comparison comment on the OpenAI agents launch HN thread. Captures the single most-cited reason developers picked Sonar over OpenAI's web search at parity quality.
“Perplexity.ai does search citations really well.”
nojvek · Hacker News
Endorsement on a thread about LLM search quality — nojvek mentions using Perplexity as a primary search engine for over six months, the same daily-driver pattern Willison reports.
The contrarian take
Not everyone treats the Sonar API as a 2025 launch story. The most cited pushback on the launch threads is from Havoc:
“They've had their sonar models on API since forever. Even the pricing looks same as always. … selection of models on their api used to be wider.”
Havoc · Hacker News
From the Sonar API launch HN thread.
Fair on the continuity point — Sonar wasn't a 2025 invention. But the cookbook is built on the API surface that exists today: the model split between sonar / sonar-pro / sonar-reasoning-pro / sonar-deep-research, the search_domain_filter and search_recency_filter primitives, and the citations[] array. That surface is now stable enough to build production pipelines on, even if the marketing names rotate.
One more alternative worth naming: there are several Perplexity MCP servers in the catalog — the official Perplexity, perplexity-mcp-server, perplexity-search, and perplexity-advanced. The trade-off is the usual skill-vs-MCP one: the skill is ~110 idle tokens, an MCP’s tool schemas load every turn. Pick the MCP only when multiple AI clients (Claude Code, Cursor, an internal agent) share one Perplexity billing project — otherwise stick with the skill in this cookbook. The skill also routes to Context7, Graphite, and Nx first when the query is library-shaped or workspace-shaped, which the bare MCP servers do not.
Real pipelines shipped on Sonar
Concrete examples from public reviews and the developer community. None of these used the Claude skill specifically — they’re here so you have a target shape in mind when you write the prompt.
- Simon Willison — daily-driver Perplexity user since 2024, the reference implementation he benchmarks every other AI-search tool against
- tiniuclx (HN) — explicit pricing comparison: $5/1K Sonar searches vs $30/1K OpenAI web-search API queries
- thenocodeguy (HN) — built a Deep Research agent on Sonar Pro APIs for under 30 cents per run
- mark_l_watson (HN) — combines Sonar with local Qwen / Ollama models for context-stuffing in coding workflows — multiple HN comments, search 'mark_l_watson perplexity'
- Perplexity API Cookbook — official examples: Fact Checker CLI, Daily Knowledge Bot, Disease Information App, Financial News Tracker, Academic Research Finder, Discord Bot
- bird0861 (HN) — Sonar API + Aider for fetching documentation and project planning
Gotchas (the four that bite)
Sourced from the launch HN thread, the Sonar pricing page, and verbatim community comments on citations and pagination.
There is no pagination on the search surface
As HN commenter n1xis10t flagged, Sonar's search API has no offset, cursor, or page=2. If you need broader coverage you re-prompt with a refinement (use case 10 chains three calls for this reason). Plan for a multi-call pattern, not a paginated loop.
Pricing is per-token AND per-search
Sonar bills both for tokens (input + output) and for the search request itself ($5–$14 per 1K queries depending on model and search context size). A 100-call cookbook run on sonar-pro is ~$1.50–$3 before any deep-research calls — set a budget alert before iterating.
Citations occasionally 404 or contradict the prose
On niche topics Sonar will sometimes synthesise a plausible-looking URL with a malformed slug. On contested claims it will return citations whose actual content disagrees with the [n] marker. Always programmatically check 200-status on citations[] and use sonar-reasoning-pro for adversarial queries.
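The 200-status check is a short loop. A sketch using HEAD with a GET fallback, since some hosts reject HEAD:

import requests

def dead_citations(citations):
    dead = []
    for url in citations:
        try:
            resp = requests.head(url, timeout=5, allow_redirects=True)
            if resp.status_code >= 400:
                resp = requests.get(url, timeout=5)  # some hosts 405 on HEAD
            if resp.status_code != 200:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead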
search_domain_filter silently drops results
If your filter is too narrow Sonar returns an answer with citations: [] rather than expanding scope. Always check that citations[] is non-empty; if it is, retry without the filter and tag the answer LOW_AUTHORITY (use case 9 has the exact pattern).
Pairs well with
Curated to match the cookbook’s actual integrations: the research-and-analysis skills the recipes hand off to (web-search, literature-review, competitive-analysis, research, deep-research, summarize) plus the Perplexity, Tavily, Brave, and Exa MCP servers the multi-source recipes lean on for cross-checks.
Two posts that compose well with this cookbook: What are Claude Code skills? covers the underlying mechanism, and Claude Code best practices covers the orchestration patterns the longer recipes (5, 6, 10) lean on.
Frequently asked questions
Is there a Perplexity MCP server I can use instead of the skill?
Yes — four of them. The mcp.directory catalog lists the official Perplexity server, perplexity-mcp-server, perplexity-search, and perplexity-advanced. The trade-off is the usual one: the perplexity skill costs ~110 idle tokens, while an MCP server's tool schemas load every turn. Reach for an MCP only when multiple AI clients (Claude Code, Cursor, an internal agent) need to share a billing project — otherwise the skill is the cheaper composition for solo work. The skill also routes to Context7 / Graphite / Nx first, which the MCP servers do not.
What is the difference between sonar, sonar-pro, sonar-reasoning-pro, and sonar-deep-research?
Four Sonar variants, four price-and-quality points. sonar is the fastest and cheapest at $1 per 1M tokens both directions (plus $5–$12 per 1K queries) — use it for daily news briefs and ticker lookups. sonar-pro is $3 in / $15 out per 1M (plus $6–$14 per 1K queries) — use it for the cookbook's research and verification recipes. sonar-reasoning-pro is $2 / $8 per 1M with a reasoning trace and reasoning_effort knob — use it for fact-checking and adversarial claims. sonar-deep-research is the same $2 / $8 but adds $5 per 1K searches and $3 per 1M reasoning tokens — use it once per long report, not in a loop.
How do citations work in the Sonar API response?
Every successful Sonar response has a top-level citations[] array of source URLs. The choices[0].message.content uses [1], [2], [3] markers that map by index to that array. So [1] in the prose is citations[0], and so on. The array is ordered by relevance, not by appearance — meaning citations[0] is the source the model leaned on most heavily, regardless of where it shows up in the markers. The newer search_results field gives you the same URLs plus snippet metadata, useful when you want to surface the source title alongside the URL.
Do I need a paid Perplexity account to run the skill?
Yes. Generate an API key at perplexity.ai/settings/api and add credit to the Sonar account. The skill expects PERPLEXITY_API_KEY in your shell. The free Perplexity consumer chatbot tier is unrelated — its credits do not transfer to the developer API. A typical 10-call cookbook run on sonar-pro is roughly $0.10–$0.30; a single sonar-deep-research call on a meaty topic can hit $2.
Can I filter Sonar to only quote .gov or .edu sources?
Yes — pass search_domain_filter as an array of domain patterns. Wildcards like *.gov, *.edu, and who.int work; prefix with `-` to exclude (`-reddit.com`). Use case 9 in this cookbook has the exact pattern. Sonar will silently fall back to fewer citations rather than violate the filter, so always check that citations[] is non-empty before trusting the answer; if it's empty, retry without the filter and tag the response LOW_AUTHORITY in your downstream.
Does Sonar's search API support pagination for more than the top results?
No. As HN commenter n1xis10t flagged, Sonar's search surface does not let you paginate — there is no offset, no cursor, no page=2. If you need broader coverage, you re-prompt with a refinement (e.g., 'beyond the top X, what else?') or split the query into narrower sub-queries and merge. This is the most-felt capability gap in the API and the reason the multi-step query chain in use case 10 is structured the way it is.
Why is 'perplexity api hallucinating citations' a real concern?
Two failure modes are worth naming. First, on niche topics with sparse coverage, Sonar will sometimes synthesise a plausible-looking URL that 404s when you click it — usually a malformed slug from a real domain. Second, on contested claims it will return citations that technically exist but disagree with the prose around them — the [n] marker says 'CONFIRMED' but the linked article says the opposite. Mitigations: always programmatically check that citations[] URLs return 200, and on adversarial queries promote to sonar-reasoning-pro with reasoning_effort='high' (use case 4).
Why is 'perplexity' as a bare term getting impressions but no clicks on this page?
If you typed just 'perplexity' you probably wanted the consumer chatbot at perplexity.ai — that page wins the bare-term query and always will. This page is for developers using Perplexity's Sonar API through a Claude skill. If that's what you want, the cookbook above is a 10-recipe tour. If you want the chatbot, close this tab and open perplexity.ai.
Sources
Primary
- davila7 perplexity SKILL.md (the skill manifest)
- Perplexity API reference: Chat Completions (Sonar)
- Sonar pricing (per-token and per-search)
- Perplexity blog: Introducing the Sonar Pro API
- Perplexity API Cookbook (official examples)
Community
- Simon Willison — Blog
- tiniuclx — Hacker News
- nojvek — Hacker News
- Zak — Hacker News
- Havoc — Hacker News
- n1xis10t — Hacker News