DuckDuckGo MCP Server: Web Search from Claude (2026 Guide)
The DuckDuckGo MCP server gives an AI agent two things every assistant pretends to already have: a live web search, and the ability to actually read the pages it found. No API key, no signup, free under MIT. This guide covers what it does, how to install it in every common client, the two tool methods, DuckDuckGo’s bang-operator shortcuts, six real recipes, and the rate limits worth knowing before you wire it into an agent loop.

TL;DR + what you actually need
The four pieces of context you’ll keep coming back to:
- Canonical repo: nickclyde/duckduckgo-mcp-server on GitHub. MIT-licensed, Python, the implementation most install commands across the ecosystem point at.
- Package: duckduckgo-mcp-server on PyPI. Install with `uv pip install duckduckgo-mcp-server` or run it transient via `uvx duckduckgo-mcp-server`.
- API key: none. DuckDuckGo’s HTML endpoint is keyless and the MCP server doesn’t add an auth layer on top.
- Tools: two — `search` and `fetch_content`. Search returns Markdown result snippets; fetch pulls the actual page body. They compose: search, pick a link, fetch.
The fast install paths, since most readers come here for the one-liner:
- Claude Code: `claude mcp add duckduckgo -- uvx duckduckgo-mcp-server`
- Cursor: paste a stdio block pointing at `uvx duckduckgo-mcp-server` into `~/.cursor/mcp.json` — the install panel below emits the exact JSON.
- OpenCode: add a `type: "local"` entry pointing at the same command; snippet in the OpenCode section.
- Claude Desktop: add to `claude_desktop_config.json` under `mcpServers` — snippet below.
The rest of this guide explains how the two tools actually behave, when bang operators help, six recipes we run regularly, the rate-limit thresholds the server imposes client-side, and the failure modes worth catching in your agent loop.
What the DuckDuckGo MCP server actually does
A modern coding agent is impressive until you ask it anything time-sensitive. “What’s the current React 19 minor version?” “Is this CVE patched in Postgres 16.4?” “What’s the latest pricing for Anthropic tokens?” — those are questions the model can’t answer from its training data, and yet they’re the ones we ask constantly. The DuckDuckGo MCP server closes that gap with a two-tool surface: search finds the page, fetch_content reads it. The agent stops guessing.
A few traits make this server worth picking over Google-backed alternatives. The DuckDuckGo HTML endpoint is free and keyless — no Programmable Search Engine setup, no Custom Search API quota, no Bing key ceremony. The privacy posture matches: DuckDuckGo doesn’t personalise results based on your IP or a session cookie, so the agent gets a clean, reproducible response. That matters more than people give it credit for when an agent is supposed to produce the same answer twice in a row.
The MCP server itself is small and unopinionated. It’s a Python wrapper that calls the DuckDuckGo HTML endpoint, parses the result block into Markdown, and exposes a content-fetch helper that pulls the chosen URL through httpx by default (with an optional curl backend for sites that detect stripped-down HTTP clients). The whole thing fits in a couple hundred lines of code, runs as a stdio subprocess of your MCP client, and ships under MIT. That’s the whole architecture.
The canonical implementation lives at github.com/nickclyde/duckduckgo-mcp-server — maintained by Nick Clyde, distributed on PyPI as duckduckgo-mcp-server. If you stumbled across the autocomplete suggestion modelcontextprotocol/server-duckduckgo, that path doesn’t resolve to an official Anthropic-maintained repo — the community settled on the nickclyde build, and that’s the one this guide and our catalog page both point at.
One more piece of context worth understanding before you wire this up. Web-search MCP servers fall into two camps. The first camp wraps a paid search API — Brave Search, Tavily, Bing, Exa — and trades a per-query cost for better ranking, freshness guarantees, and clean JSON output. The second camp scrapes a public search front-end — DuckDuckGo’s HTML page, Google’s SERP — and gives you a keyless, free-forever search at the cost of slightly noisier results and an occasional parser break when the upstream HTML changes. The DuckDuckGo MCP server sits firmly in camp two, and that’s the right pick for the majority of solo-developer and small-team use cases. The moment you’re shipping an agent that handles a few thousand searches a day for paying customers, you graduate to a paid backend; until then, the keyless tier is the friction-free starting point.
How it works under the hood
Most MCP servers wrap an existing API. DuckDuckGo doesn’t have a clean JSON search API for the public — they have a paid Instant Answers feed and the HTML search results page that everyone screen-scrapes. The MCP server takes the second route: it issues an HTTP GET to html.duckduckgo.com, parses the result block out of the response, and returns Markdown-formatted entries with title, snippet, and URL.
For fetch_content, the server uses a second HTTP client to pull the chosen URL. The default backend is httpx; you can flip it to curl when sites refuse the Python-shaped User-Agent (Cloudflare and some news outlets do this). The output is the parsed page text with HTML stripped down to something the model can read without burning a thousand tokens on <svg> markup.
A few practical consequences of the screen-scrape-the-HTML approach:
- It can break. When DuckDuckGo reshuffles the result block markup, the parser needs an update. The repo gets a fix within a few days when this happens — version bumps land on PyPI regularly.
- It’s rate-limited. The server caps client-side at 30 searches and 20 content fetches per minute. If you go past that, it queues — your agent doesn’t get errors, it just slows down.
- It returns Markdown. Each search result is a Markdown list item with the page title as a heading, snippet as body, URL as a link. The model parses that natively without a structured-output schema.
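The Markdown shaping step is simple enough to sketch. This is an illustrative helper, not the server’s actual code: it assumes results have already been parsed out of the DuckDuckGo HTML into (title, url, snippet) tuples and renders them in the shape the search tool returns.

```python
# Illustrative sketch of the result-to-Markdown step -- NOT the server's
# actual implementation. Assumes results are already parsed into
# (title, url, snippet) tuples from the DuckDuckGo HTML response.

def results_to_markdown(results: list[tuple[str, str, str]]) -> str:
    """Render (title, url, snippet) tuples as the Markdown the search tool emits."""
    lines = ["# Search Results"]
    for title, url, snippet in results:
        lines.append(f"## {title}")          # page title as a heading
        lines.append(f"- URL: {url}")        # link the agent can fetch next
        lines.append(f"- Snippet: {snippet}")  # body text for ranking
    return "\n".join(lines)

md = results_to_markdown([
    ("React 19 release notes", "https://github.com/facebook/react/releases",
     "the hook formerly known as useFormState is now useActionState."),
])
```

The flat heading-plus-list shape is the point: the model reads it natively, with no structured-output schema to negotiate.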
The stdio transport means the server runs as a subprocess of your MCP client. Your client manages its lifecycle — start, restart, kill — so you never ssh in to manage a daemon. The flip side is that every client that uses DuckDuckGo runs its own copy of the process. Fine for single-developer use; less fine if you’re building a shared backend (in which case the server’s --transport sse and --transport streamable-http flags let you host it once and share the connection).
Install (every client)
One package, one prereq, one command shape. The prereq is Python 3.10+. The package is duckduckgo-mcp-server on PyPI. The command shape every modern client wires up is uvx duckduckgo-mcp-server — that runs the server transient via uv, downloading and caching the package on first run. If you prefer pip: pip install duckduckgo-mcp-server and then point your client at the duckduckgo-mcp-server entry-point directly.
For the standard clients — Cursor, Claude Code, Claude Desktop, VS Code, Codex CLI, Windsurf — the install panel below has the exact one-click button, copy command, or JSON snippet you need. Tap the row for your client, copy the snippet, and DuckDuckGo search is live in that client within seconds. The panel pulls its configs directly from our catalog so it stays in sync as the package evolves.
The Claude Code one-liner is the most copy-pasted command in our GSC logs for this server:
```bash
claude mcp add duckduckgo -- uvx duckduckgo-mcp-server
```

Add `--scope project` if you want it written to `.mcp.json` at the repo root so it travels with the project (every collaborator gets the same tool list). For Cursor, the JSON shape in `~/.cursor/mcp.json` is:

```json
{
  "mcpServers": {
    "duckduckgo": {
      "command": "uvx",
      "args": ["duckduckgo-mcp-server"]
    }
  }
}
```

Restart Cursor, look at the MCP tool list in settings, and `search` / `fetch_content` appear.
Add DuckDuckGo to OpenCode
OpenCode (the open-source TUI agent) reads MCP servers from a JSON config — commonly ~/.config/opencode/config.json. Run opencode config print to confirm the path on your install. Add the local stdio shape:
```json
{
  "mcp": {
    "duckduckgo": {
      "type": "local",
      "command": ["uvx", "duckduckgo-mcp-server"]
    }
  }
}
```

Restart OpenCode (`opencode` in a new shell) and the two tools show up in the model’s tool list. If you’re running OpenCode in a sandbox without internet, the server fails fast — it needs outbound HTTPS to reach DuckDuckGo.
Add DuckDuckGo to Claude Desktop manually
Claude Desktop reads its MCP servers from ~/Library/Application Support/Claude/claude_desktop_config.json on macOS and the equivalent %APPDATA%\Claude\ path on Windows. Open that file (create it if it doesn’t exist) and add:
```json
{
  "mcpServers": {
    "duckduckgo": {
      "command": "uvx",
      "args": ["duckduckgo-mcp-server"]
    }
  }
}
```

Quit and reopen Claude Desktop fully — the MCP server list is loaded at startup. The tool count badge near the prompt should go up by two.
The reason we recommend uvx over a global pip install across every client config above: isolation. uvx runs the package in an ephemeral environment so a future Python upgrade doesn’t break the install, and a future duckduckgo-mcp-server version bump doesn’t require manual pip install --upgrade ceremony. The trade-off is the first-call cold start while uvx resolves and caches the package — usually under 10 seconds, after which subsequent invocations hit the cache. If your environment forbids package-on-demand fetching (locked-down corporate machines, air-gapped CI), pre-install the package globally and point command at the resolved binary path instead.
Browse every client and its config path at mcp.directory/clients.
The two tool methods, walked through
1. search
The everyday tool. Takes a query string and returns Markdown-formatted DuckDuckGo results. Typical agent input:
```
search({
  query: "react 19 useFormState renamed",
  max_results: 10
})
```

Returns something like:

```markdown
# Search Results

## React 19 — useFormState renamed to useActionState
- URL: https://react.dev/blog/2024/12/05/react-19
- Snippet: useFormState has been renamed to useActionState to better
  reflect its role beyond forms...

## React 19 release notes
- URL: https://github.com/facebook/react/releases
- Snippet: ...the hook formerly known as useFormState is now
  useActionState. Migration guide below...
```

The parameters worth knowing:
- `query` — required. Anything DuckDuckGo accepts, including bang operators (`!gh react`, `!w einstein`) and field syntax (`site:github.com`, `filetype:pdf`).
- `max_results` — optional, default 10. Set higher (up to ~30 returns useful results) when the agent needs a wide net.
- `region` — optional, overrides the default region. Format is a DuckDuckGo region code like `us-en` or `de-de`. Useful for locale-specific queries (pricing pages, regional news).
The output is intentionally compact. Each result is ~150 tokens, so ten results fit in roughly 1.5k tokens — cheap to feed into the model’s next reasoning step.
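If you want structured entries on the client side, the compact Markdown shape is easy to re-parse. A minimal sketch, assuming the heading-and-list layout shown above (this parser is a hypothetical helper, not part of the server):

```python
# Minimal parser for the search tool's Markdown output shape --
# an illustrative client-side helper, not part of the MCP server.

def parse_results(markdown: str) -> list[dict]:
    entries: list[dict] = []
    current = None
    for line in markdown.splitlines():
        if line.startswith("## "):
            # each "## " heading opens a new result entry
            current = {"title": line[3:].strip(), "url": "", "snippet": ""}
            entries.append(current)
        elif current and line.strip().startswith("- URL:"):
            current["url"] = line.split("- URL:", 1)[1].strip()
        elif current and line.strip().startswith("- Snippet:"):
            current["snippet"] = line.split("- Snippet:", 1)[1].strip()
    return entries

sample = """# Search Results
## React 19 release notes
- URL: https://github.com/facebook/react/releases
- Snippet: useFormState is now useActionState."""

parsed = parse_results(sample)
```

Most agent loops skip this entirely and let the model read the Markdown directly; the parse is only worth it when you need the URLs programmatically, e.g. for a fetch queue.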
2. fetch_content
The companion tool. Takes a URL and returns the parsed page body. Typical input:
```
fetch_content({
  url: "https://react.dev/blog/2024/12/05/react-19",
  start_index: 0,
  max_length: 8000
})
```

Parameters:
- `url` — required.
- `start_index` — optional, default 0. For long pages, call `fetch_content` multiple times with increasing `start_index` values to paginate through the body.
- `max_length` — optional, default 8000 characters. Larger pages get truncated and the agent picks up the rest with another call.
- `backend` — optional override: `httpx` (default), `curl` (better against Cloudflare / bot-detection), or `auto` (the server picks).
The page comes back as plain text with HTML stripped down. Images become alt-text placeholders, code blocks are preserved, navigation chrome is removed. Good enough for the model to summarise, quote, or extract structured fields.
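The `start_index` / `max_length` pair supports a straightforward pagination loop. A hedged sketch, with a stubbed `fetch_content` standing in for the real MCP tool call (the real tool goes through your client, not a Python function):

```python
# Pagination sketch over start_index / max_length -- the fetch_content
# stub below stands in for the real MCP tool call.
PAGE = "x" * 20000  # pretend page body, longer than one fetch window

def fetch_content(url: str, start_index: int = 0, max_length: int = 8000) -> str:
    """Stub: return at most max_length characters starting at start_index."""
    return PAGE[start_index:start_index + max_length]

def fetch_all(url: str, max_length: int = 8000) -> str:
    chunks, start = [], 0
    while True:
        chunk = fetch_content(url, start_index=start, max_length=max_length)
        if not chunk:
            break
        chunks.append(chunk)
        start += len(chunk)
        if len(chunk) < max_length:  # short read means we hit the end
            break
    return "".join(chunks)

body = fetch_all("https://example.com/long-article")
```

In practice the agent rarely needs the whole page; one or two windows usually cover the section it’s quoting from, and stopping early saves both tokens and fetch-rate budget.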
The natural pattern in agent traces is: search once for the query, pick the most relevant URL, fetch_content on that URL, then write the answer. Two tool calls, deterministic shape, no external schema to babysit.
Two anti-patterns we’ve seen worth flagging. First: agents that call search with a question rather than a search query — “what is the latest version of Pydantic v2?” instead of “pydantic v2 latest release”. DuckDuckGo’s ranking is keyword-driven; conversational phrasing returns weaker results. Teach the agent to shape queries like a human searching, not a human asking. Second: agents that fetch every URL from the results list before picking one. That’s wasteful in both rate-limit budget and token cost. The model usually has enough signal from the search snippets to pick the best URL on the first try — trust the snippet ranking and fetch once.
Bang-operator tutorial (the underrated bit)
DuckDuckGo’s killer feature isn’t the privacy line — it’s the bang-operator system. A bang is a prefix that redirects the search to a specific destination instead of the generic web index. !w einstein takes you to Wikipedia’s Einstein page; !gh react takes you to GitHub’s react results; !npm zod jumps to the Zod page on npm. DuckDuckGo maintains thousands of these (the directory is at duckduckgo.com/bangs) and they pass through the MCP search tool transparently.
Why does this matter for an agent? Because the agent doesn’t always need a list of ten results — it needs the canonical reference. !w pydantic returns the Wikipedia summary card first; that’s often the entire answer. !so react server actions drops the agent into Stack Overflow results filtered to the topic. Teaching the agent to use bangs where they apply cuts the result-noise budget dramatically — fewer tokens spent on irrelevant page snippets, fewer follow-up fetch_content calls, faster end-to-end answers.
A small but important behaviour worth knowing: bangs redirect on the DuckDuckGo side, so what comes back through search is the destination site’s result-equivalent (e.g. GitHub’s top hits for !gh react), not DuckDuckGo’s generic web index. That means the result shapes differ between bangs — GitHub gives you repo names and short descriptions; Wikipedia gives you article titles and intro paragraphs; Reddit gives you thread titles. Train the agent’s prompt to expect different shapes per bang, or just let the model figure it out — modern models handle this well.
A short list of bangs that earn their keep inside agent workflows:
| Bang | Goes to | When to teach the agent |
|---|---|---|
| `!w` | Wikipedia | Quick canonical summary of a concept, person, or event |
| `!gh` | GitHub search | Find a repo by name or topic |
| `!so` | Stack Overflow | Debug or pattern lookup |
| `!npm` | npm | Find a package, check version |
| `!mdn` | MDN Web Docs | Canonical web-API or CSS reference |
| `!pypi` | PyPI | Python package lookup |
| `!hn` | Hacker News | Recent industry discussion / launch threads |
| `!yt` | YouTube | Tutorials, conference talks — pair with fetch_content on the transcript page |
| `!arxiv` | arXiv | Find a paper by title or topic |
| `!reddit` | Reddit | Real-user opinions on tools and libraries |
Beyond bangs, DuckDuckGo respects the standard field-modifier syntax most search engines share:
- `site:docs.python.org asyncio` — restrict to one domain
- `filetype:pdf neural network` — only PDF results
- `"exact phrase"` — quoted phrase match
- `cats -dogs` — exclude a term
- `react OR vue` — boolean OR
Encode these into the agent’s system prompt or a tool-use skill. The cleaner the query, the fewer rounds of search + fetch_content the agent needs to land on the right answer — which matters both for rate limits and for response latency.
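If you generate queries programmatically rather than leaving it to the prompt, the field modifiers compose mechanically. A small sketch, assuming a hypothetical helper (nothing in the MCP server builds queries for you):

```python
# Query-builder sketch: encode the field-modifier syntax above into a
# single query string. Hypothetical helper, not a server API.

def build_query(terms, site=None, filetype=None, exclude=()):
    """Join keyword terms with optional site:, filetype:, and -exclusions."""
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")          # restrict to one domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. only PDF results
    parts.extend(f"-{term}" for term in exclude)  # excluded terms
    return " ".join(parts)

q = build_query(["asyncio", "timeout"], site="docs.python.org")
```

Keyword-shaped output like this also sidesteps the conversational-phrasing anti-pattern from the previous section: the builder can only emit terms, never questions.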
Recipes
Six workflows where the DuckDuckGo MCP server earns its keep. We assume the server is registered in your client of choice at user scope.
Recipe 1 — Verify a fact mid-coding
You’re writing code that calls an API and you’re not sure if a parameter is still supported. Prompt: “Search DuckDuckGo for the current OpenAI Responses API parameters, then fetch the docs page and tell me whether response_format still works.” The agent calls search with site:platform.openai.com responses api response_format, picks the canonical doc URL, calls fetch_content on it, and answers from the current page rather than its 2024 memory. The key phrase to teach the agent is “cite the URL you fetched.” That single nudge eliminates the pattern where the model reads the docs page and then, somehow, still hallucinates a parameter name two sentences later. Once the agent has to point at a URL, it tends to actually quote from the URL.
Recipe 2 — Pull a quote from a blog post
You remember Anthropic blogged about something specific but you can’t find the post. Prompt: “Find the Anthropic blog post on contextual retrieval and quote the bit where they describe the hybrid search results.” The agent issues !hn anthropic contextual retrieval or site:anthropic.com contextual retrieval, fetches the post, and quotes the section. Faster than you scrolling through Anthropic’s archive.
Recipe 3 — Find a GitHub repo by description
You half-remember a tool: “there was that Rust tree-sitter wrapper that the Astral folks used in uv, what was it called?” Prompt: “Use !gh tree-sitter rust wrapper and find the repo that’s referenced by uv’s build code.” The agent narrows it down faster than you can browse GitHub. Combine with a follow-up fetch_content on the README for a summary.
Recipe 4 — Compare two libraries by current opinion
You’re picking between Tanstack Query and SWR and you don’t want a recycled 2023 comparison. Prompt: “Search DuckDuckGo for recent (2026) Reddit and Hacker News threads comparing Tanstack Query and SWR, fetch the top three, and summarise the actual trade-offs people raise.” The agent uses !reddit and !hn bangs, pulls the most upvoted comments via fetch_content, and produces a real summary instead of a generic one. We use this weekly when evaluating tools.
Recipe 5 — Pricing or status check
You’re scoping a project budget and you need current Anthropic pricing. Prompt: “Fetch the latest Anthropic pricing page and tell me the input-token rate for Claude Sonnet 4.5.” The agent searches for the pricing URL, fetches it, returns the number. Same pattern for service-status pages, release notes, and any URL where the answer changes faster than the model’s training data. Worth noting: pricing pages frequently render client-side via JavaScript, in which case fetch_content returns a shell of an HTML page with no actual prices. The fallback is to fetch a related-but-static doc — “Anthropic API rate-card” usually has a markdown rendition on a blog post — or to bump to a browser-backed search MCP for the page you care about. We’ll cover that escape hatch in the “when to switch” section.
Recipe 6 — Skill that always uses DuckDuckGo for verification
Bake DuckDuckGo into a Claude Code skill for tasks that should always be grounded against the live web:
```markdown
---
name: live-web-verify
description: |
  Before answering any question about pricing, current versions,
  or recent events, search DuckDuckGo for the canonical source
  and fetch the relevant page. Cite the URL in the response.
---
1. Call search with a query that includes site: when possible.
2. Pick the most authoritative result.
3. Call fetch_content on that URL.
4. Quote or paraphrase from the fetched page, citing the URL.
5. If the search returns nothing relevant, say so. Do not guess.
```

Auto-activates on any prompt about prices, dates, or current versions. We’ve watched it cut the model’s “confidently wrong” rate on time-sensitive questions to near zero.
Rate limits + reliability
The server enforces two client-side rate limits:
- 30 search calls per minute. Past that, the next call waits in an internal queue rather than failing. The model doesn’t see an error; it sees a slower response.
- 20 content fetches per minute. Same queue behaviour.
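The queue behaviour is easy to reason about as a sliding window. A sketch of an equivalent client-side limiter, with an injectable clock so the logic can be checked without real sleeps (this mirrors the 30/minute cap; it is not the server’s actual code):

```python
from collections import deque

# Sliding-window limiter sketch mirroring the 30-searches-per-minute cap.
# Illustrative only -- the real server queues requests internally.
class WindowLimiter:
    def __init__(self, limit: int, window_s: float, clock):
        self.limit, self.window_s, self.clock = limit, window_s, clock
        self.calls = deque()  # timestamps of calls inside the window

    def wait_time(self) -> float:
        """Seconds to wait before the next call is allowed (0.0 if free)."""
        now = self.clock()
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()  # drop calls that aged out of the window
        if len(self.calls) < self.limit:
            return 0.0
        # wait until the oldest in-window call ages out
        return self.window_s - (now - self.calls[0])

    def record(self):
        self.calls.append(self.clock())

t = [0.0]  # fake clock we advance by hand
limiter = WindowLimiter(limit=30, window_s=60.0, clock=lambda: t[0])
for _ in range(30):
    limiter.record()
wait = limiter.wait_time()  # the 31st call must wait out the window
```

The important property, matching the server’s behaviour: a call over the ceiling waits rather than fails, so the agent sees latency, never an error.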
These are intentional. DuckDuckGo’s upstream endpoint will throttle a single IP that hammers it with a thousand searches a minute, and the upstream response is “empty result set” rather than a clean 429 — which an agent can’t recover from gracefully. The client-side cap keeps your IP under the upstream’s radar.
A few signals to watch for when reliability slips:
- Empty result sets where you used to get results: almost always upstream throttling. Wait a few minutes, or move your agent to a different IP. Production deployments behind a rotating proxy don’t hit this.
- Slow first-call latency: the cold start on
uvxdownloads the package on first run. Pre-install withuv pip install duckduckgo-mcp-serverto remove the cold-start. fetch_contentreturning empty pages: the target site is bot-detecting the defaulthttpxUser-Agent. Passbackend: "curl"to the tool call, or install the package with the[browser]extra (uv pip install "duckduckgo-mcp-server[browser]") for the curl fallback baked in.
For shared backends where multiple agents hit the same server, run it under --transport streamable-http and put a proxy in front. The 30/20 caps still apply per process, so scale horizontally if you need higher throughput. For most single-developer use, the limits never bite.
One last reliability note worth internalising: the DuckDuckGo MCP server has no built-in retry on transient upstream failures. If a search returns an empty result and the agent treats that as “no results found,” you can end up with the model confidently telling the user a topic doesn’t exist when in reality the upstream just hiccupped. The fix is at the agent layer: instruct it to retry once with a slightly reworded query before concluding that no results exist. The retry costs almost nothing in token budget and saves the false-negative-confidence failure mode that’s hardest to debug from a chat transcript.
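That agent-layer fix can live in prompt instructions, but if you orchestrate tool calls in code, it is a five-line wrapper. A sketch with a stubbed search function standing in for the real tool call:

```python
# Retry-once sketch: if search returns empty, retry with a reworded
# query before concluding no results exist. The search argument stands
# in for the real MCP tool call.

def search_with_retry(search, query: str, reword) -> list:
    results = search(query)
    if results:
        return results
    return search(reword(query))  # exactly one retry, slightly reworded

calls = []
def flaky_search(q):
    """Stub: simulate an upstream hiccup on the first call only."""
    calls.append(q)
    return [] if len(calls) == 1 else [{"title": "hit", "url": "https://example.com"}]

hits = search_with_retry(flaky_search, "pydantic v2 latest release",
                         reword=lambda q: q + " changelog")
```

One retry is the right budget: it catches the transient-empty case without compounding a genuine no-results answer into a rate-limit problem.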
Troubleshooting
Server doesn’t appear in the client’s tool list
Two common causes. First: the client didn’t restart fully after the config edit — quit it completely (not just close the window) and reopen. Second: uvx isn’t on the client’s PATH, so the subprocess fails silently. Check the client’s MCP log (Claude Code: claude mcp list; Cursor: the MCP panel in settings) for a startup error. Install uv globally via curl -LsSf https://astral.sh/uv/install.sh | sh if needed.
Empty search results
Upstream DuckDuckGo is rate-limiting your IP, or the HTML markup changed and the parser needs an update. Wait a minute, retry; if it persists, upgrade the package (uv pip install --upgrade duckduckgo-mcp-server) and check the repo issues for a parser fix.
fetch_content returns garbled or empty bodies
The target site is rejecting the default httpx client (Cloudflare and several news outlets). Pass backend: "curl" to the tool call, or install the [browser] extra so curl is available. Some sites block everything that isn’t a full browser — at that point you need a different fetch strategy (Playwright-backed, etc.).
Latency spikes on a busy agent
You’re hitting the client-side rate-limit queue (30/min search, 20/min fetch). The server is waiting it out instead of failing. Either space your tool calls in the agent prompt or scale horizontally with multiple server processes behind a load balancer if you control the deployment.
Python version mismatch
The package requires Python 3.10+. uvx picks a compatible interpreter automatically, but if you’re using a system pip on an older Python you’ll see install failures. Easiest fix: install uv and use the uvx invocation pattern instead.
When to switch to something else
DuckDuckGo MCP is the free-tier default — no key, no signup, results in seconds. It’s not the right choice for every web-search job.
- You need higher-quality results or freshness: Tavily, Brave Search, or Exa ship MCP servers backed by paid search APIs with better ranking on recent technical content. The trade-off is the API key and the per-query cost.
- You need structured search with filters: Brave’s MCP server lets the agent filter by date, country, and result type with more granularity than DuckDuckGo’s query-string syntax exposes.
- You’re building a serious research agent: Exa’s neural-search semantics return better hits for “find me papers about X” than keyword-matched DuckDuckGo. Cost scales with usage, but the result quality is noticeably better.
- You need to scrape pages a browser would see: Firecrawl or Playwright-backed MCP servers handle JS-rendered pages where `fetch_content` just gets an empty shell.
We compared the field — DuckDuckGo against Tavily, Brave, Exa, Firecrawl, and Linkup — in our Best Web Search MCP Servers (2026) deep dive. Read that if you’re still picking, or if your agent has outgrown the keyless tier and you need to step up to a paid backend.
A useful mental model for the switch decision: stay on DuckDuckGo MCP until you find a query class where it consistently disappoints. For most developer workflows — documentation lookups, repo discovery, quote retrieval, current-version checks — it’s more than enough, and the keyless tier removes the friction of provisioning an API key for every new experiment. The moment you have a use case that depends on a specific result-quality bar (legal research, scientific literature, regulated-industry fact-checking), trade up. Don’t over-engineer early.
FAQ
What is the DuckDuckGo MCP server?
It's a small Model Context Protocol server that wraps DuckDuckGo's public HTML search endpoint so an AI agent (Claude Code, Cursor, OpenCode, Claude Desktop, etc.) can run live web searches and fetch page content. The canonical implementation is `nickclyde/duckduckgo-mcp-server` — Python, MIT-licensed, distributed on PyPI. It exposes two tools (`search` and `fetch_content`) and needs no API key.
Is there an official DuckDuckGo MCP server?
DuckDuckGo Inc. has not shipped a first-party MCP server. The community implementation that most install instructions point at is `nickclyde/duckduckgo-mcp-server` on GitHub. The `modelcontextprotocol/server-duckduckgo` query you may have seen in autocompletes returns no canonical repo under that exact path — the community settled on the nickclyde build. Treat that as the de-facto reference until DuckDuckGo themselves ship one.
Does the DuckDuckGo MCP server need an API key?
No. DuckDuckGo's HTML search endpoint is keyless, and the MCP server inherits that. You install the package, point your client at it, and it works. The only ceilings you hit are the rate limits baked into the server (30 search requests per minute, 20 content fetches per minute) and DuckDuckGo's own bot-detection if you spam queries from one IP.
How do I install the DuckDuckGo MCP server in Claude Code?
Install the Python package once with `uv pip install duckduckgo-mcp-server` (or pip), then register it with `claude mcp add duckduckgo -- uvx duckduckgo-mcp-server`. The install panel on this page emits the exact command for your client — Claude Code, Cursor, Claude Desktop, OpenCode, VS Code — pulling configs from our catalog so they stay current as the package evolves.
What tools does the DuckDuckGo MCP server expose?
Two. `search` takes a query string and an optional `max_results` (default 10) plus an optional `region` override; it returns DuckDuckGo result snippets as Markdown. `fetch_content` takes a URL plus `start_index`, `max_length`, and an optional `backend` (`httpx`, `curl`, or `auto`) and returns the parsed page body so the agent can read the article you just searched up. The two compose naturally: search, pick a link, fetch.
What are the DuckDuckGo MCP server rate limits?
The nickclyde implementation rate-limits client-side: 30 search calls per minute and 20 content fetches per minute, with automatic queue management so requests over the ceiling wait rather than fail. DuckDuckGo's upstream endpoint may also throttle aggressive traffic from a single IP independently — if you see empty result sets where you used to get results, that's the upstream defending itself, not the MCP layer.
Can I use DuckDuckGo bang operators (!w, !gh) from inside an agent?
Yes — bangs are just part of the query string. If you pass `!w einstein` to the `search` tool, DuckDuckGo treats it as a Wikipedia redirect; you get the Wikipedia result first. The same applies to `!gh react`, `site:github.com mcp servers`, and so on. The bang-operator section of this guide covers the operators worth teaching the agent explicitly.
Is the DuckDuckGo MCP server free?
Yes, completely. The server is MIT-licensed, the PyPI package is free, the DuckDuckGo endpoint requires no key, and there is no paid tier on the MCP layer. The only cost is whatever your AI client charges for tokens to summarise the results — and even there, the Markdown output is short and cheap to feed back through the model.
Does the DuckDuckGo MCP server work with OpenCode?
Yes. OpenCode reads MCP servers from its config (commonly `~/.config/opencode/config.json`). Add an `mcp.duckduckgo` block of `type: "local"` with command `uvx duckduckgo-mcp-server` — the snippet is in the install section of this guide. Restart OpenCode and the two tools show up in the model's tool list.
Where is the DuckDuckGo MCP server's official documentation?
The canonical README lives at `github.com/nickclyde/duckduckgo-mcp-server`. The PyPI page at `pypi.org/project/duckduckgo-mcp-server/` carries the install commands and version history. This guide consolidates both plus install snippets for every common client, the bang-operator reference, recipes, and the rate-limit gotchas worth knowing before you ship an agent that relies on web search.
Sources
- Canonical repository & README: github.com/nickclyde/duckduckgo-mcp-server (MIT, maintained by Nick Clyde)
- PyPI package page: pypi.org/project/duckduckgo-mcp-server
- DuckDuckGo bang-operator directory: duckduckgo.com/bangs
- mcp.directory catalog entry: /servers/duckduckgo
- uv (recommended runner): docs.astral.sh/uv
Found an issue?
If something in this guide is out of date — a new install pattern, a renamed tool, a DuckDuckGo MCP feature we missed — email [email protected] or read more in our about page. We keep these guides current.