DeepWiki MCP: Ask GitHub Repos Questions from AI (2026)
DeepWiki is a free MCP server from Cognition AI that lets your coding agent ask questions about any public GitHub repository — “where is the auth flow,” “how do these two modules talk,” “what changed in the last refactor” — and get answers grounded in the repo’s code and README. This guide covers the endpoint, the three tools, the install path for every modern MCP client, the recipes we actually use, and the gotchas worth knowing before you trust the answers.

Video walkthrough — DeepWiki in practice (3 min)
Source: Jonathan Bossenger on YouTube — “DeepWiki | AI powered documentation you can talk to”
TL;DR + what you actually need
Three constants you’ll keep typing if you use DeepWiki regularly:
- Remote endpoint: https://mcp.deepwiki.com/mcp — Streamable HTTP, no auth, public repos. Use this.
- NPM package: none for the official service. DeepWiki is hosted by Cognition; there is no stdio package to install locally. (A community wrapper called regenrek/deepwiki-mcp exists, but its own README says it's no longer working.)
- Cost: free. No login, no API key, no rate-limit chasing for normal interactive use.
The fast install paths, in case you came for the one-liner:
- Claude Code:
claude mcp add -s user -t http deepwiki https://mcp.deepwiki.com/mcp - Cursor / Windsurf: add a
mcpServers.deepwiki.serverUrlentry pointing at the URL above — snippet below - OpenCode / Codex CLI: remote MCP entries — snippets below
The rest of this guide explains how DeepWiki indexes repos, walks through the three tools, lists six recipes where DeepWiki has paid us back the install effort tenfold, and ends with limitations you should know before you trust an answer.
What DeepWiki actually does
DeepWiki is Cognition’s public docs index for GitHub. Replace github.com in any public repo URL with deepwiki.com and you land on an auto-generated wiki: architecture diagrams, module summaries, prose explanations of how the pieces fit together. Cognition launched it on May 5, 2025 as “the free public version of Devin Wiki and Devin Search” — the same indexing technology Devin uses to onboard itself to a new codebase, exposed as a free site for everyone. The launch announcement noted they had already indexed “over 50,000 of the top public GitHub repos”; the index has grown since.
On May 22, 2025, Cognition shipped the MCP server. The pitch is brutally direct: instead of pointing a teammate at a wiki page and hoping they read the right section, you let an agent query the wiki on demand — pulling structure, contents, or natural-language answers straight into the conversation where the developer is actually working. The endpoint runs at mcp.deepwiki.com/mcp. Verbatim from the Devin docs: “a free, remote, no-authentication-required service” for public repositories.
That last word matters. DeepWiki’s public MCP indexes public GitHub repos. If you want the same indexing model over a private repo, Cognition offers Devin (the paid product) and a separate Devin MCP server that authenticates against your account. Most of this guide assumes the public path because that’s the version most readers will use first; we note where the private path diverges.
The clearest mental model: Context7 gives your agent up-to-date library docs (React 19 API, Pydantic v2 reference); DeepWiki gives your agent the ability to ask any open-source project on GitHub a question directly. They’re sibling tools, not substitutes — most agent setups we run have both registered.
The other useful comparison is to the official GitHub MCP server. GitHub’s MCP is the operational interface — read files at a commit, open PRs, file issues, query the GraphQL API. DeepWiki is the comprehension interface — understand what the repo does at a high level. The two compose: DeepWiki tells the agent where the auth flow lives and how it works conceptually, the GitHub MCP lets it actually read the exact file at the exact commit and (eventually) write a PR. We’ll come back to this in the “when to switch” section, but the short version is: register both, they’re complementary.
How GitHub-repo-as-RAG works
Most docs-RAG tools point an agent at a vector store full of chunked documentation snippets and call it a day. DeepWiki’s model is a step more structured. It first builds a wiki per repo — a hierarchical, prose-summarised view of the codebase with architecture, key modules, public APIs, and cross-references — then exposes that wiki through three tools the agent can call in sequence.
The wiki itself is the part that took Cognition real work to build. On a fresh public repo, the pipeline walks the directory tree, identifies entry points and top-level packages, summarises each module’s purpose from the source plus the README, and stitches those summaries into a navigable structure. You can see the result yourself by visiting deepwiki.com/<owner>/<repo> for any indexed project — that web view is the same content the MCP tools query under the hood.
Three things follow from that design that matter when you’re using DeepWiki via an agent:
- Cold-start latency. If you ask about a repo DeepWiki hasn’t indexed yet — usually a brand-new or very small project — the first query triggers indexing in the background. The MCP call may return a sparse answer that’s noticeably better five minutes later once the wiki finishes building. The big-name repos are pre-indexed and hot.
- Structure-first beats blind asking. The wiki is hierarchical, and the cheapest answers come from calling read_wiki_structure first to pick the right section, then asking a focused question. Agents that jump straight to ask_question on a complex monorepo burn more tokens and get fuzzier answers.
- It's read-only and async to the source. DeepWiki indexes the GitHub mirror of a public repo, but it doesn't hold your working tree. Code you've written locally and haven't pushed is invisible to it. Same for branches that aren't merged: DeepWiki sees what's indexed, which for most repos means the default branch and may lag the very latest commits by minutes or hours.
The trade-off DeepWiki makes is interesting in its own right. A pure RAG-over-source approach would let the agent ask any question down to the line level, but it would also give the agent thousands of tiny snippets that don’t connect. DeepWiki’s wiki-first design surfaces the high-level shape of the codebase first, then drills down — closer to how a human reads a new repo, starting with README and directory layout before diving into a specific file. That structure is what makes it good at orientation questions and slightly worse at micro-questions about a single line of code.
Worth saying out loud: the wiki is generated by an LLM. Cognition runs the index at scale, but the summaries themselves can be wrong on the edges (more on this in Limitations). The right way to use DeepWiki is as a fast navigation layer that points your agent at the part of the code that matters, with the source file as the final source of truth.
One detail that surprises people: the question-answering isn’t just retrieval from the pre-built wiki. When the agent calls ask_question, DeepWiki appears to combine the wiki summaries with live retrieval against the indexed source — so answers cite file paths even for things the static wiki wouldn’t have summarised on its own. That’s why ask_question is the workhorse tool and why the structure and contents tools mostly exist to give the agent context for asking better questions. If you watch an agent’s trace, the pattern that performs best is “one structure call to orient, one ask call to answer” — not three or four small queries.
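A minimal sketch of that best-performing pattern, using the tool shapes from the walkthrough below (the repo and question are illustrative):

```js
// 1. Orient: one cheap structure call to find the right section
read_wiki_structure({ repoName: "modelcontextprotocol/python-sdk" })

// 2. Answer: one focused question, grounded in the indexed source
ask_question({
  repoName: "modelcontextprotocol/python-sdk",
  question: "How does the Streamable HTTP transport manage session state?"
})
```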
Install (every client)
DeepWiki only ships as a remote MCP server, so there is a single install pattern: point your client at the Streamable HTTP endpoint. Every modern client speaks that transport directly; older builds, or strict corporate proxies that block it, simply can't connect, and there is no local fallback (see Troubleshooting). The official Devin docs give the canonical snippets; what follows mirrors them, with the catalog's install panel for copy-paste in any client.
One-line install · DeepWiki
Add DeepWiki to Claude Code
The Claude Code one-liner is the most-copied DeepWiki command on the open web. Verbatim from Cognition’s own docs:
```bash
claude mcp add -s user -t http deepwiki https://mcp.deepwiki.com/mcp
```
Breakdown: `-s user` writes the registration to your user-scope MCP config so DeepWiki is available in every project you open. Swap to `-s project` if you only want this repo to see DeepWiki — handy when you're using a config that'll get committed to `.mcp.json` and you don't want Cognition's service called from unrelated work. The `-t http` flag selects the Streamable HTTP transport, which is the only one the public DeepWiki endpoint speaks. Run `claude mcp list` afterwards to confirm registration.
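If you only want the current repo to see DeepWiki (say, a committed `.mcp.json`), the project-scope variant is the same command with the scope flag swapped:

```bash
claude mcp add -s project -t http deepwiki https://mcp.deepwiki.com/mcp
```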
OpenCode and Codex CLI
OpenCode (the open-source TUI agent) reads MCP servers from its user config file — commonly ~/.config/opencode/config.json. Run opencode config print to confirm the path on your install. The remote shape:
```json
{
  "mcp": {
    "deepwiki": {
      "type": "remote",
      "url": "https://mcp.deepwiki.com/mcp"
    }
  }
}
```
Restart OpenCode and DeepWiki shows up in the tool catalogue. If you're still weighing OpenCode against Claude Code, Aider, Cline, or Goose, our CLI agent comparison walks through the trade-offs.
Codex CLI uses a TOML config at ~/.codex/config.toml. The current Codex MCP transport supports remote Streamable HTTP via a URL entry — the install panel above emits the exact block for your installed Codex version. The pattern is a [mcp_servers.deepwiki] block with a url field pointing at https://mcp.deepwiki.com/mcp.
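A minimal sketch of that TOML shape, per the pattern above; key names have shifted across Codex releases, so treat it as a starting point and defer to the install panel's output for your version:

```toml
# ~/.codex/config.toml (shape may vary by Codex version)
[mcp_servers.deepwiki]
url = "https://mcp.deepwiki.com/mcp"
```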
For Cursor and Windsurf, the JSON shape from the Devin docs (paste into the MCP settings):
```json
{
  "mcpServers": {
    "deepwiki": {
      "serverUrl": "https://mcp.deepwiki.com/mcp"
    }
  }
}
```
Once registered, both Cursor and Claude Code will list the three DeepWiki tools in their respective MCP tool panels. Tool registration is the most common failure mode — see Troubleshooting below if DeepWiki appears installed but the agent never calls it.
Browse every client and its config path at mcp.directory/clients.
The three tools, walked through
1. read_wiki_structure
Returns the section-level outline of a repo’s auto-generated wiki. The agent passes a GitHub repo in the form owner/repo; the tool returns the list of wiki sections (Overview, Architecture, Modules, APIs, etc.) so the agent can pick where to dig in next. Cheap call, low token cost, almost always the right first step on a repo the agent doesn’t already know.
```js
read_wiki_structure({
  repoName: "modelcontextprotocol/python-sdk"
})
```
2. read_wiki_contents
Returns the actual documentation body — the prose content of a section identified by the structure call, or the wiki landing page for an overview. Use this when the agent needs the long-form description of a module before writing code that touches it.
```js
read_wiki_contents({
  repoName: "modelcontextprotocol/python-sdk"
})
```
The response is markdown-shaped prose describing the repo's layout and key abstractions. On big repos this can be lengthy — pair it with a specific topic in the agent's next move (e.g. ask the agent to summarise just the transport layer once it has the full wiki).
3. ask_question
The headline tool. Takes a natural-language question and returns a grounded answer pulled from the repo’s indexed code, README, and wiki. Ideal shape: question references a concrete artefact in the repo (a class, a function, a config file), not just vague intent.
```js
ask_question({
  repoName: "modelcontextprotocol/python-sdk",
  question: "Where is the StreamableHTTP transport implemented and what class names should I look at?"
})
```
The response cites file paths and quotes the relevant code. We've had the best results when the question is specific enough that there's obviously a right answer in the source — vague questions get vague replies. If a question spans multiple repos, split into multiple calls; DeepWiki indexes one repo at a time.
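For instance, a question that spans an SDK and its spec becomes two independent calls, one per repo (repo names and question text are illustrative):

```js
// Each call is grounded in a single repo's index
ask_question({
  repoName: "modelcontextprotocol/python-sdk",
  question: "How does this SDK implement the Streamable HTTP transport?"
})
ask_question({
  repoName: "modelcontextprotocol/modelcontextprotocol",
  question: "What does the MCP spec require of a Streamable HTTP transport?"
})
```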
Recipes
Six workflows where DeepWiki earns the registration. We assume DeepWiki is installed at user scope in your client of choice — Claude Code, Cursor, Windsurf, or OpenCode all work the same way once registered.
Recipe 1 — Onboard to a new repo before contributing
You picked up an OSS issue on a project you’ve never touched. Maybe you’re doing Hacktoberfest, maybe a colleague tagged you on a downstream bug, maybe you just want to fix something annoying in a dependency. Before you start grepping blindly, prompt the agent:
```
Use DeepWiki to read the structure of langchain-ai/langchain,
then ask: "What's the high-level architecture? Where does the
core abstractions code live and how do agents plug in?"
```
Two minutes, a clean architectural overview, the file paths you need to read first. We use this on every new OSS contribution — it cuts the "where do I even start" phase to nearly zero. The same prompt works for any unfamiliar service or library you're evaluating before adopting: instead of reading three blog posts and skimming the README, ask DeepWiki for the architecture and key abstractions in two calls.
Recipe 2 — Debug an issue by querying the repo’s own docs
A library is throwing an obscure error you can’t find on Google. Instead of reading source for an hour:
```
Use DeepWiki on pydantic/pydantic. Ask: "What does the error
'ValidationError: 1 validation error for X — Input should be
a valid dictionary or instance of X' usually indicate, and
which validator code path raises it?" Cite the file paths.
```
The agent calls ask_question directly, gets a code-cited answer pointing at the validation module, and you're reading the right file inside a minute. Beats grep-by-error-string for any repo with non-trivial structure.
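Under the hood, that prompt collapses into a single ask_question call, roughly this shape (the question text is just the prompt above, condensed):

```js
ask_question({
  repoName: "pydantic/pydantic",
  question: "What does 'ValidationError: Input should be a valid dictionary or instance of X' usually indicate, and which validator code path raises it? Cite file paths."
})
```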
Recipe 3 — Verify a claim before trusting it
Another model just told you that some framework supports a feature you’re not sure exists. Rather than trust it blindly:
```
Use DeepWiki on tiangolo/fastapi. Ask: "Does FastAPI
natively support WebSocket dependencies in the same way it
supports HTTP route dependencies? Cite the source files."
```
If the answer cites concrete files, you have a grounded reference. If it dodges, that's usually a signal the feature is partial or non-existent — file-citation pressure is a decent hallucination check.
Recipe 4 — Plan a PR before opening the IDE
You want to add a feature to an OSS repo. Before touching files:
```
Use DeepWiki on <repo>. Read structure, then ask: "If I
wanted to add support for X, which modules would I need to
change and what's the convention for adding tests?" Then
draft a plan with bullet points and file paths.
```
You end up with a 5-bullet plan referencing the right files. Loop the plan back through Claude Code or Cursor for the actual edits.
Recipe 5 — Extract a single feature from a big repo
Reportedly the workflow Andrej Karpathy used: point DeepWiki at a large research codebase, ask it to identify the single module that implements the feature you want, then have the agent reimplement it as a standalone file in your project. The wiki-structure-first approach is what makes this tractable on repos too big to read end-to-end. The prompt shape that works: “Use DeepWiki on <big-repo>. Identify the module that implements <feature>. List its public functions, its dependencies, and any internal helpers it pulls from elsewhere in the repo. Then propose a single-file standalone implementation I could drop into a new project.” You get a focused answer because DeepWiki has the whole repo structure available, not just the file the agent happened to open.
Recipe 6 — Skill / agent prompt that always uses DeepWiki
For agents that mostly work in unfamiliar OSS repos, bake DeepWiki into a Claude Code skill:
```
---
name: oss-onboarder
description: |
  When the user asks about an unfamiliar GitHub repo, always
  call DeepWiki: first read_wiki_structure, then ask_question
  with a focused architectural question. Cite file paths in
  every answer.
---
Step 1: call read_wiki_structure({ repoName: <repo> }).
Step 2: pick the most relevant section.
Step 3: call ask_question with the user's specific question
and cite the source files in your answer.
```
The skill auto-activates on repo-exploration prompts; the DeepWiki calls happen without you typing the tool name; the agent stays honest about citations. Pair this with Context7 for the underlying library docs and you have a near-complete "learn-this-codebase" stack.
Limitations
DeepWiki is genuinely useful, but it's an auto-generated wiki built by an LLM over public source — a few things follow from that.
- It can confidently hallucinate on niche or complex codebases. On the launch HN thread, user buovjaga reported DeepWiki listing Buck as LibreOffice's primary build system, which is wrong; jcranmer documented similar inaccuracies on the LLVM page. Build systems, low-level history, and obscure subsystems are the categories where we've seen the most drift. Use DeepWiki to navigate; verify the answer against the actual source for anything load-bearing.
- Coverage is good but not universal. Cognition seeded the index with the top 50,000 public repos and continues adding more, but new or small projects may not be indexed yet. The first query against an un-indexed repo triggers indexing in the background — your initial answer may be sparse and the same question minutes later will be better.
- The wiki lags the source. DeepWiki re-indexes on a schedule rather than instantly, so a repo’s very latest commits may not be reflected in the wiki yet. For most public OSS this is a non-issue; for a repo in rapid daily flux it’s worth knowing.
- Public repos only. The free endpoint indexes public GitHub. Private-repo coverage requires Devin and the separate Devin MCP.
- No write access. DeepWiki is read-only Q&A. To actually open PRs or file issues, register the official GitHub MCP server in parallel.
Pricing & free tier
There’s effectively no pricing page for DeepWiki MCP because the service is free. The Devin docs page describes it as “a free, remote, no-authentication-required service” — no signup, no credit card, no API key, no published rate limits for normal interactive use. Cognition’s revenue comes from Devin (the paid autonomous agent product); DeepWiki MCP is the developer-relations wrapper around the same indexing technology, given away to everyone.
Two scenarios where you do pay Cognition something:
- Private-repo indexing. Public endpoint covers public repos only. For private repos, you sign up for Devin, connect the repo to your Devin workspace, and use the separate Devin MCP server which authenticates against your account. Devin is a paid product on a usage-based plan; see cognition.ai for current pricing.
- Heavy automated workloads. Cognition hasn’t published a hard rate limit for the public endpoint, but it’s a free shared service — if you’re hammering it from a long-running automation, expect throttling and consider whether private indexing or a self-hosted alternative is the right answer.
For everyday agent-in-the-IDE use, none of this matters: register the endpoint, never think about pricing again.
Troubleshooting
Agent doesn’t call DeepWiki even when you mention the repo
Registration issue 9 times out of 10. Run claude mcp list (or open the Cursor/Windsurf MCP settings panel) and confirm DeepWiki shows up with a healthy status. If it’s registered but inert, restart the client to flush the tool catalogue. The agent will not auto-call DeepWiki without seeing it in the tool list, so the cause is usually transport (HTTP flag missing) or scope (registered to a project you’re not in).
“Repo not found” or empty wiki structure
Either the repo isn’t indexed yet (new or very small projects), or it’s private and invisible to the public endpoint. Visit deepwiki.com/<owner>/<repo> in your browser — if the page exists, the MCP can see it; if it doesn’t, you’ll need to wait for indexing or use the Devin MCP path for private repos.
Connection error / Streamable HTTP transport rejected
Some older client builds don’t speak Streamable HTTP cleanly. Symptoms: connection timeouts, empty tool lists, or 4xx errors in the client’s MCP log. Upgrade the client to the current version — every modern release (Claude Code, Cursor, Windsurf, OpenCode, Codex CLI) supports it. There’s no stdio fallback for the official DeepWiki endpoint, so the upgrade is the only fix.
Corporate proxy / firewall blocking mcp.deepwiki.com
If your work network blocks outbound HTTPS to third-party domains, DeepWiki won’t reach you. Talk to your IT team about allow-listing the domain, or use a personal machine for OSS exploration. There’s no on-prem version of the public DeepWiki MCP.
Answer cites a file that doesn’t exist in the repo
Classic hallucination signal. Cross-check against the actual repo on GitHub. The wiki summarises code with an LLM, so the occasional false-positive citation slips through. Treat file paths from ask_question as "probable, go verify" rather than "guaranteed correct" — especially on monorepos with many similar-looking paths.
When to switch to something else
DeepWiki is excellent for one specific job: asking questions about an unfamiliar public GitHub repo. It’s not the right tool for every retrieval workload.
- You need up-to-date library API docs (React 19, Pydantic v2, Next.js 15 App Router): use Context7 instead — its complete guide walks through the two-tool flow. Context7 indexes version-specific framework docs; DeepWiki indexes whole repos.
- You need to actually open PRs, file issues, or read files at a specific commit: register GitHub’s official MCP server alongside DeepWiki. DeepWiki teaches the agent about the repo; the GitHub MCP gives it write access.
- You need offline / IDE-tight docs without an external service: Ref is the editor-side alternative — useful when corporate policy or air-gapped work rules out a hosted endpoint.
- You want private-repo coverage of your own company’s code: sign up for Devin and use the separate Devin MCP server. The public DeepWiki endpoint will never index your private code, by design.
For a side-by-side comparison of DeepWiki against Context7, Ref, and Docfork (the four big docs/repo retrieval MCPs in active use), read our Context7 vs DeepWiki vs Ref vs Docfork deep dive — that’s the right starting point if you’re still picking. Most teams we’ve seen end up running Context7 + DeepWiki together: library docs from one, repo orientation from the other.
A quick stack-shape heuristic if you’re wiring up an agent from scratch: register Context7 for any question that names a framework, DeepWiki for any question that names a repo, the GitHub MCP for any operation that writes (PRs, issues, commits). All three free for public-OSS work. The combination is what turns a generic coding agent into one that can meaningfully contribute to a new OSS project on day one — read the wiki, understand the architecture, find the right file, propose the diff, open the PR. Each MCP does one part of that loop well.
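If you're wiring that stack into a Cursor-style config, the shape is one mcpServers block with three entries. The DeepWiki URL below is the official one; the Context7 and GitHub values are placeholders, since their current endpoints belong in their own docs:

```json
{
  "mcpServers": {
    "deepwiki": { "serverUrl": "https://mcp.deepwiki.com/mcp" },
    "context7": { "serverUrl": "<context7 endpoint, see its setup guide>" },
    "github": { "serverUrl": "<github mcp endpoint, see GitHub's docs>" }
  }
}
```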
FAQ
What is the DeepWiki MCP server URL?
The official DeepWiki MCP endpoint is `https://mcp.deepwiki.com/mcp`. It speaks the Streamable HTTP transport — modern MCP clients (Claude Code, Cursor, Windsurf, Codex CLI, OpenCode) connect to it directly with no local install, no signup, and no API key. There is no NPM package for the official server — DeepWiki is a hosted service by Cognition AI.
Is DeepWiki MCP free?
Yes. The Cognition docs describe it verbatim as `a free, remote, no-authentication-required service`. No login, no API key, no credit card. You point your client at the endpoint and start asking questions. The trade-off is that the free tier covers public GitHub repos only — for private-repo coverage you need a Devin account and the separate Devin MCP server.
What is the difference between DeepWiki and Context7?
Context7 indexes library documentation per version (React 19 docs, Pydantic v2 docs, etc.) so the agent gets up-to-date API references. DeepWiki indexes entire GitHub repositories — code, README, architecture, issues — so the agent can answer 'where is X implemented in this repo' or 'how does the auth flow work'. Use Context7 for library docs, DeepWiki for repo exploration. They compose well: ask Context7 for the framework, ask DeepWiki for how a specific OSS project uses it.
How do I add DeepWiki MCP to Claude Code?
One line: `claude mcp add -s user -t http deepwiki https://mcp.deepwiki.com/mcp`. The `-s user` flag registers it at user scope so it works across every project; swap to `-s project` if you only want this repo to see DeepWiki. The `-t http` flag selects the Streamable HTTP transport — required for the remote endpoint. Restart Claude Code if it doesn't show up in the tool list immediately.
What tools does the DeepWiki MCP server expose?
Three: `read_wiki_structure` (lists the wiki sections for a GitHub repo), `read_wiki_contents` (returns the documentation body for a section), and `ask_question` (natural-language Q&A grounded in the repo). A typical agent flow is structure-then-ask: read the structure to find the right section, then ask a focused question. Skipping the structure step works for simple queries but burns context on long repos.
Does DeepWiki work on private GitHub repos?
Not via the free public endpoint. `mcp.deepwiki.com/mcp` indexes public repos only — the same set covered by deepwiki.com (which mirrors github.com URLs by swapping the domain). Private-repo support exists, but goes through Devin: a Devin account, the separate Devin MCP server, and the repo connected to your Devin workspace. The public MCP is intentionally read-only against the public index.
How does DeepWiki MCP compare to GitHub's own MCP server?
Different jobs. The official GitHub MCP server gives the agent operational access to your repos — open PRs, file issues, read files at specific commits, manage releases. DeepWiki is a one-way Q&A interface over an auto-generated wiki for public repos: the agent learns about a codebase but cannot write to it. Run both in parallel when you're contributing to OSS — DeepWiki for orientation, GitHub MCP for the actual edits.
Is the regenrek/deepwiki-mcp NPM package the same as the official DeepWiki MCP?
No. `regenrek/deepwiki-mcp` is a community wrapper that crawled deepwiki.com pages and converted them to Markdown for use in clients that didn't speak Streamable HTTP yet. Its README now states it's `currently not working since DeepWiki has cut off the possibility to scrape it`. Use the official endpoint `https://mcp.deepwiki.com/mcp` instead — it's what Cognition publishes and supports.
Can DeepWiki hallucinate about the repo it's indexing?
Yes — auto-generated wikis are good for orientation but not gospel. On the launch HN thread, users reported DeepWiki listing Buck as LibreOffice's primary build system (factually wrong) and similar inaccuracies on LLVM. Treat DeepWiki answers as a strong starting hypothesis, then verify against the actual source files. The `ask_question` tool is most reliable on architecture and code-location queries; less so on niche build-system or history claims.
Where is the DeepWiki MCP server official documentation?
Two pages worth bookmarking. The product launch is at `cognition.ai/blog/deepwiki` (May 5, 2025 — DeepWiki itself) and `cognition.ai/blog/deepwiki-mcp-server` (May 22, 2025 — the MCP server). The setup-focused reference page is `docs.devin.ai/work-with-devin/deepwiki-mcp` — that's where Cognition publishes the canonical endpoint URL, transport choice, and Claude Code / Cursor / Windsurf install snippets. We mirror those install paths in the install section above.
Sources
- DeepWiki launch announcement: cognition.ai/blog/deepwiki (May 5, 2025)
- DeepWiki MCP server launch: cognition.ai/blog/deepwiki-mcp-server (May 22, 2025)
- Setup reference (endpoint, transport, install snippets): docs.devin.ai/work-with-devin/deepwiki-mcp
- DeepWiki web index (for browsing wiki content manually): deepwiki.com
- Catalog page (install panel + transport details): mcp.directory/servers/deepwiki
Related guides
- Comparison: Context7 vs DeepWiki vs Ref vs Docfork (2026)
- Sibling guide: Context7 MCP Server: Complete Setup Guide (2026)
- CLI agents: Goose vs Cline vs Aider vs Claude Code vs OpenCode

Found an issue?
If something in this guide is out of date — a new install pattern, a renamed tool, a DeepWiki feature we missed — email [email protected] or see our about page. We keep these guides current.