Updated May 2026 · Comparison · 19 min read

Chrome DevTools MCP vs Playwright MCP vs Puppeteer (Tested 2026)

Three browser-automation MCP servers, three different opinions about how an LLM should drive a browser. Chrome DevTools MCP from Google’s Chrome team. Playwright MCP from Microsoft. Puppeteer MCP from Anthropic. We pulled every fact from each repo’s README, source, and package metadata — no review-aggregator slop, no fabricated benchmarks.

Editorial illustration: three luminous browser windows in a triangle — Chrome DevTools, Playwright, and Puppeteer — each rendering an accessibility tree node, connected by teal automation arrows on a midnight navy background.
On this page · 13 sections
  1. TL;DR + decision tree
  2. What browser MCP servers do
  3. Side-by-side matrix
  4. Chrome DevTools MCP — install + recipe
  5. Playwright MCP — install + recipe
  6. Puppeteer MCP — install + recipe
  7. Pricing + hosting cost
  8. Free / open-source alternatives
  9. Benchmark them yourself
  10. Common pitfalls
  11. Community signal
  12. FAQ
  13. Sources

TL;DR + decision tree

  • Need DOM interaction at scale with deterministic clicks and form fills across multiple browsers? Playwright MCP. Microsoft’s accessibility-snapshot model is the right default for agentic workflows because it’s text-only, fast, and cross-browser.
  • Need full Chromium control with profiling — Core Web Vitals traces, console messages, network inspection, performance insights? Chrome DevTools MCP. The Chrome team built it on the DevTools Protocol; nothing else exposes CDP this directly.
  • Need a minimal, battle-tested reference server that just navigates, clicks, screenshots and evaluates JS? Puppeteer MCP. Chromium-only, seven tools, in the official modelcontextprotocol/servers repo. The “hello world” of browser MCP.
  • Need headless cross-browser at scale? Playwright MCP again — same answer. The other two are Chromium-only.

We’ll cover each in detail below — feature matrix first, then per-tool install (the same canonical install card from each server’s detail page), pricing reality, and a benchmark methodology you can run in fifteen minutes on your own laptop.

What browser-automation MCP servers do

These three servers solve the same underlying problem in different ways: an LLM that wants to test a webapp, scrape a page, debug a regression, or measure performance needs a way to drive a browser without writing Playwright or Puppeteer code from scratch every time. Browser-automation MCP servers expose a fixed tool surface — navigate, click, fill, evaluate — that the model invokes through the JSON-RPC wire format MCP defines.
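
On the wire, each of those tool invocations is a standard MCP tools/call request. A minimal sketch, using Playwright MCP's browser_navigate as the example tool (the envelope shape comes from the MCP spec; the id and URL here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "browser_navigate",
    "arguments": { "url": "https://example.com" }
  }
}
```

The server replies with a result payload (an a11y snapshot, screenshot data, or an evaluated JS value) that the client feeds back into the model's context.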

The differences come down to four axes:

  1. How the agent perceives the page. Chrome DevTools MCP returns an accessibility-tree snapshot (take_snapshot) plus optional screenshots. Playwright MCP’s default browser_snapshot returns the a11y tree (the README explicitly notes “this is better than screenshot”). Puppeteer MCP returns screenshots only — the model has to reason visually.
  2. Browser scope. Chrome DevTools MCP and Puppeteer MCP are Chromium-only. Playwright MCP supports Chromium, Firefox, and WebKit through Playwright’s cross-engine layer.
  3. Profiling depth. Chrome DevTools MCP exposes performance_start_trace, performance_stop_trace, and performance_analyze_insight — full Chrome performance traces with Core Web Vitals scoring. Neither Playwright MCP nor Puppeteer MCP exposes profiling primitives directly; you have to evaluate JS to get metrics.
  4. Maintainer cadence. Microsoft ships @playwright/mcp alongside Playwright proper. Google’s Chrome team owns chrome-devtools-mcp. Puppeteer MCP lives in the broader modelcontextprotocol/servers reference repo and moves at the protocol cadence, not the Puppeteer cadence.

If you’re new to the protocol underneath, our What is MCP primer covers the JSON-RPC wire format these servers run on. The rest of this post assumes you know that already.

Side-by-side matrix

Every cell in this matrix is sourced from each tool’s repo, README, or package metadata (citations in the per-tool sections below). Live values checked 2026-05-08 against MCP.Directory’s indexed catalog at /servers.

| Dimension | Chrome DevTools MCP | Playwright MCP | Puppeteer MCP |
| --- | --- | --- | --- |
| Maintainer | Chrome DevTools team (Google) | Microsoft (Playwright team) | Anthropic (modelcontextprotocol) |
| License | Apache 2.0 | Apache 2.0 | MIT |
| npm package | chrome-devtools-mcp | @playwright/mcp | @modelcontextprotocol/server-puppeteer |
| Install command | npx chrome-devtools-mcp@latest | npx @playwright/mcp@latest | npx -y @modelcontextprotocol/server-puppeteer |
| Transport | stdio | stdio | stdio |
| Browsers | Chrome / Chromium | Chromium, Firefox, WebKit | Chromium |
| Headless mode | Yes (defaults to attached) | Yes (--headless) | Yes (default) |
| Page perception | a11y snapshot + screenshots + traces | a11y snapshot (default) + screenshots | Screenshots + JS evaluate |
| Tools (count) | 26 | 22 | 7 |
| OAuth 2.1 | n/a (local stdio) | n/a (local stdio) | n/a (local stdio) |
| Performance traces | ✅ Built-in (CWV) | ❌ Use evaluate | ❌ Use evaluate |
| GitHub stars (live) | 3,773 | 7,331 | 5,622 |
| Last commit | 2025-11-21 | 2025-11-22 | 2025-11-22 |
| Free tier | Free / OSS | Free / OSS | Free / OSS |
| MCP.Directory page | /servers/chrome-devtools | /servers/playwright-browser-automation | /servers/puppeteer |

Three things jump out of the matrix. Microsoft’s Playwright MCP has the largest star count (7,331) and the broadest browser support — that’s the dominant ecosystem signal for cross-browser work. Chrome DevTools MCP is the only one with built-in performance tracing, which is the single most important capability if your agent needs to measure Web Vitals or diagnose a slow render. Puppeteer MCP is the smallest tool surface (7 tools), which is a feature, not a bug, if you want a minimal dependency for a small set of automation tasks.

Chrome DevTools MCP — install + recipe

What it does best

This is the only one of the three that turns the Chrome DevTools Protocol into agent tools. The headline trio — performance_start_trace, performance_stop_trace, performance_analyze_insight — gives an LLM the same Core Web Vitals breakdown a human gets from the Performance panel, plus structured insights it can act on. Pair that with list_console_messages and list_network_requests and the agent can diagnose a slow render end-to-end, then patch the offending source in the same conversation.

Pick this if you...

  • Need real Core Web Vitals numbers (LCP, INP, CLS) from inside an agent loop, not a separate Lighthouse run.
  • Want the agent to inspect console errors and network waterfalls while it’s debugging a page.
  • Are already standardized on Chrome and don’t need Firefox or WebKit coverage.
  • Plan to chain “measure, then fix” in one conversation — perf data into a code edit.
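
For clients that take a manual JSON config, a minimal entry looks like the sketch below, built from the install command in the matrix above. The top-level "mcpServers" key is the convention Claude Desktop and Cursor use; other clients may name it differently:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```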

Recipe: profile a checkout flow and report the worst Web Vitals

Drop this into Cursor or Claude Code with Chrome DevTools MCP installed:

Use chrome-devtools. Open a new page and navigate to
https://shop.example.com/checkout. Start a performance trace,
click "Add to cart", wait for the cart drawer to render, then
stop the trace. Run performance_analyze_insight on every
highlighted insight. Then call list_console_messages and
list_network_requests. Return JSON with: LCP, INP, CLS, the
three slowest network requests with status + duration, and the
top three performance insights with file paths or URLs where
applicable.

The agent invokes new_page, navigate_page, performance_start_trace, the click, then performance_stop_trace, and walks each insight via performance_analyze_insight. The console + network calls fold in alongside the Web Vitals, so a follow-up “now patch the worst offender” message can edit the source file in the same thread.

Skip it if...

You need cross-browser coverage. Chrome DevTools MCP only attaches to Chrome / Chromium via CDP — Firefox- or WebKit-specific bugs need Playwright MCP instead. It also launches a debug Chrome that doesn’t share your normal profile, so logged-in flows need an explicit --user-data-dir or in-agent sign-in.

Playwright MCP — install + recipe

What it does best

Built around the accessibility-snapshot model: browser_snapshot hands the model a structured a11y tree with stable element refs instead of a screenshot. Clicks and form fills target those nodes, so action chains survive layout reflows that would break pixel-driven automation. It’s text-only, deterministic, dramatically cheaper in tokens, and the only one of the three that drives Chromium, Firefox, and WebKit from a single tool surface — meaning one prompt can reproduce a regression across all three engines.

Pick this if you...

  • Need cross-browser coverage — Chromium, Firefox, and WebKit all from one server.
  • Want deterministic action chains (click, fill, drag, keypress) instead of screenshot-and-guess.
  • Care about token cost — a11y snapshots are far cheaper than describing images.
  • Are reproducing a regression a customer reported in a specific browser.
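
A manual JSON config sketch, pinning Firefox and headless mode via the flags this post references (the "mcpServers" key follows the common client convention; adjust for your client, and omit the flags to take the server's defaults):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--browser", "firefox", "--headless"]
    }
  }
}
```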

Recipe: reproduce a Firefox-only checkout regression

Drop this into your agent with @playwright/mcp installed:

Use playwright. Launch with --browser firefox. Navigate to
https://my-app.example.com/checkout. Take a snapshot, find the
"Email" field, fill with [email protected], find "Continue"
and click. Take another snapshot, find the "Card number" field,
fill 4242 4242 4242 4242, fill expiry 12/30, fill CVC 123,
click "Pay $24.99". Wait for "Order confirmed" text. If it
doesn't appear within 5s, switch --browser to chromium and run
the same flow. Return a diff describing where Firefox diverged.

Each browser_snapshot returns stable refs the model passes to browser_click and browser_fill_form, so clicks land on the correct nodes even when the layout reflows mid-flow. Switching --browser mid-conversation keeps the same tool calls but swaps engines, which is the whole point of the cross-browser surface.

Skip it if...

You need DevTools-Protocol-grade profiling — there’s no equivalent of performance_start_trace here, so Core Web Vitals work belongs to Chrome DevTools MCP. First-run cost is also real: browser_install pulls Chromium + Firefox + WebKit, which can run 30–90s on a cold CI machine that doesn’t cache the binaries.

Puppeteer MCP — install + recipe

What it does best

This is the “just the basics” option: puppeteer_navigate, puppeteer_screenshot, puppeteer_click, puppeteer_fill, puppeteer_select, puppeteer_hover, puppeteer_evaluate. Seven tools, no accessibility-snapshot abstraction — the agent either screenshots and reasons visually or runs JS against the DOM. As the reference server in the canonical modelcontextprotocol/servers repo, it’s the smallest dependency you can get for “open a page, look at it, click something, read the result.”

Pick this if you...

  • Want the smallest possible tool surface — seven tools, nothing fancy.
  • Already have Chromium installed and don’t want a second browser bundle.
  • Need quick screenshot-driven smoke tests rather than deterministic E2E flows.
  • Trust an agent that can drop into puppeteer_evaluate to read the DOM directly when needed.
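
A manual JSON config sketch; the env block is only needed when you want Puppeteer to use an existing browser instead of its own download, and the executable path shown is an illustrative example:

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
      "env": {
        "PUPPETEER_EXECUTABLE_PATH": "/usr/bin/chromium"
      }
    }
  }
}
```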

Recipe: 60-second staging smoke test

Drop this into your agent with @modelcontextprotocol/server-puppeteer installed:

Use puppeteer. Navigate to https://staging.example.com.
Take a screenshot. Use puppeteer_evaluate to return JSON with:
document.title, the count of <img> tags with naturalWidth === 0,
and the URLs from window.performance.getEntriesByType('resource')
that have transferSize === 0 (likely failed). Then puppeteer_click
the primary nav link "Pricing" and screenshot again. Report both
screenshots' general layout plus the JSON of broken assets.

Two screenshots, one navigation, one evaluate call. The agent uses the first screenshot to confirm the page rendered, the JS evaluate to surface broken images and failed network requests, and the second screenshot to prove the click landed on the right page. No browser binaries to provision, no a11y snapshot to parse.

Skip it if...

You need Firefox or WebKit (Puppeteer is Chromium-only), or you want the agent to do real form-heavy E2E work — Playwright MCP’s a11y-snapshot model is more reliable than asking a model to interpret a screenshot. In Docker / locked-down envs you’ll also hit the “could not find Chrome” error unless you set PUPPETEER_EXECUTABLE_PATH or pre-install Chromium.

Pricing + hosting cost

All three servers are free and open source. There is no paid tier for any of them and no hosted version to pay for — they all run as local stdio processes spawned via npx. The actual cost is infrastructure:

| Cost line | Chrome DevTools MCP | Playwright MCP | Puppeteer MCP |
| --- | --- | --- | --- |
| Server license | Free (Apache 2.0) | Free (Apache 2.0) | Free (MIT) |
| API key required | No | No | No |
| Browser binary | System Chrome (existing) | Auto-downloaded (Chromium/Firefox/WebKit) | Auto-downloaded Chromium |
| Disk footprint | Negligible (uses system Chrome) | ~700 MB (all three browsers) | ~280 MB (Chromium) |
| Memory at idle | ~150 MB (Chrome attached) | ~250 MB (Chromium) | ~200 MB (Chromium) |
| CI cache | Just system Chrome | Cache /ms-playwright | Cache ~/.cache/puppeteer |
| Cloud-host friendly | Limited (needs real Chrome) | Yes (Docker images published) | Yes (well-documented Docker) |

Disk and memory numbers are approximate orders-of-magnitude from each project’s published install footprints, not a single benchmark run. Your numbers will vary by OS, Node version, and which browsers you actually exercise.

Free and open-source alternatives — Firecrawl, Browserbase, and friends

The three servers in this post all live in the same niche: drive a real browser locally. There’s a parallel category of MCP servers that solve adjacent problems and are worth knowing about — sometimes you don’t need browser automation at all, you just need clean Markdown from a URL, or you can’t run Chrome locally and need a managed cloud session.

Want clean Markdown from any URL, no UI driving?

Firecrawl MCP handles render-and-parse as a hosted API. Free tier plus paid plans on usage. Right tool when you want page contents, not interaction.

Need a managed remote browser (CI runners, Lambda, locked-down machines)?

Browserbase MCP gives an agent a remote Chrome session in their cloud. Right tool when local Chrome won’t work.

Want crawl-the-web as an MCP, not single URLs?

Firecrawl plus newer entrants (anycrawl, crawleo) all expose multi-URL crawling as MCP tools. None of them drive interactive flows the way Playwright MCP does; they’re scrape-and-parse layers. We compare them in our Firecrawl vs anycrawl vs crawleo vs Playwright (placeholder — post in progress) deep-dive.

Want a free-forever hosted browser MCP with no quota?

Doesn’t exist. Hosted-cloud browser sessions cost real money (Browserbase, anchor browsers, etc. all meter you). The only path to “unbounded free” is running one of the three servers in this post against your local Chrome.

Benchmark them yourself

We don’t publish a one-shot latency benchmark in this post. CPU, OS, Node version, network conditions, and target page all affect the result enough that a single run from one machine isn’t representative. Spend fifteen minutes:

# 1. Cold-start time — how long from the first tool call to a
#    rendered page?
time npx chrome-devtools-mcp@latest --version
time npx @playwright/mcp@latest --version
time npx -y @modelcontextprotocol/server-puppeteer --version

# 2. Single-page scrape latency. Pick a stable page (e.g.
#    https://example.com) and measure navigate + snapshot time
#    averaged over 10 calls per server.

# 3. Memory footprint while attached. After connection,
#    /usr/bin/time -l (macOS) or /usr/bin/time -v (Linux) on
#    the spawned node process plus the Chrome PID it controls.

# 4. Error rate on a 100-page sample. Pick 100 URLs from your
#    real workload (sitemap, top traffic, whatever). Run each
#    through navigate + snapshot. Record HTTP errors, JS
#    timeouts, snapshot failures.

# 5. Tool-surface noise. Count tokens used by each server's
#    tool descriptions in the initial system prompt. Bigger
#    surface = more standing context cost per turn.

# Compare:
#   - Chrome DevTools MCP (26 tools, perf traces native)
#   - Playwright MCP (22 tools, a11y default, 3 browsers)
#   - Puppeteer MCP (7 tools, screenshot-only)

The result is workload-specific. Performance-oriented workloads — “why is this page slow” agents — lean toward Chrome DevTools MCP because the trace tools are native. Cross-browser regression checks lean toward Playwright MCP. Quick scrape-and-screenshot scripts lean toward Puppeteer MCP because the surface is small enough that the model rarely picks a wrong tool. Run the methodology, don’t trust ours.

Common pitfalls (regardless of which one you pick)

Stacking all three in one MCP config

That’s 26 + 22 + 7 = 55 browser-related tool descriptions in every prompt — the model gets confused about which server to call and burns context on tool-selection reasoning. Pick one per workspace. If you genuinely need profiling and cross-browser coverage, run two configs (one per project), not all three at once.

Playwright requires browser binaries; Puppeteer assumes Chromium is in PATH

Playwright MCP’s browser_install tool triggers the binary download lazily on first use, which can fail in restricted networks. Puppeteer downloads Chromium at npm-install time unless you set PUPPETEER_SKIP_DOWNLOAD=1. Chrome DevTools MCP attaches to your existing Chrome — no download but needs Chrome to actually be installed locally.

Treating screenshots as the default

For Playwright MCP and Chrome DevTools MCP, the accessibility-snapshot tool is the right default — it returns text the model can reason over without spending a thousand tokens describing a screenshot. Reach for browser_take_screenshot / take_screenshot only when you need pixels (visual-regression, OCR, a real preview for the user).

Headed mode for long-running agents

Chrome DevTools MCP attaches to a real Chrome window by default — fine for ad-hoc debugging, terrible for overnight agent runs because the OS will eventually decide to swap, lock, or sleep. For long-running work pass --headless and isolate the user-data-dir.
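
A config sketch for that long-running mode, using the two flags this post mentions (exact flag spellings can change between releases, so verify against the server's README; the profile path is an illustrative example):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": [
        "chrome-devtools-mcp@latest",
        "--headless",
        "--user-data-dir=/tmp/agent-chrome-profile"
      ]
    }
  }
}
```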

Forgetting the cross-browser cost

Playwright MCP’s “works in Firefox and WebKit too” pitch is real, but the binaries it downloads add about 700 MB to disk and ~30 seconds to cold-start on a fresh machine. If you only ever automate Chromium, that overhead is dead weight — Puppeteer MCP’s smaller surface is genuinely lighter for single-browser workloads.

Community signal

The three servers occupy distinct niches in community discussion. Microsoft’s Playwright MCP gets the most ecosystem amplification on each new Playwright release — the team blogs about it on the official Playwright dev site, and tutorial authors gravitate toward it because Playwright already has the most tested cross-browser story. Chrome DevTools MCP arrived later, and most of the discussion centers on the performance-tracing capability — the ability for an agent to take a Chrome trace, read insights, and patch the responsible source file in a single conversation is the use case people consistently highlight. Puppeteer MCP is treated as the canonical reference: the “hello world” people install first when they want to try MCP, and the implementation people read when they want to understand the protocol.

We’re deliberately not posting verbatim Reddit/HN quotes here because the cluster of unverified “MCP is great” threads runs hot enough that we’d rather under-claim than misattribute. The primary signals to read instead: each project’s GitHub issues queue (it’s where the real complaints live), the Microsoft Playwright team’s release notes, and Google’s Chrome DevTools blog when they ship new tracing primitives.

Frequently asked questions

What's the simplest way to compare Chrome DevTools, Playwright, and Puppeteer MCP?

Chrome DevTools MCP from the Chrome team gives an agent direct DevTools-protocol access to a real Chrome browser, with built-in performance tracing and Core Web Vitals reporting. Playwright MCP from Microsoft uses accessibility-tree snapshots (not screenshots) for fast, deterministic actions across Chromium, Firefox, and WebKit. Puppeteer MCP from Anthropic is the original reference server — Chromium-only, screenshot-driven, with a small fixed tool surface. Pick by browser scope (one vs three), interaction style (a11y tree vs CDP vs screenshots), and whether you need profiling.

Is Chrome DevTools MCP free?

Yes. Chrome DevTools MCP is published by the chromedevtools GitHub organization under the Apache 2.0 license at github.com/chromedevtools/chrome-devtools-mcp. The npm package chrome-devtools-mcp is free; the only cost is the local Chrome process the server attaches to. There is no hosted tier and no API key.

Does Playwright MCP support Firefox and WebKit?

Yes. Microsoft's Playwright MCP (github.com/microsoft/playwright-mcp, package @playwright/mcp) inherits Playwright's cross-browser engine: Chromium, Firefox, and WebKit are all supported via the same accessibility-snapshot tool surface. You select the browser at launch with --browser chromium|firefox|webkit. The server installs missing browser binaries automatically with the browser_install tool when an action fails because a binary is missing.

Can I use Puppeteer MCP without Chromium installed?

No. The official @modelcontextprotocol/server-puppeteer package wraps Puppeteer, which downloads a Chromium build on first install or expects one in PATH via PUPPETEER_EXECUTABLE_PATH. If you skip the download with PUPPETEER_SKIP_DOWNLOAD=1 you must point the server at an existing Chromium or Chrome binary. Puppeteer MCP does not support Firefox or WebKit.

Chrome DevTools vs Playwright — which is better for AI agents?

Different jobs. Chrome DevTools MCP wins when the agent needs to debug real-page behavior — Core Web Vitals traces, network requests, console messages, performance insights — because it speaks the Chrome DevTools Protocol natively. Playwright MCP wins when the agent needs deterministic action chains (click this, fill that, wait for selector) across browsers, because the accessibility-snapshot model is text-only, fast, and survives layout reflows that break screenshot-based clicks.

Why would I use Playwright MCP instead of Puppeteer MCP?

Three reasons. First, Playwright MCP returns an accessibility-tree snapshot the model can reason over directly, which is meaningfully cheaper in tokens and less error-prone than describing screenshots. Second, Playwright supports Firefox and WebKit; Puppeteer is Chromium-only. Third, Microsoft ships @playwright/mcp continuously alongside Playwright itself — the Puppeteer MCP is part of the modelcontextprotocol/servers reference repo and changes more slowly. The catch: Playwright requires browser binaries downloaded by browser_install before first use.

Can I use these MCP servers with Cursor, VS Code, and Claude Code?

Yes — all three are stdio servers that run via npx, which every major MCP client supports. Each tool's install card on this page covers Cursor, VS Code, Claude Desktop, Claude Code, Gemini CLI, Codex, Windsurf, ChatGPT Desktop, and a manual JSON config. Chrome DevTools MCP and Puppeteer MCP launch their own browser; Playwright MCP can attach to an existing Chrome via --cdp-endpoint when you already have a session running.

What about Firecrawl, Browserbase, and BB Browser MCP — are those alternatives?

They solve adjacent problems. Firecrawl MCP is a hosted scraping API — give it a URL, get clean Markdown back; it's the right tool when you want pages rendered and parsed, not when you need to drive a UI. Browserbase MCP gives an agent a managed remote browser session in their cloud, which is the answer when local Chrome won't work (CI runners, Lambda, locked-down corporate machines). For raw automation on your machine, the three servers in this post remain the canonical choices.

Sources

Chrome DevTools MCP

Playwright MCP

Puppeteer MCP
