
Playwright Scraper
Scrapes content from JavaScript-heavy websites using Playwright browser automation and BeautifulSoup, converting complex web pages into clean, high-quality Markdown.
What it does
- Scrape JavaScript-heavy websites
- Convert web content to Markdown
- Handle modern web pages with browser automation
- Parse and clean HTML content
- Control SSL certificate verification
About Playwright Scraper
Playwright Scraper is a community-built MCP server published by dennisgl that provides AI assistants with tools and capabilities via the Model Context Protocol. It uses Playwright and BeautifulSoup in Python for robust scraping of JavaScript-heavy pages and content extraction, and is categorized under web search. The server exposes one tool that AI clients can invoke during conversations and coding sessions.
How to install
You can install Playwright Scraper in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Playwright Scraper is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (1)
scrape_to_markdown: Scrape a URL and convert the content to Markdown
mcp-playwright-scraper
A Model Context Protocol (MCP) server that scrapes web content and converts it to Markdown.
Overview
This MCP server provides a simple tool for scraping web content and converting it to Markdown format. It uses:
- Playwright: For headless browser automation to handle modern web pages including JavaScript-heavy sites
- BeautifulSoup: For HTML parsing and cleanup
- Pypandoc: For high-quality HTML to Markdown conversion
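To make the division of labor concrete, here is a minimal illustrative sketch of such a pipeline in Python (this is not the server's actual source code; the function name and the url / verify_ssl parameters simply mirror the tool's interface described below):
from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup
import pypandoc

def scrape_to_markdown(url: str, verify_ssl: bool = True) -> str:
    # Render the page in a headless Chromium instance so JavaScript runs.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(ignore_https_errors=not verify_ssl)
        page = context.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
    # Strip scripts and styles so only readable content reaches the converter.
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    # Convert the cleaned HTML to Markdown with Pandoc (via pypandoc).
    return pypandoc.convert_text(str(soup), "markdown", format="html")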
Tools
The server implements a single tool:
scrape_to_markdown: Scrapes content from a URL and converts it to Markdown
- Required parameter: url (string) - The URL to scrape
- Optional parameter: verify_ssl (boolean) - Whether to verify SSL certificates (default: true)
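As an illustration, an MCP client invokes this tool with a tools/call request whose arguments match the parameters above (the URL is a placeholder):
{
  "method": "tools/call",
  "params": {
    "name": "scrape_to_markdown",
    "arguments": {
      "url": "https://example.com",
      "verify_ssl": true
    }
  }
}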
Installation
Using uv (recommended)
When using uv, no specific installation is needed. We will use uvx to run mcp-playwright-scraper directly.
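For example, the server can be started directly with:
uvx mcp-playwright-scraper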
Using PIP
Alternatively you can install mcp-playwright-scraper via pip:
pip install mcp-playwright-scraper
After installation, you can run it as a script using:
python -m mcp_playwright_scraper
Prerequisites
- Python 3.11 or higher
- Playwright browser dependencies
- Pandoc (optional, will be automatically installed by pypandoc if possible)
After installation, you need to install Playwright browser dependencies:
playwright install --with-deps chromium
Configuration
Usage with Claude Desktop
Add this to your claude_desktop_config.json:
Using uvx
"mcpServers": {
"mcp-playwright-scraper": {
"command": "uvx",
"args": ["mcp-playwright-scraper"]
}
}
Using pip installation
"mcpServers": {
"mcp-playwright-scraper": {
"command": "python",
"args": ["-m", "mcp_playwright_scraper"]
}
}
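Both fragments are meant to sit inside the top-level object of claude_desktop_config.json; assuming the uvx form, a minimal complete file would look like this:
{
  "mcpServers": {
    "mcp-playwright-scraper": {
      "command": "uvx",
      "args": ["mcp-playwright-scraper"]
    }
  }
}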
Usage with Claude Code
# Basic syntax
$ claude mcp add mcp-playwright-scraper -- uvx mcp-playwright-scraper
# Alternatively, with pip installation
$ claude mcp add mcp-playwright-scraper -- python -m mcp_playwright_scraper
Development/Unpublished Servers Configuration
"mcpServers": {
"mcp-playwright-scraper": {
"command": "uv",
"args": [
"--directory",
"/path/to/mcp-playwright-scraper",
"run",
"mcp-playwright-scraper"
]
}
}
Usage with Zed
Add to your Zed settings.json:
Using uvx
"context_servers": [
"mcp-playwright-scraper": {
"command": {
"path": "uvx",
"args": ["mcp-playwright-scraper"]
}
}
},
Using pip installation
"context_servers": {
"mcp-playwright-scraper": {
"command": "python",
"args": ["-m", "mcp_playwright_scraper"]
}
},
Usage with Cursor
- Open Cursor Settings
- Navigate to Cursor Settings > Features > MCP
- Click the "+ Add New MCP Server" button
- Configure the Server
- Name: mcp-playwright-scraper
- Type: Select stdio
- Command: Enter one of the following:
Using uvx
uvx mcp-playwright-scraper
Using pip installation
python -m mcp_playwright_scraper
Usage
Once configured in Claude Desktop, you can explicitly use the scraper with a prompt like:
Use the mcp-playwright-scraper to scrape the content from https://example.com and summarize it.
Debugging
You can use the MCP inspector to debug the server:
npx @modelcontextprotocol/inspector uvx mcp-playwright-scraper
Or if you've installed the package in a specific directory or are developing on it:
cd path/to/mcp-playwright-scraper
npx @modelcontextprotocol/inspector uv run mcp-playwright-scraper
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
Development
Building and Publishing
To prepare the package for distribution:
- Sync dependencies and update lockfile:
uv sync
- Build package distributions:
uv build
This will create source and wheel distributions in the dist/ directory.
- Publish to PyPI:
uv publish
Note: You'll need to set PyPI credentials via environment variables or command flags:
- Token: --token or UV_PUBLISH_TOKEN
- Or username/password: --username / UV_PUBLISH_USERNAME and --password / UV_PUBLISH_PASSWORD
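For example, assuming a PyPI API token (placeholder value shown), either form works:
uv publish --token <your-pypi-token>
UV_PUBLISH_TOKEN=<your-pypi-token> uv publish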
License
This MCP server is licensed under the Apache License, Version 2.0. You are free to use, modify, and distribute the software, subject to the terms and conditions of the Apache License 2.0. For more details, please see the LICENSE file in the project repository or visit http://www.apache.org/licenses/LICENSE-2.0.