# Prysm Web Scraper
Extracts content from web pages with three scraping modes (focused, balanced, deep), with optional image extraction and pagination handling. It exposes three specialized tools (scrapeFocused, scrapeBalanced, scrapeDeep) plus result formatting, all with customizable parameters.
## What it does
- Scrape web pages with focused, balanced, or deep extraction modes
- Extract and optionally download images from web pages
- Handle single-page applications with smart scrolling
- Format scraped content as markdown, HTML, or JSON
- Analyze URLs to determine optimal scraping approach
- Save formatted results to files
## About Prysm Web Scraper

Prysm Web Scraper is a community-built MCP server published by pinkpixel-dev that provides AI assistants with tools and capabilities via the Model Context Protocol. It is an efficient web scraping tool offering focused, balanced, and deep modes for flexible content extraction, and is categorized under search and web tools.
## How to install
You can install Prysm Web Scraper in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
## License
Prysm Web Scraper is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
# Prysm MCP Server
The Prysm MCP (Model Context Protocol) Server enables AI assistants like Claude and others to scrape web content with high accuracy and flexibility.
## Features
- Multiple Scraping Modes: Choose from focused (speed), balanced (default), or deep (thorough) modes
- Content Analysis: Analyze URLs to determine the best scraping approach
- Format Flexibility: Format results as markdown, HTML, or JSON
- Image Support: Optionally extract and even download images
- Smart Scrolling: Configure scroll behavior for single-page applications
- Responsive: Adapts to different website layouts and structures
- File Output: Save formatted results to your preferred directory
## Quick Start

### Installation
```bash
# Recommended: Install the LLM-optimized version
npm install -g @pinkpixel/prysm-mcp

# Or install the standard version
npm install -g prysm-mcp

# Or clone and build
git clone https://github.com/pinkpixel-dev/prysm-mcp.git
cd prysm-mcp
npm install
npm run build
```
### Integration Guides
We provide detailed integration guides for popular MCP-compatible applications:
- Cursor Integration Guide
- Claude Desktop Integration Guide
- Windsurf Integration Guide
- Cline Integration Guide
- Roo Code Integration Guide
- Open WebUI Integration Guide
### Usage
There are multiple ways to set up Prysm MCP Server:
#### Using mcp.json Configuration

Create an `mcp.json` file in the appropriate location according to the guides above.
```json
{
  "mcpServers": {
    "prysm-scraper": {
      "description": "Prysm web scraper with custom output directories",
      "command": "npx",
      "args": [
        "-y",
        "@pinkpixel/prysm-mcp"
      ],
      "env": {
        "PRYSM_OUTPUT_DIR": "${workspaceFolder}/scrape_results",
        "PRYSM_IMAGE_OUTPUT_DIR": "${workspaceFolder}/scrape_results/images"
      }
    }
  }
}
```
## Tools

The server provides the following tools:
### scrapeFocused

Fast web scraping optimized for speed (fewer scrolls, main content only).

```
Please scrape https://example.com using the focused mode
```

Available Parameters:

- `url` (required): URL to scrape
- `maxScrolls` (optional): Maximum number of scroll attempts (default: 5)
- `scrollDelay` (optional): Delay between scrolls in ms (default: 1000)
- `scrapeImages` (optional): Whether to include images in results
- `downloadImages` (optional): Whether to download images locally
- `maxImages` (optional): Maximum images to extract
- `output` (optional): Output directory for downloaded images
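Under the hood, an MCP client invokes tools like this one with a `tools/call` JSON-RPC request. The sketch below shows roughly what such a request body could look like for `scrapeFocused`; the argument values are illustrative choices, not required settings.

```javascript
// Illustrative MCP "tools/call" request body for scrapeFocused.
// The argument values are example choices, not defaults or requirements.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "scrapeFocused",
    arguments: {
      url: "https://example.com", // required
      maxScrolls: 5,              // optional
      scrapeImages: true          // optional
    }
  }
};

console.log(JSON.stringify(request, null, 2));
```

In practice your MCP client builds this request for you; the natural-language prompt above is all you need.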
### scrapeBalanced

Balanced web scraping approach with good coverage and reasonable speed.

```
Please scrape https://example.com using the balanced mode
```

Available Parameters:

- Same as `scrapeFocused`, with different defaults:
  - `maxScrolls` default: 10
  - `scrollDelay` default: 2000
- Adds a `timeout` parameter to limit total scraping time (default: 30000 ms)
### scrapeDeep

Maximum extraction web scraping (slower but thorough).

```
Please scrape https://example.com using the deep mode with maximum scrolls
```

Available Parameters:

- Same as `scrapeFocused`, with different defaults:
  - `maxScrolls` default: 20
  - `scrollDelay` default: 3000
  - `maxImages` default: 100
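The three modes share one parameter set and differ mainly in their defaults. As a sketch, the per-mode defaults listed above can be represented like this; the values come from the parameter lists, but the `mergeOptions` helper is illustrative, not part of the Prysm API.

```javascript
// Per-mode defaults as documented above.
const MODE_DEFAULTS = {
  scrapeFocused:  { maxScrolls: 5,  scrollDelay: 1000 },
  scrapeBalanced: { maxScrolls: 10, scrollDelay: 2000, timeout: 30000 },
  scrapeDeep:     { maxScrolls: 20, scrollDelay: 3000, maxImages: 100 },
};

// Illustrative helper: user-supplied options override the mode defaults.
function mergeOptions(mode, userOptions) {
  return { ...MODE_DEFAULTS[mode], ...userOptions };
}

console.log(mergeOptions("scrapeDeep", { maxScrolls: 40 }));
// → { maxScrolls: 40, scrollDelay: 3000, maxImages: 100 }
```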
### formatResult

Format scraped data into different structured formats (markdown, HTML, JSON).

```
Format the scraped data as markdown
```

Available Parameters:

- `data` (required): The scraped data to format
- `format` (required): Output format: "markdown", "html", or "json"
- `includeImages` (optional): Whether to include images in output (default: true)
- `output` (optional): File path to save the formatted result

You can also save formatted results to a file by specifying an output path:

```
Format the scraped data as markdown and save it to "my-results/output.md"
```
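To give a feel for what a markdown conversion of scraped data looks like, here is a hypothetical sketch. Both the shape of the scraped object and the `toMarkdown` function are assumptions for illustration, not Prysm's actual internals.

```javascript
// Hypothetical scraped-data shape and markdown converter,
// sketched for illustration only (not Prysm's real implementation).
function toMarkdown(data, includeImages = true) {
  const lines = [`# ${data.title}`, "", data.content];
  if (includeImages && Array.isArray(data.images)) {
    for (const img of data.images) {
      lines.push(`![${img.alt || "image"}](${img.src})`);
    }
  }
  return lines.join("\n");
}

const md = toMarkdown({
  title: "Example Domain",
  content: "This domain is for use in illustrative examples.",
  images: [{ src: "https://example.com/logo.png", alt: "logo" }],
});
console.log(md);
```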
## Configuration

### Output Directory

By default, formatted results are saved to `~/prysm-mcp/output/`. You can customize this in three ways:
- Environment Variables: Set environment variables to your preferred directories:

  ```bash
  # Linux/macOS
  export PRYSM_OUTPUT_DIR="/path/to/custom/directory"
  export PRYSM_IMAGE_OUTPUT_DIR="/path/to/custom/image/directory"

  # Windows (Command Prompt)
  set PRYSM_OUTPUT_DIR=C:\path\to\custom\directory
  set PRYSM_IMAGE_OUTPUT_DIR=C:\path\to\custom\image\directory

  # Windows (PowerShell)
  $env:PRYSM_OUTPUT_DIR="C:\path\to\custom\directory"
  $env:PRYSM_IMAGE_OUTPUT_DIR="C:\path\to\custom\image\directory"
  ```
- Tool Parameter: Specify output paths directly when calling the tools:

  ```
  # For general results
  Format the scraped data as markdown and save it to "/absolute/path/to/file.md"

  # For image downloads when scraping
  Please scrape https://example.com and download images to "/absolute/path/to/images"
  ```
- MCP Configuration: In your MCP configuration file (e.g., `.cursor/mcp.json`), you can set these environment variables:

  ```json
  {
    "mcpServers": {
      "prysm-scraper": {
        "command": "npx",
        "args": ["-y", "@pinkpixel/prysm-mcp"],
        "env": {
          "PRYSM_OUTPUT_DIR": "${workspaceFolder}/scrape_results",
          "PRYSM_IMAGE_OUTPUT_DIR": "${workspaceFolder}/scrape_results/images"
        }
      }
    }
  }
  ```
If `PRYSM_IMAGE_OUTPUT_DIR` is not specified, it defaults to a subfolder named `images` inside `PRYSM_OUTPUT_DIR`. If you provide only a relative path or filename, it is saved relative to the configured output directory.
### Path Handling Rules

The `formatResult` tool handles paths in the following ways:

- Absolute paths: used exactly as provided (`/home/user/file.md`)
- Relative paths: saved relative to the configured output directory (`subfolder/file.md`)
- Filename only: saved in the configured output directory (`output.md`)
- Directory path: if the path points to a directory, a filename is auto-generated based on content and timestamp
## Development

```bash
# Install dependencies
npm install

# Build the project
npm run build

# Run the server locally
node bin/prysm-mcp

# Debug MCP communication
DEBUG=mcp:* node bin/prysm-mcp

# Set custom output directories
PRYSM_OUTPUT_DIR=./my-output PRYSM_IMAGE_OUTPUT_DIR=./my-output/images node bin/prysm-mcp
```
### Running via npx

You can run the server directly with `npx` without installing:
```bash
# Run with default settings
npx @pinkpixel/prysm-mcp

# Run with custom output directories
PRYSM_OUTPUT_DIR=./my-output PRYSM_IMAGE_OUTPUT_DIR=./my-output/images npx @pinkpixel/prysm-mcp
```
## License
MIT
## Credits
Developed by Pink Pixel
Powered by the Model Context Protocol and Puppeteer