
Scraper.is
Connects to the Scraper.is API to extract web content and convert it to structured formats such as Markdown or JSON, and captures screenshots of web pages for visual analysis. Useful for tasks like product research, news aggregation, and content analysis.
What it does
- Extract content from any website
- Convert web pages to Markdown format
- Capture screenshots of web pages
- Parse structured data from websites
- Get content in HTML or JSON formats
- Track scraping progress in real-time
About Scraper.is
Scraper.is is an official MCP server published by ai-quill that provides AI assistants with tools and capabilities via the Model Context Protocol. It integrates with the Scraper.is API for efficient web scraping, data extraction, and web page scraping from any website. It is categorized under search web.
How to install
You can install Scraper.is in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Scraper.is is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Scraper.is MCP
A Model Context Protocol (MCP) integration for Scraper.is - A powerful web scraping tool for AI assistants.
This package allows AI assistants to scrape web content through the MCP protocol, enabling them to access up-to-date information from the web.
Features
- Web Scraping: Extract content from any website
- Screenshots: Capture visual representations of web pages
- Multiple Formats: Get content in markdown, HTML, or JSON
- Progress Updates: Real-time progress reporting during scraping operations
- MCP Integration: Seamless integration with MCP-compatible AI assistants
Installation
Installing via Smithery
To install scaperis-mcp for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @Ai-Quill/scaperis-mcp --client claude
Manual Installation
npm install -g scraperis-mcp
Or with yarn:
yarn global add scraperis-mcp
Prerequisites
You need a Scraper.is API key to use this package.
Getting Your API Key
- Sign up or log in at scraper.is
- Navigate to the API Keys section in your dashboard: https://www.scraper.is/dashboard/apikeys
- Create a new API key or copy your existing key
- Store this key securely as you'll need it to use this package
Usage
Environment Setup
Create a .env file with your Scraper.is API key:
SCRAPERIS_API_KEY=your_api_key_here
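
The server reads this key from the environment at startup. As a minimal sketch of the check this implies (the helper name is hypothetical, not the package's actual code):

```typescript
// Hypothetical helper: fail fast when the Scraper.is API key is missing.
// Mirrors what the .env setup above provides; not part of the published package.
function requireApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.SCRAPERIS_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("SCRAPERIS_API_KEY is not set; add it to your .env file");
  }
  return key;
}
```

Failing at startup with a clear message is preferable to surfacing an authentication error mid-scrape.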
Claude Desktop Integration
To use this package with Claude Desktop:
1. Install the package globally:
   npm install -g scraperis-mcp
2. Add the following configuration to your claude_desktop_config.json file:
   {
     "mcpServers": {
       "scraperis_scraper": {
         "command": "scraperis-mcp",
         "args": [],
         "env": {
           "SCRAPERIS_API_KEY": "your-api-key-here",
           "DEBUG": "*"
         }
       }
     }
   }
3. Replace your-api-key-here with your actual Scraper.is API key.
4. Restart Claude Desktop to apply the changes.
Running with MCP Inspector
For development and testing, you can use the MCP Inspector:
npx @modelcontextprotocol/inspector scraperis-mcp
Integration with AI Assistants
This package is designed to be used with AI assistants that support the Model Context Protocol (MCP). When properly configured, the AI assistant can use the following tools:
Scrape Tool
The scrape tool allows the AI to extract content from websites. It supports various formats:
- markdown: Returns the content in markdown format
- html: Returns the content in HTML format
- screenshot: Returns a screenshot of the webpage
- json: Returns structured data in JSON format
Example prompt for the AI:
Can you scrape the latest news from techcrunch.com and summarize it for me?
API Reference
Tools
scrape
Scrapes content from a webpage based on a prompt.
Parameters:
- prompt (string): The prompt describing what to scrape, including the URL
- format (string): The format to return the content in (markdown, html, screenshot, json, quick)
Example:
{
"prompt": "Get me the top 10 products from producthunt.com",
"format": "markdown"
}
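
The parameter contract above can be sketched as a small client-side validator. This is illustrative only; the function and constant names are assumptions, and the real server performs its own checks:

```typescript
// Illustrative only: enforce the scrape tool's documented parameter contract.
const SCRAPE_FORMATS = ["markdown", "html", "screenshot", "json", "quick"] as const;
type ScrapeFormat = (typeof SCRAPE_FORMATS)[number];

interface ScrapeParams {
  prompt: string; // should describe what to scrape and include the URL
  format: ScrapeFormat;
}

// Hypothetical validator name; not part of the published package.
function validateScrapeParams(input: { prompt?: string; format?: string }): ScrapeParams {
  if (!input.prompt || input.prompt.trim() === "") {
    throw new Error("prompt is required and should include the target URL");
  }
  if (!input.format || !(SCRAPE_FORMATS as readonly string[]).includes(input.format)) {
    throw new Error(`format must be one of: ${SCRAPE_FORMATS.join(", ")}`);
  }
  return { prompt: input.prompt, format: input.format as ScrapeFormat };
}
```

Validating arguments before dispatching a tool call gives the assistant an actionable error instead of a failed round trip to the API.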
Development
Setup
1. Clone the repository:
   git clone https://github.com/Ai-Quill/scraperis-mcp.git
   cd scraperis-mcp
2. Install dependencies:
   npm install
3. Build the project:
   npm run build
Scripts
- npm run build: Build the project
- npm run watch: Watch for changes and rebuild
- npm run dev: Run with MCP Inspector for development
- npm run test: Run tests
- npm run lint: Run ESLint
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.