Scraper.is

Official MCP server by ai-quill · Local (stdio)

Connects to the Scraper.is API to extract web content and convert it to structured formats like Markdown or JSON, and takes screenshots of web pages for visual analysis. Enables structured data parsing and Markdown conversion for tasks like product research, news aggregation, and content analysis.

What it does

  • Extract content from any website
  • Convert web pages to Markdown format
  • Capture screenshots of web pages
  • Parse structured data from websites
  • Get content in HTML or JSON formats
  • Track scraping progress in real-time

Best for

  • Product research and competitive analysis
  • News aggregation and content monitoring
  • Web content analysis for research
  • Extracting data from websites for AI processing
  • Multiple output formats (Markdown, HTML, JSON)
  • Real-time progress reporting
  • Screenshot capture capability

About Scraper.is

Scraper.is is an official MCP server published by ai-quill that provides AI assistants with tools and capabilities via the Model Context Protocol. It integrates with the Scraper.is API for efficient web scraping and data extraction from any website. It is categorized under search and web.

How to install

You can install Scraper.is in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Scraper.is is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Scraper.is MCP


A Model Context Protocol (MCP) integration for Scraper.is, a powerful web scraping tool for AI assistants.

This package allows AI assistants to scrape web content through the MCP protocol, enabling them to access up-to-date information from the web.

Features

  • ๐ŸŒ Web Scraping: Extract content from any website
  • ๐Ÿ“ธ Screenshots: Capture visual representations of web pages
  • ๐Ÿ“„ Multiple Formats: Get content in markdown, HTML, or JSON
  • ๐Ÿ”„ Progress Updates: Real-time progress reporting during scraping operations
  • ๐Ÿ”Œ MCP Integration: Seamless integration with MCP-compatible AI assistants

Installation

Installing via Smithery

To install scaperis-mcp for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @Ai-Quill/scaperis-mcp --client claude

Manual Installation

npm install -g scraperis-mcp

Or with yarn:

yarn global add scraperis-mcp

Prerequisites

You need a Scraper.is API key to use this package.

Getting Your API Key

  1. Sign up or log in at scraper.is
  2. Navigate to the API Keys section in your dashboard: https://www.scraper.is/dashboard/apikeys
  3. Create a new API key or copy your existing key
  4. Store this key securely; you'll need it to use this package

Usage

Environment Setup

Create a .env file with your Scraper.is API key:

SCRAPERIS_API_KEY=your_api_key_here
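Because the server reads the key from the environment at startup, it can be worth failing fast when the variable is missing. The helper below is a minimal sketch of such a check; `requireApiKey` is a hypothetical name, not part of this package's API:

```typescript
// Hypothetical startup check (illustrative, not part of scraperis-mcp):
// fail fast with a clear message when the API key is missing or blank.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.SCRAPERIS_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error(
      "SCRAPERIS_API_KEY is not set. Add it to your .env file or MCP client config."
    );
  }
  return key;
}
```

In a Node process you would call it as `requireApiKey(process.env)` before constructing the server.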

Claude Desktop Integration

To use this package with Claude Desktop:

  1. Install the package globally:

    npm install -g scraperis-mcp
    
  2. Add the following configuration to your claude_desktop_config.json file:

    {
      "mcpServers": {
        "scraperis_scraper": {
          "command": "scraperis-mcp",
          "args": [],
          "env": {
            "SCRAPERIS_API_KEY": "your-api-key-here",
            "DEBUG": "*"
          }
        }
      }
    }
    
  3. Replace your-api-key-here with your actual Scraper.is API key.

  4. Restart Claude Desktop to apply the changes.

Running with MCP Inspector

For development and testing, you can use the MCP Inspector:

npx @modelcontextprotocol/inspector scraperis-mcp

Integration with AI Assistants

This package is designed to be used with AI assistants that support the Model Context Protocol (MCP). When properly configured, the AI assistant can use the following tools:

Scrape Tool

The scrape tool allows the AI to extract content from websites. It supports various formats:

  • markdown: Returns the content in markdown format
  • html: Returns the content in HTML format
  • screenshot: Returns a screenshot of the webpage
  • json: Returns structured data in JSON format
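A client consuming the tool may want to branch on these formats. The sketch below models the four documented formats as a TypeScript union and maps each to the media type a client might expect; `expectedMediaType` is a hypothetical helper, and the `image/png` mapping for screenshots is an assumption:

```typescript
// The four documented output formats of the scrape tool.
type ScrapeFormat = "markdown" | "html" | "screenshot" | "json";

// Hypothetical helper: the media type a client should expect per format.
function expectedMediaType(format: ScrapeFormat): string {
  switch (format) {
    case "markdown":
      return "text/markdown";
    case "html":
      return "text/html";
    case "screenshot":
      return "image/png"; // assumption: screenshots come back as images
    case "json":
      return "application/json";
  }
}
```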

Example prompt for the AI:

Can you scrape the latest news from techcrunch.com and summarize it for me?

API Reference

Tools

scrape

Scrapes content from a webpage based on a prompt.

Parameters:

  • prompt (string): The prompt describing what to scrape, including the URL
  • format (string): The format to return the content in (markdown, html, screenshot, json, quick)

Example:

{
  "prompt": "Get me the top 10 products from producthunt.com",
  "format": "markdown"
}
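Callers that construct these arguments programmatically can validate them before invoking the tool. The following sketch checks the two documented parameters, including the `quick` format; `buildScrapeRequest` is an illustrative name, not an API exposed by this package:

```typescript
// The format values documented for the scrape tool.
const SCRAPE_FORMATS = ["markdown", "html", "screenshot", "json", "quick"] as const;
type ScrapeRequestFormat = (typeof SCRAPE_FORMATS)[number];

interface ScrapeRequest {
  prompt: string;
  format: ScrapeRequestFormat;
}

// Hypothetical builder: validate arguments before calling the tool.
function buildScrapeRequest(prompt: string, format: string): ScrapeRequest {
  if (prompt.trim() === "") {
    throw new Error("prompt must describe what to scrape, including the URL");
  }
  if (!(SCRAPE_FORMATS as readonly string[]).includes(format)) {
    throw new Error(`format must be one of: ${SCRAPE_FORMATS.join(", ")}`);
  }
  return { prompt, format: format as ScrapeRequestFormat };
}
```

For example, `buildScrapeRequest("Get me the top 10 products from producthunt.com", "markdown")` yields the JSON arguments shown above.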

Development

Setup

  1. Clone the repository:

    git clone https://github.com/Ai-Quill/scraperis-mcp.git
    cd scraperis-mcp
    
  2. Install dependencies:

    npm install
    
  3. Build the project:

    npm run build
    

Scripts

  • npm run build: Build the project
  • npm run watch: Watch for changes and rebuild
  • npm run dev: Run with MCP Inspector for development
  • npm run test: Run tests
  • npm run lint: Run ESLint

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
