AnyCrawl

Official MCP server by any4ai

Integrates with the AnyCrawl API to scrape web pages and crawl entire websites, with configurable depth limits, multiple scraping engines, and structured data extraction in formats including markdown and JSON.


What it does

  • Scrape individual web pages for content
  • Crawl entire websites with depth control
  • Search web and scrape results
  • Extract data as markdown, JSON, HTML, or screenshots
  • Monitor async crawling job status
  • Choose between Playwright, Cheerio, or Puppeteer engines

Best for

  • Data analysts gathering web content for research
  • Developers building content aggregation systems
  • SEO professionals auditing website structures
  • Researchers collecting structured web data

Highlights:

  • Cloud service available (no server setup)
  • Multiple scraping engines supported
  • Async crawling with status monitoring

About AnyCrawl

AnyCrawl is an official MCP server published by any4ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers advanced web scraping and crawling with flexible depth limits. It is categorized under search and web.

How to install

You can install AnyCrawl in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

AnyCrawl is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

AnyCrawl MCP Server

🚀 AnyCrawl MCP Server — Powerful web scraping and crawling for Cursor, Claude, and other LLM clients via the Model Context Protocol (MCP).

Features

  • Web Scraping: Extract content from single URLs with multiple output formats
  • Website Crawling: Crawl entire websites with configurable depth and limits
  • Search Engine Integration: Search the web and optionally scrape results
  • Multiple Engines: Support for Playwright, Cheerio, and Puppeteer
  • Flexible Output: Markdown, HTML, text, screenshots, and structured JSON
  • Async Operations: Non-blocking crawl jobs with status monitoring
  • Error Handling: Robust error handling and logging
  • Multiple Modes: STDIO (default), MCP (HTTP), and SSE; cloud-ready with an Nginx proxy

Installation

Running with npx

ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

Manual installation

npm install -g anycrawl-mcp-server

ANYCRAWL_API_KEY=YOUR-API-KEY anycrawl-mcp

Configuration

AnyCrawl MCP Server supports two deployment modes: Cloud Service (recommended) and Self-Hosted.

Cloud Service (Recommended)

Use the AnyCrawl cloud service at mcp.anycrawl.dev. No server setup required.

# Only need your API key
export ANYCRAWL_API_KEY="your-api-key-here"

Cloud endpoints:

  • MCP (streamable_http): https://mcp.anycrawl.dev/{API_KEY}/mcp
  • SSE: https://mcp.anycrawl.dev/{API_KEY}/sse

Self-Hosted Deployment

For self-hosted deployments, configure the base URL to point to your own AnyCrawl API instance:

export ANYCRAWL_API_KEY="your-api-key-here"
export ANYCRAWL_BASE_URL="https://your-api-server.com"  # Your self-hosted API URL

For local development with custom host/port:

export ANYCRAWL_HOST="127.0.0.1"  # Default: mcp.anycrawl.dev (cloud)
export ANYCRAWL_PORT="3000"       # Default: 3000

Get your API key

  • Visit the AnyCrawl website and sign up or log in.
  • 🎉 Sign up for free to receive 1,500 credits, enough to crawl nearly 1,500 pages.
  • Open the dashboard → API Keys and copy your key.
  • Set the key as the ANYCRAWL_API_KEY environment variable (see above).

Usage

Available Modes

AnyCrawl MCP Server supports the following deployment modes:

Default mode is STDIO (no env needed). Set ANYCRAWL_MODE to switch.

Mode    Description                          Best For                                     Transport
STDIO   Standard MCP over stdio (default)    Command-type MCP clients, local tooling      stdio
MCP     Streamable HTTP (JSON, stateful)     Cursor (streamable_http), API integration    HTTP + JSON
SSE     Server-Sent Events                   Web apps, browser integrations               HTTP + SSE

Quick Start Commands

# Development (local)
npm run dev              # STDIO (default)
npm run dev:mcp          # MCP mode (JSON /mcp)
npm run dev:sse          # SSE mode (/sse)

# Production (built output)
npm start              # STDIO (default)
npm run start:mcp
npm run start:sse

# Env examples
ANYCRAWL_MODE=MCP ANYCRAWL_API_KEY=YOUR-KEY npm run dev:mcp
ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-KEY npm run dev:sse

Docker Compose (MCP + SSE with Nginx)

This repo ships a production-ready image that runs MCP (JSON) on port 3000 and SSE on port 3001 in the same container, fronted by Nginx. Nginx also supports the API-key-prefixed paths /{API_KEY}/mcp and /{API_KEY}/sse and forwards the key via the x-anycrawl-api-key header.

docker compose build
docker compose up -d

Environment variables used in Docker image:

  • ANYCRAWL_MODE: MCP_AND_SSE (default in compose), or MCP, SSE
  • ANYCRAWL_MCP_PORT: default 3000
  • ANYCRAWL_SSE_PORT: default 3001
  • CLOUD_SERVICE: true to extract API key from /{API_KEY}/... or headers
  • ANYCRAWL_BASE_URL: default https://api.anycrawl.dev
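
To run the image directly without Compose, a minimal sketch under the defaults listed above (the anycrawl-mcp:latest tag is a placeholder for whatever tag your docker compose build produced):

# Run MCP on 3000 and SSE on 3001, extracting the API key
# from /{API_KEY}/... paths or headers (CLOUD_SERVICE=true)
docker run -d \
  -p 3000:3000 -p 3001:3001 \
  -e ANYCRAWL_MODE=MCP_AND_SSE \
  -e CLOUD_SERVICE=true \
  -e ANYCRAWL_BASE_URL=https://api.anycrawl.dev \
  anycrawl-mcp:latest  # placeholder image tag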

Running on Cursor

Note: Configuring Cursor requires v0.45.6 or newer.

For Cursor v0.48.6 and newer, add this to your MCP Servers settings:

{
  "mcpServers": {
    "anycrawl-mcp": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

For Cursor v0.45.6:

  1. Open Cursor Settings → Features → MCP Servers → "+ Add New MCP Server"
  2. Name: "anycrawl-mcp" (or your preferred name)
  3. Type: "command"
  4. Command:
env ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

On Windows, if you encounter issues:

cmd /c "set ANYCRAWL_API_KEY=YOUR-API-KEY && npx -y anycrawl-mcp"

Running on VS Code

For manual installation, add this JSON to your User Settings (JSON) in VS Code (Command Palette → Preferences: Open User Settings (JSON)):

{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "AnyCrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "anycrawl": {
        "command": "npx",
        "args": ["-y", "anycrawl-mcp"],
        "env": {
          "ANYCRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}

Optionally, place the following in .vscode/mcp.json in your workspace to share config:

{
  "inputs": [
    {
      "type": "promptString",
      "id": "apiKey",
      "description": "AnyCrawl API Key",
      "password": true
    }
  ],
  "servers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "${input:apiKey}"
      }
    }
  }
}

Running on Windsurf

Add this to ~/.codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running on Claude Desktop / Claude Code

Claude Code (CLI)

Add AnyCrawl MCP server using the claude mcp add command:

claude mcp add --transport stdio anycrawl -e ANYCRAWL_API_KEY=your-api-key -- npx -y anycrawl-mcp

Claude Desktop

Add this to your Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running with SSE Server Mode

The SSE (Server-Sent Events) mode provides a web-based interface for MCP communication, ideal for web applications, testing, and integration with web-based LLM clients.

Quick Start

ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

# Or using npm scripts
ANYCRAWL_API_KEY=YOUR-API-KEY npm run dev:sse

Server Configuration

Optional server settings for local/self-hosted deployments:

export ANYCRAWL_PORT=3000           # Default: 3000
export ANYCRAWL_HOST=127.0.0.1      # Set to override cloud default (mcp.anycrawl.dev)

Health Check

curl -s http://localhost:${ANYCRAWL_PORT:-3000}/health
# Response: ok

Generic MCP/SSE Client Configuration

For other MCP/SSE clients that support SSE transport, use this configuration:

{
  "mcpServers": {
    "anycrawl": {
      "type": "sse",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/sse",
      "name": "AnyCrawl MCP Server",
      "description": "Web scraping and crawling tools"
    }
  }
}

or

{
  "mcpServers": {
    "AnyCrawl": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Environment Setup:

# Start SSE server with API key
ANYCRAWL_API_KEY=your-api-key-here npm run dev:sse

Cursor configuration for HTTP modes (streamable_http)

Configure Cursor to connect to your HTTP MCP server.

Local HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-local": {
      "type": "streamable_http",
      "url": "http://127.0.0.1:3000/mcp"
    }
  }
}

Cloud HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-cloud": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Note: For HTTP modes, set ANYCRAWL_API_KEY (and optional host/port) in the server process environment or in the URL. Cursor does not need your API key when using streamable_http.
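
To smoke-test a local streamable_http server, one option is a raw JSON-RPC initialize request. This sketch relies only on the generic MCP streamable HTTP transport, not on any AnyCrawl-specific endpoint; the Accept header and protocolVersion value come from the MCP spec:

# Expect a JSON-RPC result describing the server's capabilities
curl -s -X POST http://127.0.0.1:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'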

Available Tools

1. Scrape Tool (anycrawl_scrape)

Scrape a single URL and extract content in various formats.

Best for:

  • Extracting content from a single page
  • Quick data extraction
  • Testing specific URLs

Parameters:

  • url (required): The URL to scrape
  • engine (required): Scraping engine (playwright, cheerio, puppeteer)
  • formats (optional): Output formats (markdown, html, text, screenshot, screenshot@fullPage, rawHtml, json)
  • proxy (optional): Proxy URL
  • timeout (optional): Timeout in milliseconds (default: 300000)
  • retry (optional): Whether to retry on failure (default: false)
  • wait_for (optional): Wait time for page to load
  • include_tags (optional): HTML tags to include
  • exclude_tags (optional): HTML tags to exclude
  • json_options (optional): Options for JSON extraction

Example:

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "engine": "cheerio",
    "formats": ["markdown", "html"],
    "timeout": 30000
  }
}
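
For structured extraction via the json format, a hedged sketch: the shape of json_options (here, a JSON Schema under a schema key) is an assumption, so check the AnyCrawl API reference for the exact field names:

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "engine": "playwright",
    "formats": ["json"],
    "json_options": {
      "schema": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "summary": { "type": "string" }
        }
      }
    }
  }
}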

2. Crawl Tool (anycrawl_crawl)

Start a crawl job to scrape multiple pages from a website. By default this waits for completion and returns aggregated results using the SDK's client.crawl (defaults: poll every 3 seconds, timeout after 60 seconds).
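The crawl tool's parameter list is truncated in this README, so the following call is a sketch: url and engine mirror the scrape tool above, while max_depth and limit are assumed names for the depth and page-count controls.

{
  "name": "anycrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "engine": "cheerio",
    "max_depth": 2,
    "limit": 50
  }
}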



README truncated. View full README on GitHub.
