Scrapeless (Google Search)

Official
scrapeless-ai

Connects AI models to Google Search through the Scrapeless API, allowing programmatic search queries with customizable parameters like location and language.


What it does

  • Perform Google searches with custom queries
  • Filter results by country and language
  • Extract search result titles and summaries
  • Retrieve search result URLs
  • Access real-time search data

Best for

  • AI research assistants gathering web information
  • Automated content research workflows
  • Building search-powered AI applications
  • Bypassing blocking and rate limits
  • Customizable search parameters

About Scrapeless (Google Search)

Scrapeless (Google Search) is an official MCP server published by scrapeless-ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers access to the Scrapeless Google Search API for customizable queries by text, country, or language, and is categorized under search and web.

How to install

You can install Scrapeless (Google Search) in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Scrapeless (Google Search) is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.


Scrapeless MCP Server

Welcome to the official Scrapeless Model Context Protocol (MCP) Server — a powerful integration layer that empowers LLMs, AI Agents, and AI applications to interact with the web in real time.

Built on the open MCP standard, Scrapeless MCP Server seamlessly connects models like ChatGPT, Claude, and tools like Cursor and Windsurf to a wide range of external capabilities, including:

  • Google services integration (Search, Trends)
  • Browser automation for page-level navigation and interaction
  • Scrape dynamic, JS-heavy sites—export as HTML, Markdown, or screenshots

Whether you're building an AI research assistant, a coding copilot, or autonomous web agents, this server provides the dynamic context and real-world data your workflows need—without getting blocked.

Usage Examples

  1. Automated Web Interaction and Data Extraction with Claude

Using Scrapeless MCP Browser, Claude can perform complex tasks such as web navigation, clicking, scrolling, and scraping through conversational commands, with real-time preview of web interaction results via live sessions.


  2. Bypassing Cloudflare to Retrieve Target Page Content

Using the Scrapeless MCP Browser service, a Cloudflare-protected page is accessed automatically; once the challenge is passed, the page content is extracted and returned in Markdown format.


  3. Extracting Dynamically Rendered Page Content and Writing to File

Using the Scrapeless MCP Universal API, the JavaScript-rendered content of the target page above is scraped, exported in Markdown format, and finally written to a local file named text.md.


  4. Automated SERP Scraping

Using the Scrapeless MCP Server, query the keyword “web scraping” on Google Search, retrieve the first 10 search results (including title, link, and summary), and write the content to the file named serp.text.


Here are some additional examples of how to use these servers:

Example
  • Search for "scrapeless" with Google Search.
  • Find the search interest for "AI" over the last year.
  • Use a browser to visit chatgpt.com, search for "What's the weather like today?", and summarize the results.
  • Scrape the HTML content of the scrapeless.com page.
  • Scrape the Markdown content of the scrapeless.com page.
  • Get screenshots of scrapeless.com.

Setup Guide

  1. Get a Scrapeless API Key
  • Log in to the Scrapeless Dashboard (free trial available)
  • Click "Setting" in the left sidebar -> select "API Key Management" -> click "Create API Key". Finally, click the API key you created to copy it.


  2. Configure Your MCP Client

Scrapeless MCP Server supports both Stdio and Streamable HTTP transport modes.

🖥️ Stdio (Local Execution)

{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "command": "npx",
      "args": ["-y", "scrapeless-mcp-server"],
      "env": {
        "SCRAPELESS_KEY": "YOUR_SCRAPELESS_KEY"
      }
    }
  }
}

🌐 Streamable HTTP (Hosted API Mode)

{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "type": "streamable-http",
      "url": "https://api.scrapeless.com/mcp",
      "headers": {
        "x-api-token": "YOUR_SCRAPELESS_KEY"
      },
      "disabled": false,
      "alwaysAllow": []
    }
  }
}
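
In hosted mode, your MCP client sends JSON-RPC 2.0 messages to the URL above with the x-api-token header from the config. As a rough sketch of what that framing looks like (the protocolVersion and clientInfo values here are illustrative assumptions following the general MCP handshake, not Scrapeless-specific requirements):

```python
import json

def build_initialize_request(api_key: str) -> tuple[dict, dict]:
    """Build the headers and JSON-RPC initialize payload an MCP client
    would send to the Streamable HTTP endpoint shown in the config above."""
    headers = {
        "Content-Type": "application/json",
        "x-api-token": api_key,  # same auth header as in the config above
    }
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            # Illustrative values; your client library fills these in for you.
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return headers, payload

headers, payload = build_initialize_request("YOUR_SCRAPELESS_KEY")
print(json.dumps(payload))
```

In practice you would not hand-roll this exchange; the MCP-compatible clients listed above manage the handshake and session for you.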

Advanced Options

Customize browser session behavior with optional parameters. These can be set via environment variables (for Stdio) or HTTP headers (for Streamable HTTP):

Stdio (Env Var)            Streamable HTTP (HTTP Header)   Description
BROWSER_PROFILE_ID         x-browser-profile-id            Specifies a reusable browser profile ID for session continuity.
BROWSER_PROFILE_PERSIST    x-browser-profile-persist       Enables persistent storage for cookies, local storage, etc.
BROWSER_SESSION_TTL        x-browser-session-ttl           Defines the maximum session timeout in seconds. The session automatically expires after this duration of inactivity.
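
If you are switching a setup from Stdio to Streamable HTTP, the same three options simply move from environment variables into request headers. A minimal sketch of that mapping, using only the names from the table above:

```python
# Mapping from Stdio env vars to their Streamable HTTP header
# equivalents, exactly as listed in the table above.
ENV_TO_HEADER = {
    "BROWSER_PROFILE_ID": "x-browser-profile-id",
    "BROWSER_PROFILE_PERSIST": "x-browser-profile-persist",
    "BROWSER_SESSION_TTL": "x-browser-session-ttl",
}

def browser_headers(env: dict[str, str]) -> dict[str, str]:
    """Build the optional HTTP headers from whichever env vars are set."""
    return {ENV_TO_HEADER[k]: v for k, v in env.items() if k in ENV_TO_HEADER}

print(browser_headers({"BROWSER_SESSION_TTL": "600"}))
# → {'x-browser-session-ttl': '600'}
```

Unset options are simply omitted, so the server falls back to its defaults.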

Integration with Claude Desktop

  1. Open Claude Desktop
  2. Navigate to: Settings -> Tools -> MCP Servers
  3. Click "Add MCP Server"
  4. Paste either the Stdio or Streamable HTTP config above
  5. Save and enable the server
  6. Claude will now be able to issue web queries, extract content, and interact with pages using Scrapeless

Integration with Cursor IDE

  1. Open Cursor
  2. Press Cmd + Shift + P and search for: Configure MCP Servers
  3. Add the Scrapeless MCP config using the format above
  4. Save the file and restart Cursor (if needed)
  5. Now you can ask Cursor things like:
    1. "Search StackOverflow for a solution to this error"
    2. "Scrape the HTML from this page"
  6. Cursor will use Scrapeless in the background.

Supported MCP Tools

Name                 Description
google_search        Universal information search engine.
google_trends        Get trending search data from Google Trends.
browser_create       Create or reuse a cloud browser session using Scrapeless.
browser_close        Closes the current session by disconnecting the cloud browser.
browser_goto         Navigate browser to a specified URL.
browser_go_back      Go back one step in browser history.
browser_go_forward   Go forward one step in browser history.
browser_click        Click a specific element on the page.
browser_type         Type text into a specified input field.
browser_press_key    Simulate a key press.
browser_wait_for     Wait for a specific page element to appear.
browser_wait         Pause execution for a fixed duration.
browser_screenshot   Capture a screenshot of the current page.
browser_get_html     Get the full HTML of the current page.
browser_get_text     Get all visible text from the current page.
browser_scroll       Scroll to the bottom of the page.
browser_scroll_to    Scroll a specific element into view.
scrape_html          Scrape a URL and return its full HTML content.
scrape_markdown      Scrape a URL and return its content as Markdown.
scrape_screenshot    Capture a high-quality screenshot of any webpage.
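
Under the hood, invoking any of these tools is an MCP tools/call request. The sketch below shows the shape of such a call for google_search; note that the argument names ("query", "country", "language") are assumptions based on the customizable parameters described earlier in this page, not a confirmed schema:

```python
import json

def call_google_search(query: str, country: str = "us", language: str = "en") -> dict:
    """Build a hypothetical MCP tools/call payload for the google_search
    tool listed above. Argument names are illustrative assumptions."""
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "google_search",
            "arguments": {"query": query, "country": country, "language": language},
        },
    }

req = call_google_search("web scraping")
print(json.dumps(req, indent=2))
```

Your MCP client constructs and sends these payloads automatically when the model decides to use a tool.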

Security Best Practices

When using Scrapeless MCP Server with LLMs (like ChatGPT, Claude, or Cursor), it's critical to handle all scraped or extracted web content with care. Web data is untrusted by default, and improper handling may expose your application to prompt injection or other security vulnerabilities.

✅ Recommended Practices

  • Never pass raw scraped content directly into LLM prompts. Raw HTML, JavaScript, or user-generated text may contain hidden injection payloads.
  • Sanitize and validate all extracted content. Strip or escape potentially harmful tags and scripts before using content in downstream logic or AI models.
  • Prefer structured extraction over free-form text. Use tools like scrape_html, scrape_markdown, or targeted browser_get_text with known-safe selectors to extract only the content you trust.
  • Apply domain or selector whitelisting when scraping dynamically generated pages, to restrict data flow to known and trusted sources.
  • Log and monitor all outbound requests made via browser or scraping tools, especially if you're handling sensitive data, tokens, or internal network access.
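
As a minimal illustration of the sanitization point above, the sketch below strips tags and drops script/style content from scraped HTML using only the standard library. This is only illustrative; a production pipeline should use a vetted sanitizer rather than this hand-rolled parser:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keep only visible text; drop tags and <script>/<style> content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def sanitize(html: str) -> str:
    """Return the visible text of an HTML fragment, whitespace-normalized."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())

print(sanitize('<p>Hello <script>alert("x")</script><b>world</b></p>'))
# → Hello world
```

Passing only this extracted text to the model, rather than the raw HTML, removes the most common vehicles for hidden injection payloads.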

🚫 Avoid

  • Injecting scraped HTML directly into prompts
  • Letting users specify arbitrary URLs or CSS selectors without validation
  • Storing unfiltered scraped content for future prompt usage

Community

Contact Us

For questions, suggestions, or collaboration inquiries, feel free to contact us.

Related Skills

google-official-seo-guide

Official Google SEO guide covering search optimization, best practices, Search Console, crawling, indexing, and improving website search visibility based on official Google documentation

openclaw-serper

Searches Google and extracts full page content from every result via trafilatura. Returns clean readable text, not just snippets. Use when the user needs web search, research, current events, news, factual lookups, product comparisons, technical documentation, or any question requiring up-to-date information from the internet.

google-search

Search the web using Google Custom Search Engine (PSE). Use this when you need live information, documentation, or to research topics and the built-in web_search is unavailable.

brightdata

Web scraping and search via Bright Data API. Requires BRIGHTDATA_API_KEY and BRIGHTDATA_UNLOCKER_ZONE. Use for scraping any webpage as markdown (bypassing bot detection/CAPTCHA) or searching Google with structured results.

ga4-analytics

Google Analytics 4, Search Console, and Indexing API toolkit. Analyze website traffic, page performance, user demographics, real-time visitors, search queries, and SEO metrics. Use when the user asks to: check site traffic, analyze page views, see traffic sources, view user demographics, get real-time visitor data, check search console queries, analyze SEO performance, request URL re-indexing, inspect index status, compare date ranges, check bounce rates, view conversion data, or get e-commerce revenue. Requires a Google Cloud service account with GA4 and Search Console access.

google

Search the web for information. Use when you need to look something up, find current information, or research a topic.
