
Serper Search and Scrape
Performs Google web searches and extracts content from webpages using the Serper API. Supports advanced search operators, region and language targeting, and returns structured content including metadata, supporting research, content aggregation, and data mining tasks.
What it does
- Search Google with advanced operators (site:, filetype:, date ranges)
- Extract webpage content as plain text or markdown
- Target searches by region and language
- Retrieve metadata and JSON-LD data from pages
- Access knowledge graph and related search results
About Serper Search and Scrape
Serper Search and Scrape is a community-built MCP server published by marcopesani that provides AI assistants with tools and capabilities via the Model Context Protocol. Integrate Serper Search and Scrape to perform web searches and scrape webpages for content extraction and research. It is categorized under Search and Web.
How to install
You can install Serper Search and Scrape in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Serper Search and Scrape is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Serper Search and Scrape MCP Server
A TypeScript-based MCP server that provides web search and webpage scraping capabilities using the Serper API. This server integrates with Claude Desktop to enable powerful web search and content extraction features.
Features
Tools
- `google_search` - Perform web searches via the Serper API
  - Rich search results including organic results, knowledge graph, "people also ask", and related searches
  - Supports region and language targeting
  - Optional parameters for location, pagination, time filters, and autocorrection
  - Supports advanced search operators:
    - `site:` Limit results to a specific domain
    - `filetype:` Limit to specific file types (e.g., 'pdf', 'doc')
    - `inurl:` Search for pages with a word in the URL
    - `intitle:` Search for pages with a word in the title
    - `related:` Find similar websites
    - `cache:` View Google's cached version of a specific URL
    - `before:` Date before, in YYYY-MM-DD format
    - `after:` Date after, in YYYY-MM-DD format
    - `exact:` Exact phrase match
    - `exclude:` Terms to exclude from search results
    - `or:` Alternative terms (OR operator)
- `scrape` - Extract content from web pages
  - Get plain text and optional markdown content
  - Includes JSON-LD and head metadata
  - Preserves document structure
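As an illustration of how the advanced operators listed above combine, the sketch below builds a single query string of the kind the `google_search` tool could pass to Serper. This is not the server's actual implementation; the interface and function names are hypothetical.

```typescript
// Illustrative sketch (not the server's actual code): combine advanced
// search operators into one Serper-style query string.
interface SearchParams {
  q: string;           // base query terms
  site?: string;       // site: restrict results to a domain
  filetype?: string;   // filetype: restrict to a file type
  before?: string;     // before: YYYY-MM-DD
  after?: string;      // after: YYYY-MM-DD
  exact?: string;      // exact phrase, quoted in the final query
  exclude?: string[];  // terms prefixed with "-" to exclude
}

function buildQuery(p: SearchParams): string {
  const parts: string[] = [p.q];
  if (p.site) parts.push(`site:${p.site}`);
  if (p.filetype) parts.push(`filetype:${p.filetype}`);
  if (p.before) parts.push(`before:${p.before}`);
  if (p.after) parts.push(`after:${p.after}`);
  if (p.exact) parts.push(`"${p.exact}"`);
  if (p.exclude) parts.push(...p.exclude.map((t) => `-${t}`));
  return parts.join(" ");
}
```

For example, `buildQuery({ q: "mcp", site: "github.com", filetype: "pdf" })` yields `mcp site:github.com filetype:pdf`.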
Requirements
- Node.js >= 18
- Serper API key (set as the `SERPER_API_KEY` environment variable)
Development
Install dependencies:
```shell
npm install
```
Build the server:
```shell
npm run build
```
For development with auto-rebuild:
```shell
npm run watch
```
Run tests:
```shell
npm test                 # Run all tests
npm run test:watch       # Run tests in watch mode
npm run test:coverage    # Run tests with coverage
npm run test:integration # Run integration tests
```
Environment Variables
Create a .env file in the root directory:
```
SERPER_API_KEY=your_api_key_here
```
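Since the server cannot work without this key, it helps to fail fast at startup when it is missing. A minimal sketch (the helper name is hypothetical; only the variable name `SERPER_API_KEY` comes from this README):

```typescript
// Illustrative sketch: fail fast when SERPER_API_KEY is missing.
// The helper name is hypothetical, not part of the server's API.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.SERPER_API_KEY;
  if (!key) {
    throw new Error("SERPER_API_KEY environment variable is not set");
  }
  return key;
}

// Usage: const apiKey = requireApiKey(process.env);
```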
Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
```shell
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.
Installation
Installing via Smithery
To install Serper Search and Scrape for Claude Desktop automatically via Smithery:
```shell
npx -y @smithery/cli install @marcopesani/mcp-server-serper --client claude
```
Claude Desktop
Add the server config at:
- MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "serper-search": {
      "command": "npx",
      "args": ["-y", "serper-search-scrape-mcp-server"],
      "env": {
        "SERPER_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
Cline
- Open the Cline extension settings
- Open "MCP Servers" tab
- Click on "Configure MCP Servers"
- Add the server config:
```json
{
  "mcpServers": {
    "github.com/marcopesani/mcp-server-serper": {
      "command": "npx",
      "args": ["-y", "serper-search-scrape-mcp-server"],
      "env": {
        "SERPER_API_KEY": "your_api_key_here"
      },
      "disabled": false,
      "autoApprove": ["google_search", "scrape"]
    }
  }
}
```
Additional Cline configuration options:
- `disabled`: Set to `false` to enable the server
- `autoApprove`: List of tools that don't require explicit approval for each use
Cursor
- Open the Cursor settings
- Open "Features" settings
- In the "MCP Servers" section, click on "Add new MCP Server"
- Choose a name, and select "command" as "Type"
- In the "Command" field, enter the following:
```shell
env SERPER_API_KEY=your_api_key_here npx -y serper-search-scrape-mcp-server
```
Docker
You can also run the server using Docker. First, build the image:
```shell
docker build -t mcp-server-serper .
```
Then run the container with your Serper API key:
```shell
docker run -e SERPER_API_KEY=your_api_key_here mcp-server-serper
```
Alternatively, if you have your environment variables in a .env file:
```shell
docker run --env-file .env mcp-server-serper
```
For development, you might want to mount your source code as a volume:
```shell
docker run -v $(pwd):/app --env-file .env mcp-server-serper
```
Note: Make sure to replace `your_api_key_here` with your actual Serper API key.
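For repeatable runs, the build and run commands above could also be expressed as a hypothetical `docker-compose.yml`; the service name is illustrative, and since MCP servers communicate over stdio, stdin must stay open:

```yaml
# Hypothetical docker-compose.yml sketch; the service name is illustrative.
services:
  serper-mcp:
    build: .
    env_file: .env    # supplies SERPER_API_KEY
    stdin_open: true  # keep stdin open: MCP servers communicate over stdio
```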