
Web Fetch
Fetches web pages and converts them to markdown while extracting image URLs. Includes HTTP/HTTPS proxy support for accessing content through corporate networks and restricted environments.
What it does
- Fetch web pages and convert to markdown
- Extract image URLs from web content
- Route requests through HTTP/HTTPS proxies
- Access content through corporate firewalls
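As a rough illustration of the image-extraction step, here is a minimal TypeScript sketch. The function name and regex are assumptions for illustration only; the actual server converts pages to markdown first and may extract images differently.

```typescript
// Hypothetical sketch: pull image URLs out of an HTML fragment.
// This is NOT the server's implementation, just the general idea.
function extractImageUrls(html: string): string[] {
  const urls: string[] = [];
  const imgTag = /<img\b[^>]*\bsrc=["']([^"']+)["']/gi;
  let match: RegExpExecArray | null;
  while ((match = imgTag.exec(html)) !== null) {
    urls.push(match[1]); // captured src attribute
  }
  return urls;
}

const sample =
  '<p>Hi</p><img src="https://www.example.com/1.jpg.webp"><img src="https://www.example.com/2.jpg.webp">';
console.log(extractImageUrls(sample));
```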
About Web Fetch
Web Fetch is a community-built MCP server published by kwp-lab that provides AI assistants with tools and capabilities via the Model Context Protocol. It is a web-scraping tool that converts web pages to markdown, extracts image URLs, and works with proxies for access from restricted networks. It is categorized under search and web.
How to install
You can install Web Fetch in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Web Fetch is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
MCP Fetch
Model Context Protocol server for fetching web content through a custom HTTP proxy. This allows Claude Desktop (or any MCP client) to fetch web content and handle images appropriately.
This repository forks from the @smithery/mcp-fetch and replaces the node-fetch implementation with the library node-fetch-native.
The server will use the http_proxy and https_proxy environment variables to route requests through the proxy server by default if they are set.
You can also set the MCP_HTTP_PROXY environment variable to use a different proxy server.
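The precedence described above can be sketched as follows. The `resolveProxy` helper is illustrative, not the library's actual code; it assumes MCP_HTTP_PROXY, when set, wins over the standard variables.

```typescript
// Illustrative sketch of the proxy-selection order described above.
// MCP_HTTP_PROXY, if set, overrides http_proxy / https_proxy.
type Env = Record<string, string | undefined>;

function resolveProxy(env: Env, targetProtocol: "http:" | "https:"): string | undefined {
  if (env.MCP_HTTP_PROXY) return env.MCP_HTTP_PROXY;
  if (targetProtocol === "https:" && env.https_proxy) return env.https_proxy;
  if (targetProtocol === "http:" && env.http_proxy) return env.http_proxy;
  return undefined; // no proxy configured: connect directly
}

console.log(resolveProxy({ https_proxy: "http://corp-proxy:8080" }, "https:"));
```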
Available Tools
fetch: Retrieves URLs from the Internet and extracts their content as markdown. If images are found, their URLs will be included in the response.
Image Processing Specifications:
Only image URLs are extracted from the article content and appended to the tool result:
{
  "params": {
    "url": "https://www.example.com/articles/123"
  },
  "response": {
    "content": [
      {
        "type": "text",
        "text": "Contents of https://www.example.com/articles/123:\nHere is the article content\n\nImages found in article:\n- https://www.example.com/1.jpg.webp\n- https://www.example.com/2.jpg.webp\n- https://www.example.com/3.webp"
      }
    ]
  }
}
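The response text above follows a simple pattern. A hypothetical formatter for that shape (the function name is assumed, not taken from the server's source) could look like:

```typescript
// Hypothetical formatter producing the response text shown above.
function formatFetchResult(url: string, markdown: string, images: string[]): string {
  let text = `Contents of ${url}:\n${markdown}`;
  if (images.length > 0) {
    // Image URLs are appended as a bulleted list after the article body.
    text += "\n\nImages found in article:\n" + images.map((u) => `- ${u}`).join("\n");
  }
  return text;
}

console.log(
  formatFetchResult("https://www.example.com/articles/123", "Here is the article content", [
    "https://www.example.com/1.jpg.webp",
  ])
);
```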
Quick Start (For Users)
To use this tool with Claude Desktop, simply add the following to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json):
{
  "mcpServers": {
    "fetch": {
      "command": "npx",
      "args": ["-y", "@kwp-lab/mcp-fetch"],
      "env": {
        "MCP_HTTP_PROXY": "https://example.com:10890"
      }
    }
  }
}
The env block is optional; omit it if you do not need a proxy. (JSON does not allow comments, so do not leave notes inside the config file.)
This will automatically download and run the latest version of the tool when needed.
Required Setup
- Enable Accessibility for Claude:
  - Open System Settings
  - Go to Privacy & Security > Accessibility
  - Click the "+" button
  - Add Claude from your Applications folder
  - Turn ON the toggle for Claude
For Developers
The following sections are for those who want to develop or modify the tool.
Prerequisites
- Node.js 18+
- Claude Desktop (install from https://claude.ai/desktop)
- tsx (install via npm install -g tsx)
Installation
Installing via Smithery
To install MCP Fetch for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @kwp-lab/mcp-fetch --client claude
Manual Installation
git clone https://github.com/kwp-lab/mcp-fetch.git
cd mcp-fetch
npm install
npm run build
Configuration
- Make sure Claude Desktop is installed and running.
- Install tsx globally if you haven't:
  npm install -g tsx # or pnpm add -g tsx
- Modify your Claude Desktop config located at:
  ~/Library/Application Support/Claude/claude_desktop_config.json
  You can easily find this through the Claude Desktop menu:
  - Open Claude Desktop
  - Click Claude on the Mac menu bar
  - Click "Settings"
  - Click "Developer"
- Add the following to your MCP client's configuration:
{
  "mcpServers": {
    "fetch": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-fetch/index.ts"]
    }
  }
}