
Web Browser
Provides AI assistants with web browsing capabilities: visit websites, extract content with CSS selectors, and retrieve real-time data from the web.
What it does
- Browse websites and web pages
- Extract content using CSS selectors
- Retrieve real-time data from web sources
- Capture page metadata and links
- Automate web-based tasks
About Web Browser
Web Browser is a community-built MCP server published by blazickjp that provides AI assistants with tools and capabilities via the Model Context Protocol. It integrates Python web scraping to automate tasks, extract content, and scrape websites in real time. It is categorized under browser automation and web search.
How to install
You can install Web Browser in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Web Browser is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
✨ Features
🌐 Enable AI assistants to browse and extract content from the web through a simple MCP interface.
The Web Browser MCP Server provides AI models with the ability to browse websites, extract content, and understand web pages through the Model Context Protocol (MCP). It enables smart content extraction with CSS selectors and robust error handling.
✨ Core Features
- 🎯 Smart Content Extraction: Target exactly what you need with CSS selectors
- ⚡ Lightning Fast: Built with async processing for optimal performance
- 📊 Rich Metadata: Capture titles, links, and structured content
- 🛡️ Robust & Reliable: Built-in error handling and timeout management
- 🌍 Cross-Platform: Works everywhere Python runs
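The "Rich Metadata" feature above refers to capturing page titles and links. As a rough illustration of that kind of extraction (a standard-library sketch; the server's actual implementation is not shown in this document and likely uses a fuller HTML library), a minimal title-and-links collector looks like this:

```python
from html.parser import HTMLParser

class MetadataParser(HTMLParser):
    """Collects the page <title> and anchor hrefs — a stdlib sketch
    of the kind of metadata the server advertises capturing."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

parser = MetadataParser()
parser.feed("<html><head><title>Example</title></head>"
            "<body><a href='/a'>A</a><a href='/b'>B</a></body></html>")
print(parser.title)  # Example
print(parser.links)  # ['/a', '/b']
```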
🚀 Quick Start
Installing via Smithery
To install Web Browser Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install web-browser-mcp-server --client claude
Installing Manually
Install using uv:
uv tool install web-browser-mcp-server
For development:
# Clone and set up development environment
git clone https://github.com/blazickjp/web-browser-mcp-server.git
cd web-browser-mcp-server
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install with test dependencies
uv pip install -e ".[test]"
🔌 MCP Integration
Add this configuration to your MCP client config file:
{
  "mcpServers": {
    "web-browser-mcp-server": {
      "command": "uv",
      "args": [
        "tool",
        "run",
        "web-browser-mcp-server"
      ],
      "env": {
        "REQUEST_TIMEOUT": "30"
      }
    }
  }
}
For Development:
{
  "mcpServers": {
    "web-browser-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/cloned/web-browser-mcp-server",
        "run",
        "web-browser-mcp-server"
      ],
      "env": {
        "REQUEST_TIMEOUT": "30"
      }
    }
  }
}
💡 Available Tools
The server provides a powerful web browsing tool:
browse_webpage
Browse and extract content from web pages with optional CSS selectors:
# Basic webpage fetch
result = await call_tool("browse_webpage", {
    "url": "https://example.com"
})

# Target specific content with CSS selectors
result = await call_tool("browse_webpage", {
    "url": "https://example.com",
    "selectors": {
        "headlines": "h1, h2",
        "main_content": "article.content",
        "navigation": "nav a"
    }
})
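The server advertises built-in timeout management (see REQUEST_TIMEOUT below), and a caller can add its own guard on top. A minimal sketch with `asyncio.wait_for` and a stubbed `call_tool` — the stub stands in for the real MCP client call, which is not shown in this document:

```python
import asyncio

async def call_tool(name, args):
    """Stub standing in for the real MCP client call (an assumption
    for this sketch, not the actual client API)."""
    await asyncio.sleep(0.01)  # pretend network latency
    return {"url": args["url"], "status": "ok"}

async def main():
    try:
        # Client-side guard layered on the server's own REQUEST_TIMEOUT
        return await asyncio.wait_for(
            call_tool("browse_webpage", {"url": "https://example.com"}),
            timeout=5,
        )
    except asyncio.TimeoutError:
        return {"error": "request timed out"}

result = asyncio.run(main())
print(result["status"])  # ok
```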
⚙️ Configuration
Configure through environment variables:
| Variable | Purpose | Default |
|---|---|---|
| REQUEST_TIMEOUT | Webpage request timeout in seconds | 30 |
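In the MCP client configs above, the "env" block sets this variable for the server process. A minimal sketch of how such a setting is typically resolved (the exact parsing inside this server is an assumption; the variable name and default come from the table):

```python
import os

# In practice the MCP client's "env" block sets this for the server
# process; we simulate it here (the value 45 is arbitrary).
os.environ["REQUEST_TIMEOUT"] = "45"

# Resolve the setting, falling back to the documented default of 30
timeout = int(os.environ.get("REQUEST_TIMEOUT", "30"))
print(timeout)  # 45
```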
🧪 Testing
Run the test suite:
python -m pytest
📄 License
Released under the MIT License. See the LICENSE file for details.