
HTTP Request
Makes advanced HTTP requests with realistic browser emulation, enabling LLMs to bypass anti-bot measures while supporting all HTTP methods, authentication, and automatic response handling, and converts web content to Markdown for LLM processing.
What it does
- Make HTTP requests with browser fingerprinting
- Convert HTML and PDF content to Markdown
- Handle authentication (Basic, Bearer, custom)
- Bypass anti-bot detection systems
- Process large responses with token counting
- Support all HTTP methods (GET, POST, PUT, DELETE, etc.)
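To make the authentication item above concrete: Basic and Bearer credentials both ultimately become an `Authorization` header on the outgoing request. The sketch below shows that header construction; the function and parameter names are illustrative assumptions, not mcp-rquest's actual API.

```python
import base64

def build_auth_header(auth_type, username=None, password=None, token=None):
    """Build an Authorization header for Basic or Bearer auth.
    Names here are illustrative, not mcp-rquest's documented API."""
    if auth_type == "basic":
        # Basic auth: base64-encode "username:password"
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    if auth_type == "bearer":
        # Bearer auth: pass the token through verbatim
        return {"Authorization": f"Bearer {token}"}
    raise ValueError(f"unsupported auth type: {auth_type}")

print(build_auth_header("basic", username="user", password="pass"))
```

Custom authentication schemes reduce to the same idea: the caller supplies arbitrary header values directly.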
About HTTP Request
HTTP Request is a community-built MCP server published by xxxbrian that provides AI assistants with tools and capabilities via the Model Context Protocol. This advanced web scraper lets LLMs bypass anti-bot protection using HTTP requests, much like dedicated scraping tools such as Octopar. It is categorized under browser automation and web search.
How to install
You can install HTTP Request in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
HTTP Request is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
mcp-rquest
A Model Context Protocol (MCP) server that provides advanced HTTP request capabilities for Claude and other LLMs. Built on rquest, this server enables realistic browser emulation with accurate TLS/JA3/JA4 fingerprints, allowing models to interact with websites more naturally and bypass common anti-bot measures. It also supports converting PDF and HTML documents to Markdown for easier processing by LLMs.
Features
- Complete HTTP Methods: Support for GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS, and TRACE
- Browser Fingerprinting: Accurate TLS, JA3/JA4, and HTTP/2 browser fingerprints
- Content Handling:
- Automatic handling of large responses with token counting
- HTML to Markdown conversion for better LLM processing
- PDF to Markdown conversion using the Marker library
- Secure storage of responses in system temporary directories
- Authentication Support: Basic, Bearer, and custom authentication methods
- Request Customization:
- Headers, cookies, redirects
- Form data, JSON payloads, multipart/form-data
- Query parameters
- SSL Security: Uses BoringSSL for secure connections with realistic browser fingerprints
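As a concrete illustration of the request-customization options above, here is how the arguments for a single POST request might be assembled. The key names (`headers`, `cookies`, `params`, `json`) follow common HTTP-client conventions and are assumptions, not mcp-rquest's documented tool schema.

```python
from urllib.parse import urlencode

# Hypothetical arguments for an http_post tool call. Key names are
# assumptions based on common HTTP-client conventions.
request_args = {
    "url": "https://example.com/api/items",
    "headers": {"Accept": "application/json"},
    "cookies": {"session": "abc123"},
    "params": {"page": "1"},       # becomes the query string
    "json": {"name": "widget"},    # sent as a JSON request body
}

# Query parameters fold into the URL like so:
query = urlencode(request_args["params"])
full_url = f"{request_args['url']}?{query}"
print(full_url)
```

Form data and multipart/form-data bodies would replace the `json` key with the corresponding body encoding.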
Available Tools
HTTP Request Tools:
- `http_get` - Perform GET requests with optional parameters
- `http_post` - Submit data via POST requests
- `http_put` - Update resources with PUT requests
- `http_delete` - Remove resources with DELETE requests
- `http_patch` - Partially update resources
- `http_head` - Retrieve only headers from a resource
- `http_options` - Retrieve options for a resource
- `http_trace` - Diagnostic request tracing
Response Handling Tools:
- `get_stored_response` - Retrieve stored large responses, optionally by line range
- `get_stored_response_with_markdown` - Convert HTML or PDF responses to Markdown format for better LLM processing
- `get_model_state` - Get the current state of the PDF models loading process
- `restart_model_loading` - Restart the PDF models loading process if it failed or got stuck
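The large-response workflow can be pictured as: write the body to a temporary file, report a rough token estimate, and let the model pull back line ranges on demand. The sketch below assumes a simple characters-divided-by-four token heuristic; the real server uses its own storage layout and tokenizer.

```python
import os
import tempfile

def store_response(body: str) -> dict:
    """Persist a response body to a temp file and return metadata,
    including a rough token estimate (chars / 4 heuristic)."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write(body)
    return {"path": path, "estimated_tokens": len(body) // 4}

def get_stored_lines(path: str, start: int, end: int) -> str:
    """Return lines start..end (1-indexed, inclusive) of a stored response."""
    with open(path) as f:
        lines = f.readlines()
    return "".join(lines[start - 1:end])

meta = store_response("line1\nline2\nline3\n")
print(get_stored_lines(meta["path"], 2, 3))
```

The token estimate lets the client decide whether to return the body inline or hand back only the storage reference.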
PDF Support
mcp-rquest now supports PDF to Markdown conversion, allowing you to download PDF files and convert them to Markdown format that's easy for LLMs to process:
- Automatic PDF Detection: PDF files are automatically detected based on content type
- Seamless Conversion: The same `get_stored_response_with_markdown` tool works for both HTML and PDF files
- High-Quality Conversion: Uses the Marker library for accurate PDF to Markdown transformation
- Optimized Performance: Models are pre-downloaded during package installation to avoid delays during request processing
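Automatic PDF detection typically combines the response's Content-Type header with the file's magic bytes. This is a sketch of the idea only, not mcp-rquest's actual detection code.

```python
def is_pdf_response(content_type: str, body: bytes) -> bool:
    """Detect a PDF by Content-Type header, falling back to the
    %PDF- magic bytes every PDF file starts with."""
    if "application/pdf" in content_type.lower():
        return True
    # Some servers mislabel PDFs (e.g. as text/html), so check the bytes too.
    return body[:5] == b"%PDF-"
```

A detected PDF is then routed through the Marker pipeline instead of the HTML-to-Markdown converter.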
Installation
Using uv (recommended)
No specific installation is needed when using uv; `uvx` runs `mcp-rquest` directly.
Using pip
Alternatively, you can install mcp-rquest via pip:

```shell
pip install mcp-rquest
```

After installation, you can run it as a script using:

```shell
python -m mcp_rquest
```
Configuration
Configure for Claude.app
Add to your Claude settings:
Using uvx:
```json
{
  "mcpServers": {
    "http-rquest": {
      "command": "uvx",
      "args": ["mcp-rquest"]
    }
  }
}
```
Using pip:
```json
{
  "mcpServers": {
    "http-rquest": {
      "command": "python",
      "args": ["-m", "mcp_rquest"]
    }
  }
}
```
Using pipx:
```json
{
  "mcpServers": {
    "http-rquest": {
      "command": "pipx",
      "args": ["run", "mcp-rquest"]
    }
  }
}
```
Browser Emulation
mcp-rquest leverages rquest's powerful browser emulation capabilities to provide realistic browser fingerprints, which helps bypass bot detection and access content normally available only to standard browsers. Supported browser fingerprints include:
- Chrome (multiple versions)
- Firefox
- Safari (including iOS and iPad versions)
- Edge
- OkHttp
This ensures that requests sent through mcp-rquest appear as legitimate browser traffic rather than bot requests.
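Much of the fingerprinting happens at the TLS layer (JA3/JA4) and cannot be shown in a few lines of Python, but the HTTP-level half of emulation is easy to picture: header names, values, and ordering that match a real browser. The values below are illustrative for one Chrome version, not what rquest actually sends.

```python
# Illustrative Chrome-like request headers. Real values vary by version;
# rquest manages these (plus the TLS/JA3/JA4 details) automatically.
CHROME_LIKE_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/124.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Sec-Fetch-Mode": "navigate",
}

# A bare HTTP client sends far less, which anti-bot systems flag.
BARE_CLIENT_HEADERS = {"User-Agent": "python-requests/2.31.0", "Accept": "*/*"}

missing = set(CHROME_LIKE_HEADERS) - set(BARE_CLIENT_HEADERS)
print(sorted(missing))
```

Header gaps like these are one of the cheapest signals bot-detection systems check, which is why full-profile emulation matters.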
Development
Setting up a Development Environment
- Clone the repository
- Create a virtual environment using uv:

  ```shell
  uv venv
  ```

- Activate the virtual environment:

  ```shell
  # Unix/macOS
  source .venv/bin/activate
  # Windows
  .venv\Scripts\activate
  ```

- Install development dependencies:

  ```shell
  uv pip install -e ".[dev]"
  ```