
Fetch JSONPath
Fetches data from web APIs and extracts only the JSON fields you need using JSONPath patterns, reducing token usage by up to 99%. Supports batch processing, custom headers, proxy configuration, and concurrent requests for efficient web API data retrieval.
What it does
- Extract specific JSON fields from APIs using JSONPath patterns
- Fetch text content from web pages with HTML-to-Markdown conversion
- Process multiple URLs concurrently in batch operations
- Configure custom headers and proxy settings for requests
- Filter and transform JSON data with arithmetic operations
- Use extended JSONPath functions like len, keys, and filtering
About Fetch JSONPath
Fetch JSONPath is a community-built MCP server published by ackness that provides AI assistants with tools and capabilities via the Model Context Protocol. It retrieves and extracts data from HTTP APIs using JSONPath, supporting batch processing, custom headers, and proxy configuration. It is categorized under Search Web. This server exposes 4 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install Fetch JSONPath in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Fetch JSONPath is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (4)
- fetch-json: Extract JSON content from a URL using JSONPath with extended features such as len, keys, filtering, and arithmetic operations. If 'pattern' is omitted or empty, the entire JSON document is returned. Supports different HTTP methods (default: GET).
- fetch-text: Fetch text content from a URL using various HTTP methods. Defaults to converting HTML to Markdown format.
- batch-fetch-json: Batch extract JSON content from multiple URLs with different extended JSONPath patterns. Supports all JSONPath extensions, fetches each unique request only once, and executes requests concurrently for better performance. Supports different HTTP methods.
- batch-fetch-text: Batch fetch raw text content from multiple URLs using various HTTP methods. Executes requests concurrently for better performance.
Fetch JSONPath MCP
A Model Context Protocol (MCP) server that provides tools for fetching JSON data and web content from URLs. Features intelligent content extraction, multiple HTTP methods, and browser-like headers for reliable web scraping.
🎯 Why Use This?
Reduce LLM Token Usage & Hallucination - Instead of fetching entire JSON responses and wasting tokens, extract only the data you need.
Traditional Fetch vs JSONPath Extract
❌ Traditional fetch (wasteful):
// API returns 2000+ tokens
{
"data": [
{
"id": 1,
"name": "Alice",
"email": "[email protected]",
"avatar": "https://...",
"profile": {
"bio": "Long bio text...",
"settings": {...},
"preferences": {...},
"metadata": {...}
},
"posts": [...],
"followers": [...],
"created_at": "2023-01-01",
"updated_at": "2024-01-01"
},
// ... 50 more users
],
"pagination": {...},
"meta": {...}
}
✅ JSONPath extract (efficient):
// Only 10 tokens - exactly what you need!
["Alice", "Bob", "Charlie"]
Using the pattern data[*].name cuts token usage by up to 99% and strips out the irrelevant data that can trigger model hallucination.
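The saving is easy to reproduce. The server itself evaluates patterns with jsonpath-ng, but the effect of `data[*].name` can be shown with plain Python on a made-up response (all field values below are illustrative, not from a real API):

```python
import json

# Hypothetical API response, a trimmed version of the example above.
api_response = {
    "data": [
        {"id": 1, "name": "Alice", "email": "alice@example.com"},
        {"id": 2, "name": "Bob", "email": "bob@example.com"},
        {"id": 3, "name": "Charlie", "email": "charlie@example.com"},
    ],
    "pagination": {"page": 1, "total": 3},
}

# What a `data[*].name` extraction boils down to: keep only the matched values.
names = [user["name"] for user in api_response["data"]]

full_payload = json.dumps(api_response)        # what a plain fetch would return
extracted_payload = json.dumps(names)          # what the extraction returns
print(names)  # ['Alice', 'Bob', 'Charlie']
```

The extracted payload is a small fraction of the full response, and the ratio grows with list length and object depth.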
Installation
For most IDEs, use the uvx tool to run the server.
{
"mcpServers": {
"fetch-jsonpath-mcp": {
"command": "uvx",
"args": [
"fetch-jsonpath-mcp"
]
}
}
}
Install in Claude Code
claude mcp add fetch-jsonpath-mcp -- uvx fetch-jsonpath-mcp
Install in Cursor
{
"mcpServers": {
"fetch-jsonpath-mcp": {
"command": "uvx",
"args": ["fetch-jsonpath-mcp"]
}
}
}
Install in Windsurf
Add this to your Windsurf MCP config file. See Windsurf MCP docs for more info.
Windsurf Local Server Connection
{
"mcpServers": {
"fetch-jsonpath-mcp": {
"command": "uvx",
"args": ["fetch-jsonpath-mcp"]
}
}
}
Install in VS Code
"mcp": {
"servers": {
"fetch-jsonpath-mcp": {
"type": "stdio",
"command": "uvx",
"args": ["fetch-jsonpath-mcp"]
}
}
}
Development Setup
1. Install Dependencies
uv sync
2. Start Demo Server (Optional)
# Install demo server dependencies
uv add fastapi uvicorn
# Start demo server on port 8080
uv run demo-server
3. Run MCP Server
uv run fetch-jsonpath-mcp
Demo Server Data
The demo server at http://localhost:8080 returns:
{
"foo": [{"baz": 1, "qux": "a"}, {"baz": 2, "qux": "b"}],
"bar": {
"items": [10, 20, 30],
"config": {"enabled": true, "name": "example"}
},
"metadata": {"version": "1.0.0"}
}
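The project's demo server is built on FastAPI, but if you only need a local endpoint serving this payload for experiments, a standard-library sketch works too (handler and port details are illustrative, not the project's code):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Same payload the demo server documents above.
DEMO_DATA = {
    "foo": [{"baz": 1, "qux": "a"}, {"baz": 2, "qux": "b"}],
    "bar": {
        "items": [10, 20, 30],
        "config": {"enabled": True, "name": "example"},
    },
    "metadata": {"version": "1.0.0"},
}

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(DEMO_DATA).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # silence per-request logging
        pass

def serve(port=8080):
    """Start the demo endpoint in a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), DemoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Point the fetch-json examples below at this endpoint and the documented results should match.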
Available Tools
fetch-json
Extract JSON data using JSONPath patterns with support for all HTTP methods.
{
"name": "fetch-json",
"arguments": {
"url": "http://localhost:8080",
"pattern": "foo[*].baz",
"method": "GET"
}
}
Returns: [1, 2]
Parameters:
- url (required): Target URL
- pattern (optional): JSONPath pattern for data extraction
- method (optional): HTTP method (GET, POST, PUT, DELETE, etc.). Default: "GET"
- data (optional): Request body for POST/PUT requests
- headers (optional): Additional HTTP headers
fetch-text
Fetch web content with intelligent text extraction. Defaults to Markdown format for better readability.
{
"name": "fetch-text",
"arguments": {
"url": "http://localhost:8080",
"output_format": "clean_text"
}
}
Returns: Clean text representation of the JSON data
Output Formats:
- "markdown" (default): Converts HTML to clean Markdown format
- "clean_text": Pure text with HTML tags removed
- "raw_html": Original HTML content
Parameters:
- url (required): Target URL
- method (optional): HTTP method. Default: "GET"
- data (optional): Request body for POST/PUT requests
- headers (optional): Additional HTTP headers
- output_format (optional): Output format. Default: "markdown"
batch-fetch-json
Process multiple URLs with different JSONPath patterns concurrently.
{
"name": "batch-fetch-json",
"arguments": {
"requests": [
{"url": "http://localhost:8080", "pattern": "foo[*].baz"},
{"url": "http://localhost:8080", "pattern": "bar.items[*]"}
]
}
}
Returns: [{"url": "http://localhost:8080", "pattern": "foo[*].baz", "success": true, "content": [1, 2]}, {"url": "http://localhost:8080", "pattern": "bar.items[*]", "success": true, "content": [10, 20, 30]}]
Request Object Parameters:
- url (required): Target URL
- pattern (optional): JSONPath pattern
- method (optional): HTTP method. Default: "GET"
- data (optional): Request body
- headers (optional): Additional HTTP headers
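The "each unique request is fetched only once" optimization boils down to dedup-then-fan-out. A minimal sketch, assuming deduplication on (url, method) so that two patterns against the same endpoint share one fetch; `batch_fetch` and `fake_fetch` are illustrative names, not the server's internals:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_fetch(requests, fetch_one):
    """Fetch each unique (url, method) only once, concurrently,
    then map results back to the original request order."""
    keys = [(r["url"], r.get("method", "GET")) for r in requests]
    unique = list(dict.fromkeys(keys))  # preserves order, drops duplicates
    with ThreadPoolExecutor() as pool:
        fetched = dict(zip(unique, pool.map(lambda k: fetch_one(*k), unique)))
    return [fetched[k] for k in keys]

calls = []  # record how many real fetches happen

def fake_fetch(url, method):
    """Stand-in for a real HTTP call."""
    calls.append(url)
    return f"{method} {url}"

reqs = [
    {"url": "http://localhost:8080", "pattern": "foo[*].baz"},
    {"url": "http://localhost:8080", "pattern": "bar.items[*]"},  # same URL: one fetch
]
results = batch_fetch(reqs, fake_fetch)
print(len(calls))  # 1 — two requests, one fetch
```

In the real server, each per-request JSONPath pattern would then be applied to the shared response body.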
batch-fetch-text
Fetch content from multiple URLs with intelligent text extraction.
{
"name": "batch-fetch-text",
"arguments": {
"requests": [
"http://localhost:8080",
{"url": "http://localhost:8080", "output_format": "raw_html"}
],
"output_format": "markdown"
}
}
Returns: [{"url": "http://localhost:8080", "success": true, "content": "# Demo Server Data\n\n..."}, {"url": "http://localhost:8080", "success": true, "content": "{\"foo\": [{\"baz\": 1, \"qux\": \"a\"}, {\"baz\": 2, \"qux\": \"b\"}]..."}]
Supports:
- Simple URL strings
- Full request objects with custom methods and headers
- Mixed input types in the same batch
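Accepting mixed string/object inputs suggests a normalization step like the following. This is a sketch under assumed defaults, not the server's actual code; the field defaults are taken from the parameter docs above:

```python
def normalize_request(item, default_format="markdown"):
    """Turn either a bare URL string or a full request object
    into a uniform request dict (illustrative helper)."""
    if isinstance(item, str):
        item = {"url": item}
    return {
        "url": item["url"],
        "method": item.get("method", "GET"),
        "headers": item.get("headers", {}),
        # a per-request output_format overrides the batch-level default
        "output_format": item.get("output_format", default_format),
    }

print(normalize_request("http://localhost:8080"))
print(normalize_request({"url": "http://localhost:8080", "output_format": "raw_html"}))
```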
JSONPath Examples
This project uses jsonpath-ng for JSONPath implementation.
| Pattern | Result | Description |
|---|---|---|
| foo[*].baz | [1, 2] | Get all baz values |
| bar.items[*] | [10, 20, 30] | Get all items |
| metadata.version | ["1.0.0"] | Get version |
For complete JSONPath syntax reference, see the jsonpath-ng documentation.
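The server relies on jsonpath-ng for the full grammar, but the three patterns in the table use only dotted names and the [*] wildcard, so a toy evaluator is enough to show what they do against the demo payload (this subset evaluator is for illustration only and is far less capable than jsonpath-ng):

```python
import re

def simple_jsonpath(pattern, doc):
    """Evaluate the dotted-name / [*] subset of JSONPath used in the
    table above. Toy code: the real server uses jsonpath-ng."""
    results = [doc]
    for token in re.findall(r"[^.\[\]]+|\[\*\]", pattern):
        if token == "[*]":
            results = [item for node in results for item in node]
        else:
            results = [node[token] for node in results]
    return results

demo = {
    "foo": [{"baz": 1, "qux": "a"}, {"baz": 2, "qux": "b"}],
    "bar": {"items": [10, 20, 30], "config": {"enabled": True, "name": "example"}},
    "metadata": {"version": "1.0.0"},
}

print(simple_jsonpath("foo[*].baz", demo))       # [1, 2]
print(simple_jsonpath("bar.items[*]", demo))     # [10, 20, 30]
print(simple_jsonpath("metadata.version", demo)) # ['1.0.0']
```

Note that even a scalar match like metadata.version comes back as a list, matching the table's ["1.0.0"].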
🚀 Performance Benefits
- Token Efficiency: Extract only needed data, not entire JSON responses
- Faster Processing: Smaller payloads = faster LLM responses
- Reduced Hallucination: Less irrelevant data = more accurate outputs
- Cost Savings: Fewer tokens = lower API costs
- Better Focus: Clean data helps models stay on task
- Smart Headers: Default browser headers prevent blocking and improve access
- Markdown Conversion: Clean, readable format that preserves structure
Configuration
Set environment variables to customize behavior:
# Request timeout in seconds (default: 10.0)
export JSONRPC_MCP_TIMEOUT=30
# SSL verification (default: true)
export JSONRPC_MCP_VERIFY=false
# Follow redirects (default: true)
export JSONRPC_MCP_FOLLOW_REDIRECTS=true
# Custom headers (will be merged with default browser headers)
export JSONRPC_MCP_HEADERS='{"Authorization": "Bearer token"}'
# HTTP proxy configuration
export JSONRPC_MCP_PROXY="http://proxy.example.com:8080"
Default Browser Headers: The server automatically includes realistic browser headers to prevent blocking:
- User-Agent: Chrome browser simulation
- Accept: Standard browser content types
- Accept-Language, Accept-Encoding: Browser defaults
- Security headers: Sec-Fetch-* headers for modern browsers
Custom headers in JSONRPC_MCP_HEADERS will override defaults when there are conflicts.
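Putting the documented variables together, the configuration loading might look roughly like this. The variable names come from the docs above; the parsing details, the default header values, and `load_settings` itself are assumptions for illustration, not the server's actual code:

```python
import json
import os

# Illustrative defaults; the server's real browser headers differ.
DEFAULT_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Chrome-like UA; illustrative value)",
    "Accept": "application/json, text/html;q=0.9, */*;q=0.8",
}

def load_settings(environ=os.environ):
    """Sketch of parsing the documented JSONRPC_MCP_* variables."""
    custom = json.loads(environ.get("JSONRPC_MCP_HEADERS", "{}"))
    return {
        "timeout": float(environ.get("JSONRPC_MCP_TIMEOUT", "10.0")),
        "verify": environ.get("JSONRPC_MCP_VERIFY", "true").lower() != "false",
        "follow_redirects": environ.get("JSONRPC_MCP_FOLLOW_REDIRECTS", "true").lower() != "false",
        "proxy": environ.get("JSONRPC_MCP_PROXY"),
        # custom headers override defaults on conflict, as documented
        "headers": {**DEFAULT_HEADERS, **custom},
    }

cfg = load_settings({"JSONRPC_MCP_TIMEOUT": "30",
                     "JSONRPC_MCP_HEADERS": '{"Authorization": "Bearer token"}'})
print(cfg["timeout"], cfg["headers"]["Authorization"])  # 30.0 Bearer token
```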
Development
# Run tests
pytest
# Check code quality
ruff check --fix
# Build and test locally
uv build
What's New in v1.1.0
- ✨ Multi-Method HTTP Support: GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS
- 🔄 Tool Renaming: get-json → fetch-json, get-text → fetch-text
- 📄 Markdown Conversion: Default HTML-to-Markdown conversion with markdownify
- 🌐 Smart Browser Headers: Automatic browser simulation headers
- 🎛️ Format Control: Three output formats for text content (markdown, clean_text, raw_html)
- 🚀 Enhanced Batch Processing: Support for different methods in batch operations