
Gemini DeepSearch
Performs automated multi-step web research using Google Search and Gemini models: it generates diverse search queries, conducts parallel searches, and synthesizes comprehensive, citation-rich answers with proper source tracking. Configurable research effort levels control query generation and iteration loops.
What it does
- Generate sophisticated search queries automatically
- Conduct parallel web searches using Google Search API
- Synthesize information from multiple sources
- Identify knowledge gaps and iterate research
- Produce citation-rich answers with source tracking
- Configure research depth with effort levels
About Gemini DeepSearch
Gemini DeepSearch is a community-built MCP server published by alexcong that provides AI assistants with tools and capabilities via the Model Context Protocol. It automates web research using the Google Search API and Gemini models, delivering in-depth, cited insights. It is categorized under Search & Web and AI & ML.
How to install
You can install Gemini DeepSearch in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Gemini DeepSearch is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Gemini DeepSearch MCP
Gemini DeepSearch MCP is an automated research agent that leverages Google Gemini models and Google Search to perform deep, multi-step web research. It generates sophisticated queries, synthesizes information from search results, identifies knowledge gaps, and produces high-quality, citation-rich answers.
Features
- Automated multi-step research using Gemini models and Google Search
- FastMCP integration for both HTTP API and stdio deployment
- Configurable effort levels (low, medium, high) for research depth
- Citation-rich responses with source tracking
- LangGraph-powered workflow with state management
Usage
Development Server (HTTP + Studio UI)
Start the LangGraph development server with Studio UI:
make dev
Local MCP Server (stdio)
Start the MCP server with stdio transport for integration with MCP clients:
make local
Testing
Run the test suite:
make test
Test the MCP stdio server:
make test_mcp
Use MCP inspector
make inspect
With Langsmith tracing
GEMINI_API_KEY=AI******* LANGSMITH_API_KEY=ls******* LANGSMITH_TRACING=true make inspect
API
The deep_search tool accepts:
- query (string): The research question or topic to investigate
- effort (string): Research effort level - "low", "medium", or "high"
  - Low: 1 query, 1 loop, Flash model
  - Medium: 3 queries, 2 loops, Flash model
  - High: 5 queries, 3 loops, Pro model
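As a sketch of what a client sends when invoking the tool, an MCP tools/call request would look roughly like the following (the query and id values here are illustrative, not prescribed by this server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "deep_search",
    "arguments": {
      "query": "latest developments in quantum computing",
      "effort": "medium"
    }
  }
}
```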
Return Format
HTTP MCP Server (Development mode):
- answer: Comprehensive research response with citations
- sources: List of source URLs used in research
Stdio MCP Server (Claude Desktop integration):
- file_path: Path to a JSON file containing the research results
The stdio MCP server writes results to a JSON file in the system temp directory to optimize token usage. The JSON file contains the same answer and sources data as the HTTP version, but is accessed via file path rather than returned directly.
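Since the stdio server returns only a file path, a small client-side helper along these lines can load the results. The answer and sources keys follow the return format described above; the function name itself is just an illustrative assumption:

```python
import json

def load_deep_search_result(file_path: str) -> tuple[str, list[str]]:
    """Read the answer text and source URL list from the JSON file
    that the stdio MCP server wrote to the temp directory."""
    with open(file_path, encoding="utf-8") as f:
        data = json.load(f)
    return data["answer"], data["sources"]
```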
Requirements
- Python 3.12+
- GEMINI_API_KEY environment variable
Installation
Install directly using uvx:
uvx install gemini-deepsearch-mcp
Claude Desktop Integration
To use the MCP server with Claude Desktop, add this configuration to your Claude Desktop config file:
macOS
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uvx",
      "args": ["gemini-deepsearch-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      },
      "timeout": 180000
    }
  }
}
Windows
Edit %APPDATA%/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uvx",
      "args": ["gemini-deepsearch-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      },
      "timeout": 180000
    }
  }
}
Linux
Edit ~/.config/claude/claude_desktop_config.json:
{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uvx",
      "args": ["gemini-deepsearch-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      },
      "timeout": 180000
    }
  }
}
Important:
- Replace your-gemini-api-key-here with your actual Gemini API key
- Restart Claude Desktop after updating the configuration
- Set an ample timeout to avoid MCP error -32001: Request timed out
Alternative: Local Development Setup
For development or if you prefer to run from source:
{
  "mcpServers": {
    "gemini-deepsearch": {
      "command": "uv",
      "args": ["run", "python", "main.py"],
      "cwd": "/path/to/gemini-deepsearch-mcp",
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
Replace /path/to/gemini-deepsearch-mcp with the actual absolute path to your project directory.
Once configured, you can use the deep_search tool in Claude Desktop by asking questions like:
- "Use deep_search to research the latest developments in quantum computing"
- "Search for information about renewable energy trends with high effort"
Agent Source
The deep search agent is from the Gemini Fullstack LangGraph Quickstart repository.
License
MIT