
Gemini Bridge
Bridges Claude with Google's Gemini AI through the official Gemini CLI, enabling direct queries and file-based context sharing between the two language models.
What it does
- Send queries to Gemini models
- Share file context with Gemini
- Execute Gemini CLI commands
- Analyze files using Gemini
About Gemini Bridge
Gemini Bridge is a community-built MCP server published by elyin that provides AI assistants with tools and capabilities via the Model Context Protocol. It bridges Claude and Google's Gemini AI using the official Gemini CLI, enabling direct queries and file sharing between models. It is categorized under AI/ML and developer tools, and exposes 2 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install Gemini Bridge in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Gemini Bridge is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (2)
consult_gemini
Send a query directly to the Gemini CLI.
Args:
- query: Prompt text forwarded verbatim to the CLI.
- directory: Working directory used for command execution.
- model: Optional model alias (`flash`, `pro`) or full Gemini model id.
- timeout_seconds: Optional per-call timeout override in seconds.
Returns: Gemini's response text or an explanatory error string.

consult_gemini_with_files
Send a query to the Gemini CLI with file context.
Args:
- query: Prompt text forwarded to the CLI.
- directory: Working directory used for resolving relative file paths.
- files: Relative or absolute file paths to include alongside the prompt.
- model: Optional model alias (`flash`, `pro`) or full Gemini model id.
- timeout_seconds: Optional per-call timeout override in seconds.
- mode: `"inline"` streams truncated snippets; `"at_command"` emits `@path` directives so Gemini CLI resolves files itself.
Returns: Gemini's response or an explanatory error string with any warnings.
Gemini Bridge
A lightweight MCP (Model Context Protocol) server that enables AI coding assistants to interact with Google's Gemini AI through the official CLI. Works with Claude Code, Cursor, VS Code, and other MCP-compatible clients. Designed for simplicity, reliability, and seamless integration.
✨ Features
- Direct Gemini CLI Integration: Zero API costs using official Gemini CLI
- Simple MCP Tools: Two core functions for basic queries and file analysis
- Stateless Operation: No sessions, caching, or complex state management
- Production Ready: Robust error handling with configurable timeouts (60 seconds by default)
- Minimal Dependencies: Only requires `mcp>=1.0.0` and the Gemini CLI
- Easy Deployment: Support for both uvx and traditional pip installation
- Universal MCP Compatibility: Works with any MCP-compatible AI coding assistant
🚀 Quick Start
Prerequisites
1. Install Gemini CLI:
   ```bash
   npm install -g @google/gemini-cli
   ```
2. Authenticate with Gemini:
   ```bash
   gemini auth login
   ```
3. Verify installation:
   ```bash
   gemini --version
   ```
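If you script your setup, a quick availability check can catch a missing CLI before the server ever runs. This is a minimal sketch, not part of Gemini Bridge itself; it only assumes the official CLI installs a `gemini` binary on PATH.

```python
import shutil
import subprocess

def check_gemini_cli() -> str:
    """Return the installed Gemini CLI version, or raise if it is missing."""
    # Assumption: the official CLI installs a `gemini` binary on PATH.
    if shutil.which("gemini") is None:
        raise RuntimeError(
            "Gemini CLI not found; install it with `npm install -g @google/gemini-cli`"
        )
    result = subprocess.run(
        ["gemini", "--version"], capture_output=True, text=True, timeout=10
    )
    return result.stdout.strip()

print(check_gemini_cli())
```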
Installation
🎯 Recommended: PyPI Installation
```bash
# Install from PyPI
pip install gemini-bridge

# Add to Claude Code with uvx (recommended)
claude mcp add gemini-bridge -s user -- uvx gemini-bridge
```
Alternative: From Source
```bash
# Clone the repository
git clone https://github.com/shelakh/gemini-bridge.git
cd gemini-bridge

# Build and install locally
uvx --from build pyproject-build
pip install dist/*.whl

# Add to Claude Code
claude mcp add gemini-bridge -s user -- uvx gemini-bridge
```
Development Installation
```bash
# Clone and install in development mode
git clone https://github.com/shelakh/gemini-bridge.git
cd gemini-bridge
pip install -e .

# Add to Claude Code (development)
claude mcp add gemini-bridge-dev -s user -- python -m src
```
🌐 Multi-Client Support
Gemini Bridge works with any MCP-compatible AI coding assistant - the same server supports multiple clients through different configuration methods.
Supported MCP Clients
- Claude Code ✅ (Default)
- Cursor ✅
- VS Code ✅
- Windsurf ✅
- Cline ✅
- Void ✅
- Cherry Studio ✅
- Augment ✅
- Roo Code ✅
- Zencoder ✅
- Any MCP-compatible client ✅
Configuration Examples
Claude Code (Default)
```bash
# Recommended installation
claude mcp add gemini-bridge -s user -- uvx gemini-bridge

# Development installation
claude mcp add gemini-bridge-dev -s user -- python -m src
```
Cursor
Global Configuration (~/.cursor/mcp.json):
```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
Project-Specific (.cursor/mcp.json in your project):
```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
You can also add the server through the UI: Settings → Cursor Settings → MCP → Add new global MCP server.
VS Code
Configuration (.vscode/mcp.json in your workspace):
```json
{
  "servers": {
    "gemini-bridge": {
      "type": "stdio",
      "command": "uvx",
      "args": ["gemini-bridge"]
    }
  }
}
```
Alternative: Through Extensions
- Open the Extensions view (Ctrl+Shift+X)
- Search for MCP extensions
- Add a custom server with the command: `uvx gemini-bridge`
Windsurf
Add to your Windsurf MCP configuration:
```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
Cline (VS Code Extension)
- Open Cline and click MCP Servers in the top navigation
- Select Installed tab → Advanced MCP Settings
- Add to `cline_mcp_settings.json`:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
Void
Go to: Settings → MCP → Add MCP Server
```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
Cherry Studio
- Navigate to Settings → MCP Servers → Add Server
- Fill in the server details:
  - Name: `gemini-bridge`
  - Type: `STDIO`
  - Command: `uvx`
  - Arguments: `["gemini-bridge"]`
- Save the configuration
Augment
Using the UI:
- Click hamburger menu → Settings → Tools
- Click + Add MCP button
- Enter the command: `uvx gemini-bridge`
- Name: Gemini Bridge
Manual Configuration:
"augment.advanced": {
"mcpServers": [
{
"name": "gemini-bridge",
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
]
}
Roo Code
- Go to Settings → MCP Servers → Edit Global Config
- Add to `mcp_settings.json`:

```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {}
    }
  }
}
```
Zencoder
- Go to Zencoder menu (...) → Tools → Add Custom MCP
- Add configuration:
```json
{
  "command": "uvx",
  "args": ["gemini-bridge"],
  "env": {}
}
```
- Hit the Install button
Alternative Installation Methods
For pip-based installations:
```json
{
  "command": "gemini-bridge",
  "args": [],
  "env": {}
}
```
For development/local testing:
```json
{
  "command": "python",
  "args": ["-m", "src"],
  "env": {},
  "cwd": "/path/to/gemini-bridge"
}
```
For npm-style installation (if needed):
```json
{
  "command": "npx",
  "args": ["gemini-bridge"],
  "env": {}
}
```
Universal Usage
Once configured with any client, use the same two tools:
- Ask general questions: "What authentication patterns are used in this codebase?"
- Analyze specific files: "Review these auth files for security issues"
The server implementation is identical - only the client configuration differs!
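To sanity-check a configuration outside any editor, you can talk to the server over stdio yourself. A minimal sketch using the official `mcp` Python SDK's standard client API (not anything specific to this project), assuming `uvx gemini-bridge` launches the server as configured above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server over stdio, exactly as an MCP client would.
    params = StdioServerParameters(command="uvx", args=["gemini-bridge"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])
            # Invoke one of the two tools with minimal arguments.
            result = await session.call_tool(
                "consult_gemini",
                {"query": "Say hello in one line.", "directory": "."},
            )
            print(result)

asyncio.run(main())
```

If the tool list prints `consult_gemini` and `consult_gemini_with_files`, the server side is working and any remaining issues are client configuration.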
⚙️ Configuration
Timeout Configuration
By default, Gemini Bridge uses a 60-second timeout for all CLI operations. For longer queries (large files, complex analysis), you can configure a custom timeout using the GEMINI_BRIDGE_TIMEOUT environment variable.
Example configurations:
Claude Code
```bash
# Add with custom timeout (120 seconds)
claude mcp add gemini-bridge -s user --env GEMINI_BRIDGE_TIMEOUT=120 -- uvx gemini-bridge
```
Manual Configuration (mcp_settings.json)
```json
{
  "mcpServers": {
    "gemini-bridge": {
      "command": "uvx",
      "args": ["gemini-bridge"],
      "env": {
        "GEMINI_BRIDGE_TIMEOUT": "120"
      }
    }
  }
}
```
Timeout Options:
- Default: 60 seconds (if not configured)
- Range: Any positive integer (seconds)
- Per-call override: Supply `timeout_seconds` to either tool for a one-off extension
- Recommended: 120-300 seconds for large file analysis
- Invalid values: Fall back to 60 seconds with a warning
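The precedence described above (per-call override, then environment variable, then the 60-second default with a warning on invalid values) can be pictured roughly like this. A sketch for illustration only, not the project's actual source:

```python
import logging
import os

DEFAULT_TIMEOUT = 60  # seconds

def resolve_timeout(per_call: int | None = None) -> int:
    """Hypothetical resolution order: per-call override > env var > default."""
    if per_call is not None and per_call > 0:
        return per_call
    raw = os.environ.get("GEMINI_BRIDGE_TIMEOUT")
    if raw is not None:
        try:
            value = int(raw)
            if value > 0:
                return value
        except ValueError:
            pass
        # Invalid or non-positive values fall back to the default, with a warning.
        logging.warning(
            "Invalid GEMINI_BRIDGE_TIMEOUT=%r; falling back to %s seconds",
            raw, DEFAULT_TIMEOUT,
        )
    return DEFAULT_TIMEOUT
```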
🛠️ Available Tools
consult_gemini
Direct CLI bridge for simple queries.
Parameters:
- `query` (string): The question or prompt to send to Gemini
- `directory` (string): Working directory for the query (default: current directory)
- `model` (string, optional): Model to use - "flash" or "pro" (default: "flash")
- `timeout_seconds` (int, optional): Override the execution timeout for this request
Example:
```python
consult_gemini(
    query="Find authentication patterns in this codebase",
    directory="/path/to/project",
    model="flash"
)
```
consult_gemini_with_files
CLI bridge with file attachments for detailed analysis.
Parameters:
- `query` (string): The question or prompt to send to Gemini
- `directory` (string): Working directory for the query
- `files` (list): List of file paths relative to the directory
- `model` (string, optional): Model to use - "flash" or "pro" (default: "flash")
- `timeout_seconds` (int, optional): Override the execution timeout for this request
- `mode` (string, optional): Either `"inline"` (default) to stream file contents or `"at_command"` to let Gemini CLI resolve `@path` references itself
Example:
```python
consult_gemini_with_files(
    query="Analyze these auth files and suggest improvements",
    directory="/path/to/project",
    files=["src/auth.py", "src/models.py"],
    model="pro",
    timeout_seconds=180
)
```
Tip: When scanning large trees, switch to `mode="at_command"` so the Gemini CLI handles file globbing and truncation natively.
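The difference between the two modes is easiest to see in how the outgoing prompt might be assembled. A rough sketch under stated assumptions - the real server's truncation limits and formatting may differ, and `MAX_INLINE_CHARS` is a made-up constant:

```python
from pathlib import Path

MAX_INLINE_CHARS = 4000  # hypothetical per-file truncation limit

def build_prompt(query: str, directory: str, files: list[str], mode: str = "inline") -> str:
    """Illustrate how inline vs at_command could shape the CLI prompt."""
    if mode == "at_command":
        # Emit @path directives; the Gemini CLI resolves the files itself.
        refs = " ".join(f"@{path}" for path in files)
        return f"{refs} {query}"
    # Inline mode: embed (possibly truncated) file snippets in the prompt.
    base = Path(directory)
    parts = [query]
    for path in files:
        text = (base / path).read_text(errors="replace")[:MAX_INLINE_CHARS]
        parts.append(f"--- {path} ---\n{text}")
    return "\n\n".join(parts)
```

In `at_command` mode the prompt stays small regardless of file size, which is why it suits large trees.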
📋 Usage Examples
Basic Code Analysis
```python
# Simple research query
consult_gemini(
    query="What authentication patterns are used in this project?",
    directory="/Users/dev/my-project"
)
```
Detailed File Review
```python
# Analyze specific files
consult_gemini_with_files(
    query="Review these files and suggest security imp
```
---
*README truncated. [View full README on GitHub](https://github.com/elyin/gemini-bridge).*
Related Skills
- UI design system toolkit for Senior UI Designer including design token generation, component documentation, responsive design calculations, and developer handoff tools. Use for creating design systems, maintaining visual consistency, and facilitating design-dev collaboration.
- Bridge between Claude Code and OpenAI Codex CLI - generates AGENTS.md from CLAUDE.md, provides Codex CLI execution helpers, and enables seamless interoperability between both tools
- Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: "AI SDK", "Vercel AI SDK", "generateText", "streamText", "add AI to my app", "build an agent", "tool calling", "structured output", "useChat".
- Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals. Use PROACTIVELY for API documentation or developer portal creation.
- Use when working with the OpenAI API (Responses API) or OpenAI platform features (tools, streaming, Realtime API, auth, models, rate limits, MCP) and you need authoritative, up-to-date documentation (schemas, examples, limits, edge cases). Prefer the OpenAI Developer Documentation MCP server tools when available; otherwise guide the user to enable `openaiDeveloperDocs`.
- Guide for building TypeScript CLIs with Bun. Use when creating command-line tools, adding subcommands to existing CLIs, or building developer tooling. Covers argument parsing, subcommand patterns, output formatting, and distribution.