
AI Hub
Provides unified access to 100+ AI providers (OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more) through a single interface using LiteLLM, enabling seamless provider switching via YAML-based configuration, with tools for chatting, listing models, and retrieving model information.
What it does
- Chat with 100+ AI models from different providers
- List available models across all providers
- Retrieve detailed model information and capabilities
- Switch between AI providers using YAML configuration
- Access custom endpoints and proxy servers
About AI Hub
AI Hub is a community-built MCP server published by feiskyer that provides AI assistants with tools and capabilities via the Model Context Protocol. AI Hub offers unified access to 100+ AI providers via LiteLLM, enabling seamless switching and configuration. It is categorized under AI/ML and developer tools.
How to install
You can install AI Hub in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
AI Hub is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
MCP AI Hub
A Model Context Protocol (MCP) server that provides unified access to various AI providers through LiteLLM. Chat with OpenAI, Anthropic, and 100+ other AI models using a single, consistent interface.
🌟 Overview
MCP AI Hub acts as a bridge between MCP clients (like Claude Desktop/Code) and multiple AI providers. It leverages LiteLLM's unified API to provide seamless access to 100+ AI models without requiring separate integrations for each provider.
Key Benefits:
- Unified Interface: Single API for all AI providers
- 100+ Providers: OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more
- MCP Protocol: Native integration with Claude Desktop and Claude Code
- Flexible Configuration: YAML-based configuration with Pydantic validation
- Multiple Transports: stdio, SSE, and HTTP transport options
- Custom Endpoints: Support for proxy servers and local deployments
Quick Start
1. Install
Choose your preferred installation method:
# Option A: Install from PyPI
pip install mcp-ai-hub
# Option B: Install with uv (recommended)
uv tool install mcp-ai-hub
# Option C: Install from source
pip install git+https://github.com/feiskyer/mcp-ai-hub.git
Installation Notes:
- uv is a fast Python package installer and resolver
- The package requires Python 3.10 or higher
- All dependencies are automatically resolved and installed
2. Configure
Create a configuration file at ~/.ai_hub.yaml with your API keys and model configurations:
model_list:
  - model_name: gpt-4 # Friendly name you'll use in MCP tools
    litellm_params:
      model: openai/gpt-4 # LiteLLM provider/model identifier
      api_key: "sk-your-openai-api-key-here" # Your actual OpenAI API key
      max_tokens: 2048 # Maximum response tokens
      temperature: 0.7 # Response creativity (0.0-1.0)
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-api-key-here"
      max_tokens: 4096
      temperature: 0.7
Configuration Guidelines:
- API Keys: Replace placeholder keys with your actual API keys
- Model Names: Use descriptive names you'll remember (e.g., gpt-4, claude-sonnet)
- LiteLLM Models: Use LiteLLM's provider/model format (e.g., openai/gpt-4, anthropic/claude-3-5-sonnet-20241022)
- Parameters: Configure max_tokens, temperature, and other LiteLLM-supported parameters
- Security: Keep your config file secure with appropriate file permissions (chmod 600)
3. Connect to Claude Desktop
Configure Claude Desktop to use MCP AI Hub by editing your configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ai-hub": {
      "command": "mcp-ai-hub"
    }
  }
}
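If your configuration lives somewhere other than ~/.ai_hub.yaml, you can pass the server's CLI flags through the standard args field of the client config. A minimal sketch using the --config flag documented under CLI Arguments below (the path is a placeholder):
{
  "mcpServers": {
    "ai-hub": {
      "command": "mcp-ai-hub",
      "args": ["--config", "/path/to/config.yaml"]
    }
  }
}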
4. Connect to Claude Code
claude mcp add -s user ai-hub mcp-ai-hub
Advanced Usage
CLI Options and Transport Types
MCP AI Hub supports multiple transport mechanisms for different use cases:
Command Line Options:
# Default stdio transport (for MCP clients like Claude Desktop)
mcp-ai-hub
# Server-Sent Events transport (for web applications)
mcp-ai-hub --transport sse --host 0.0.0.0 --port 3001
# Streamable HTTP transport (for direct API calls)
mcp-ai-hub --transport http --port 8080
# Custom config file and debug logging
mcp-ai-hub --config /path/to/config.yaml --log-level DEBUG
Transport Type Details:
| Transport | Use Case | Default Host:Port | Description |
|---|---|---|---|
| stdio | MCP clients (Claude Desktop/Code) | N/A | Standard input/output, default for MCP |
| sse | Web applications | localhost:3001 | Server-Sent Events for real-time web apps |
| http | Direct API calls | localhost:3001 (override with --port) | HTTP transport with streaming support |
CLI Arguments:
- --transport {stdio,sse,http}: Transport protocol (default: stdio)
- --host HOST: Host address for SSE/HTTP (default: localhost)
- --port PORT: Port number for SSE/HTTP (default: 3001; override if you need a different port)
- --config CONFIG: Custom config file path (default: ~/.ai_hub.yaml)
- --log-level {DEBUG,INFO,WARNING,ERROR}: Logging verbosity (default: INFO)
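Any MCP-capable client can connect over these transports. As an illustration, here is a minimal sketch using the official MCP Python SDK against the HTTP transport; it assumes the server was started with --transport http --port 8080 and exposes the SDK's default /mcp path (verify the path against your deployment):
# Minimal sketch: connect to MCP AI Hub over the streamable HTTP transport.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Assumes: mcp-ai-hub --transport http --port 8080
    async with streamablehttp_client("http://localhost:8080/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect: chat, list_models, get_model_info

asyncio.run(main())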
Usage
Once MCP AI Hub is connected to your MCP client, you can interact with AI models using these tools:
MCP Tool Reference
Primary Chat Tool:
chat(model_name: str, message: str | list[dict]) -> str
- model_name: Name of the configured model (e.g., "gpt-4", "claude-sonnet")
- message: String message or OpenAI-style message list
- Returns: AI model response as string
Model Discovery Tools:
list_models() -> list[str]
- Returns: List of all configured model names
get_model_info(model_name: str) -> dict
- model_name: Name of the configured model
- Returns: Model configuration details including provider, parameters, etc.
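To make the tool contract concrete, here is a short sketch that launches the server over stdio and calls these tools via the official MCP Python SDK; the model name gpt-4 is assumed to be configured in your ~/.ai_hub.yaml:
# Sketch: call MCP AI Hub's tools programmatically over stdio.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="mcp-ai-hub")  # spawns the server process
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            models = await session.call_tool("list_models", {})
            print(models.content)  # configured model names, e.g. gpt-4, claude-sonnet
            reply = await session.call_tool(
                "chat", {"model_name": "gpt-4", "message": "Say hello in one sentence."}
            )
            print(reply.content)

asyncio.run(main())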
Configuration
MCP AI Hub supports 100+ AI providers through LiteLLM. Configure your models in ~/.ai_hub.yaml with API keys and custom parameters.
System Prompts
You can define system prompts at two levels:
- global_system_prompt: Applied to all models by default
- Per-model system_prompt: Overrides the global prompt for that model
Precedence: model-specific prompt > global prompt. If a model's system_prompt is set to an empty string, it disables the global prompt for that model.
global_system_prompt: "You are a helpful AI assistant. Be concise."
model_list:
  - model_name: gpt-4
    system_prompt: "You are a precise coding assistant."
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-openai-api-key"
  - model_name: claude-sonnet
    # Empty string disables the global prompt for this model
    system_prompt: ""
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-api-key"
Notes:
- The server prepends the configured system prompt to the message list it sends to providers.
- If you pass an explicit message list that already contains a system message, both system messages will be included in order (configured prompt first).
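For example, if a model is configured with a system prompt and you also pass a message list containing your own system message, the provider receives both in this order (an illustrative sketch with hypothetical contents, not captured server output):
# Illustrative message order sent to the provider.
messages = [
    {"role": "system", "content": "You are a precise coding assistant."},  # configured system_prompt, prepended
    {"role": "system", "content": "Answer in French."},                    # system message from your input list
    {"role": "user", "content": "Explain list comprehensions."},
]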
Supported Providers
Major AI Providers:
- OpenAI: GPT-4, GPT-3.5-turbo, GPT-4-turbo, etc.
- Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- Google: Gemini Pro, Gemini Pro Vision, Gemini Ultra
- Azure OpenAI: Azure-hosted OpenAI models
- AWS Bedrock: Claude, Llama, Jurassic, and more
- Together AI: Llama, Mistral, Falcon, and open-source models
- Hugging Face: Various open-source models
- Local Models: Ollama, LM Studio, and other local deployments
Configuration Parameters:
- api_key: Your provider API key (required)
- max_tokens: Maximum response tokens (optional)
- temperature: Response creativity 0.0-1.0 (optional)
- api_base: Custom endpoint URL (for proxies/local servers)
- Additional: All LiteLLM-supported parameters
Configuration Examples
Basic Configuration:
global_system_prompt: "You are a helpful AI assistant. Be concise."
model_list:
  - model_name: gpt-4
    system_prompt: "You are a precise coding assistant." # overrides global
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-actual-openai-api-key"
      max_tokens: 2048
      temperature: 0.7
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-actual-anthropic-api-key"
      max_tokens: 4096
      temperature: 0.7
Custom Parameters:
model_list:
  - model_name: gpt-4-creative
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-openai-key"
      max_tokens: 4096
      temperature: 0.9 # Higher creativity
      top_p: 0.95
      frequency_penalty: 0.1
      presence_penalty: 0.1
  - model_name: claude-analytical
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-key"
      max_tokens: 8192
      temperature: 0.3 # Lower creativity for analytical tasks
      stop_sequences: ["\n\n", "Human:"]
Local LLM Server Configuration:
model_list:
  - model_name: local-llama
    litellm_params:
      model: openai/llama-2-7b-chat
      api_key: "dummy-key" # Local servers often accept any API key
      api_base: "http://localhost:8080/v1" # Local OpenAI-compatible server
      max_tokens: 2048
      temperature: 0.7
For more providers, please refer to the LiteLLM docs: https://docs.litellm.ai/docs/providers.
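Under the hood, each chat call resolves to a LiteLLM completion built from the model's litellm_params; the local-llama entry above behaves roughly like this sketch (illustrative, not the server's exact code):
# Rough equivalent of chat("local-llama", "Hello!") via LiteLLM.
import litellm

response = litellm.completion(
    model="openai/llama-2-7b-chat",      # LiteLLM provider/model identifier
    api_key="dummy-key",                 # passed through from litellm_params
    api_base="http://localhost:8080/v1", # local OpenAI-compatible endpoint
    max_tokens=2048,
    temperature=0.7,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)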
Development
Setup:
# Install all dependencies including dev dependencies
uv sync
# Install package in development mode
uv pip install -e ".[dev]"
# Add new runtime dependencies
uv add package_name
# Add new development dependencies
uv add --dev package_name
# Update dependencies
uv sync --upgrade
Running and Testing:
# Run the MCP server
uv run mcp-ai-hub
# Run with custom configuration
uv run mcp-ai-hub --config ./custom_config.yaml --log-level DEBUG
# Run with different transport
uv run mcp-ai-hub --transport sse --port 3001
# Run tests (when test suite is added)
uv run pytest
# Run tests with coverage
uv run pytest --cov=src/mcp_
---
*README truncated. [View full README on GitHub](https://github.com/feiskyer/mcp-ai-hub).*
Related Skills
UI design system toolkit for Senior UI Designer including design token generation, component documentation, responsive design calculations, and developer handoff tools. Use for creating design systems, maintaining visual consistency, and facilitating design-dev collaboration.
Find, connect, and use MCP tools and skills via the Smithery CLI. Use when the user searches for new tools or skills, wants to discover integrations, connect to an MCP, install a skill, or wants to interact with an external service (email, Slack, Discord, GitHub, Jira, Notion, databases, cloud APIs, monitoring, etc.).
Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: "AI SDK", "Vercel AI SDK", "generateText", "streamText", "add AI to my app", "build an agent", "tool calling", "structured output", "useChat".
Access 1200+ AI Agent tools via Model Context Protocol (MCP)
Build agentic applications with GitHub Copilot SDK. Use when embedding AI agents in apps, creating custom tools, implementing streaming responses, managing sessions, connecting to MCP servers, or creating custom agents. Triggers on Copilot SDK, GitHub SDK, agentic app, embed Copilot, programmable agent, MCP server, custom agent.
Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals. Use PROACTIVELY for API documentation or developer portal creation.