
Omnisearch
Provides unified search across multiple providers (Tavily, Brave, Perplexity, Kagi, Exa, GitHub, and more) through a single interface, with advanced search operators and content processing, dynamically selecting the best available provider for each request.
What it does
- Search across 7+ providers including Tavily, Brave, and Perplexity
- Use advanced search operators like site:, filetype:, and date filters
- Search GitHub repositories, code, and users
- Process and enhance content with AI tools
- Filter results by domain, language, and location
- Extract semantic information with neural search
About Omnisearch
Omnisearch is a community-built MCP server published by spences10 that provides AI assistants with tools and capabilities via the Model Context Protocol. It unifies search across providers such as Tavily, Brave, and Perplexity for flexible, enhanced content retrieval. It is categorized under search / web.
How to install
You can install Omnisearch in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Omnisearch is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
mcp-omnisearch
A Model Context Protocol (MCP) server that provides unified access to multiple search providers and AI tools. This server combines the capabilities of Tavily, Perplexity, Kagi, Jina AI, Brave, Exa AI, and Firecrawl to offer comprehensive search, AI responses, content processing, and enhancement features through a single interface.
Features
🔍 Search Tools
- Tavily Search: Optimized for factual information with strong citation support. Supports domain filtering through API parameters (include_domains/exclude_domains).
- Brave Search: Privacy-focused search with comprehensive operator support: site:, -site:, filetype:/ext:, intitle:, inurl:, inbody:, inpage:, lang:, loc:, before:, after:, +term, -term, and exact phrases ("phrase").
- Kagi Search: High-quality search with full operator support: site:, -site:, filetype:/ext:, intitle:, inurl:, inbody:, inpage:, lang:, loc:, before:, after:, +term, -term, and exact phrases ("phrase").
- Exa Search: AI-powered web search using neural and keyword search. Optimized for AI applications with semantic understanding, content extraction, and research capabilities.
- GitHub Search: Comprehensive code search across public GitHub repositories with three specialized tools:
  - Code Search: Find code examples, function definitions, and files using advanced syntax (filename:, path:, repo:, user:, language:, in:file)
  - Repository Search: Discover repositories with sorting by stars, forks, or recent updates
  - User Search: Find GitHub users and organizations
🎯 Search Operators
MCP Omnisearch provides powerful search capabilities through operators and parameters:
Search Operator Reference
Brave & Kagi Operators (use in query string):
- Domain:
site:example.com,-site:example.com - File type:
filetype:pdforext:pdf - Location:
intitle:term,inurl:term,inbody:term,inpage:term - Language:
lang:en(ISO 639-1 codes) - Country:
loc:us(ISO 3166-1 codes) - Date:
before:2024,after:2024-01-01 - Exact:
"exact phrase" - Include/Exclude:
+required,-excluded
Tavily (API parameters only):
- Domain filtering: include_domains, exclude_domains
Example Usage
```jsonc
// Brave/Kagi: Advanced operators in query
{
  "query": "filetype:pdf lang:en site:microsoft.com +typescript -javascript",
  "provider": "brave"
}

// Brave/Kagi: Search gists
{
  "query": "site:gist.github.com claude code settings",
  "provider": "brave"
}

// Tavily: API parameters for domain filtering
{
  "query": "typescript guide",
  "provider": "tavily",
  "include_domains": ["microsoft.com"]
}
```
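Operator strings like the ones above can also be assembled programmatically before being passed to a search tool. A minimal Python sketch, purely illustrative — the `build_query` helper is not part of mcp-omnisearch, which simply forwards the query string to the provider:

```python
def build_query(terms, site=None, filetype=None, lang=None,
                before=None, after=None, exclude=None):
    """Compose a Brave/Kagi query string from structured filters.

    Illustrative helper only: mcp-omnisearch passes the finished
    query string through to the provider unchanged.
    """
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    if lang:
        parts.append(f"lang:{lang}")
    if before:
        parts.append(f"before:{before}")
    if after:
        parts.append(f"after:{after}")
    for term in exclude or []:
        parts.append(f"-{term}")
    return " ".join(parts)

query = build_query(["+typescript"], site="microsoft.com",
                    filetype="pdf", lang="en", exclude=["javascript"])
print(query)  # +typescript site:microsoft.com filetype:pdf lang:en -javascript
```

The resulting string drops straight into the `"query"` field of the JSON examples above.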
Provider Capabilities
- Brave Search: Full native operator support in query string
- Kagi Search: Complete operator support in query string
- Tavily Search: Domain filtering through API parameters
- Exa Search: Domain filtering through API parameters, semantic search with neural understanding
- GitHub Search: Advanced code search syntax with qualifiers:
  - filename:remote.ts - Search for specific files
  - path:src/lib - Search within specific directories
  - repo:user/repo - Search within specific repositories
  - user:username - Search within a user's repositories
  - language:typescript - Filter by programming language
  - in:file "export function" - Search for text within files
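These qualifiers combine in one query string alongside free text. A small sketch of how such a string might be composed (the helper below is hypothetical, not a function the server exposes):

```python
def github_code_query(text, **qualifiers):
    """Join free text with GitHub code-search qualifiers,
    e.g. repo=..., language=..., path=... (illustrative only)."""
    quals = " ".join(f"{key}:{value}" for key, value in qualifiers.items())
    return f"{text} {quals}".strip()

q = github_code_query('"export function"', repo="user/repo",
                      language="typescript")
print(q)  # "export function" repo:user/repo language:typescript
```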
🤖 AI Response Tools
- Perplexity AI: Advanced response generation combining real-time web search with GPT-4 Omni and Claude 3
- Kagi FastGPT: Quick AI-generated answers with citations (900ms typical response time)
- Exa Answer: Get direct AI-generated answers to questions using Exa Answer API
📄 Content Processing Tools
- Jina AI Reader: Clean content extraction with image captioning and PDF support
- Kagi Universal Summarizer: Content summarization for pages, videos, and podcasts
- Tavily Extract: Extract raw content from single or multiple web pages with configurable extraction depth ('basic' or 'advanced'). Returns both combined content and individual URL content, with metadata including word count and extraction statistics
- Firecrawl Scrape: Extract clean, LLM-ready data from single URLs with enhanced formatting options
- Firecrawl Crawl: Deep crawling of all accessible subpages on a website with configurable depth limits
- Firecrawl Map: Fast URL collection from websites for comprehensive site mapping
- Firecrawl Extract: Structured data extraction with AI using natural language prompts
- Firecrawl Actions: Support for page interactions (clicking, scrolling, etc.) before extraction for dynamic content
- Exa Contents: Extract full content from Exa search result IDs
- Exa Similar: Find web pages semantically similar to a given URL using Exa
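As a rough sketch of how one of these content-processing tools might be invoked over MCP — the tool name and argument shape below are assumptions based on the Tavily Extract description above, so check the server's actual tool listing:

```json
{
  "tool": "tavily_extract",
  "arguments": {
    "urls": ["https://example.com/article"],
    "extract_depth": "advanced"
  }
}
```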
🔄 Enhancement Tools
- Kagi Enrichment API: Supplementary content from specialized indexes (Teclis, TinyGem)
- Jina AI Grounding: Real-time fact verification against web knowledge
Flexible API Key Requirements
MCP Omnisearch is designed to work with the API keys you have available. You don't need to have keys for all providers - the server will automatically detect which API keys are available and only enable those providers.
For example:
- If you only have a Tavily and Perplexity API key, only those providers will be available
- If you don't have a Kagi API key, Kagi-based services won't be available, but all other providers will work normally
- The server will log which providers are available based on the API keys you've configured
This flexibility makes it easy to get started with just one or two providers and add more as needed.
Configuration
This server requires configuration through your MCP client. Here are examples for different environments:
Cline Configuration
Add this to your Cline MCP settings:
```json
{
  "mcpServers": {
    "mcp-omnisearch": {
      "command": "node",
      "args": ["/path/to/mcp-omnisearch/dist/index.js"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-key",
        "PERPLEXITY_API_KEY": "your-perplexity-key",
        "KAGI_API_KEY": "your-kagi-key",
        "JINA_AI_API_KEY": "your-jina-key",
        "BRAVE_API_KEY": "your-brave-key",
        "GITHUB_API_KEY": "your-github-key",
        "EXA_API_KEY": "your-exa-key",
        "FIRECRAWL_API_KEY": "your-firecrawl-key",
        "FIRECRAWL_BASE_URL": "http://localhost:3002"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
Claude Desktop with WSL Configuration
For WSL environments, add this to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "mcp-omnisearch": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "TAVILY_API_KEY=key1 PERPLEXITY_API_KEY=key2 KAGI_API_KEY=key3 JINA_AI_API_KEY=key4 BRAVE_API_KEY=key5 GITHUB_API_KEY=key6 EXA_API_KEY=key7 FIRECRAWL_API_KEY=key8 FIRECRAWL_BASE_URL=http://localhost:3002 node /path/to/mcp-omnisearch/dist/index.js"
      ]
    }
  }
}
```
Environment Variables
The server uses API keys for each provider. You don't need keys for all providers - only the providers corresponding to your available API keys will be activated:
- TAVILY_API_KEY: For Tavily Search
- PERPLEXITY_API_KEY: For Perplexity AI
- KAGI_API_KEY: For Kagi services (FastGPT, Summarizer, Enrichment)
- JINA_AI_API_KEY: For Jina AI services (Reader, Grounding)
- BRAVE_API_KEY: For Brave Search
- GITHUB_API_KEY: For GitHub search services (Code, Repository, User search)
- EXA_API_KEY: For Exa AI services (Search, Answer, Contents, Similar)
- FIRECRAWL_API_KEY: For Firecrawl services (Scrape, Crawl, Map, Extract, Actions)
- FIRECRAWL_BASE_URL: For self-hosted Firecrawl instances (optional, defaults to the Firecrawl cloud service)
You can start with just one or two API keys and add more later as needed. The server will log which providers are available on startup.
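The detection behaviour can be pictured roughly like this — a simplified Python sketch; the real server is written in TypeScript and its internals may differ, so treat the mapping as illustrative:

```python
import os

# Illustrative env-var-to-provider mapping (an assumption; the real
# server's internal structure may differ).
PROVIDER_KEYS = {
    "TAVILY_API_KEY": "tavily",
    "PERPLEXITY_API_KEY": "perplexity",
    "KAGI_API_KEY": "kagi",
    "JINA_AI_API_KEY": "jina",
    "BRAVE_API_KEY": "brave",
    "GITHUB_API_KEY": "github",
    "EXA_API_KEY": "exa",
    "FIRECRAWL_API_KEY": "firecrawl",
}

def available_providers(env=os.environ):
    """Return the providers whose API key is set and non-empty."""
    return [name for var, name in PROVIDER_KEYS.items() if env.get(var)]

# Only providers with a non-empty key are enabled:
print(available_providers({"TAVILY_API_KEY": "abc", "KAGI_API_KEY": ""}))
# ['tavily']
```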
GitHub API Key Setup
To use GitHub search features, you'll need a GitHub personal access token with public repository access only for security:
1. Go to GitHub Settings: Navigate to GitHub Settings > Developer settings > Personal access tokens
2. Create a new token: Click "Generate new token" → "Generate new token (classic)"
3. Configure token settings:
   - Name: MCP Omnisearch - Public Search
   - Expiration: Choose your preferred expiration (90 days recommended)
   - Scopes: Leave all checkboxes UNCHECKED
     ⚠️ Important: Do not select any scopes. An empty-scope token can only access public repositories and user profiles, which is exactly what we want for search functionality.
4. Generate and copy: Click "Generate token" and copy the token immediately
5. Add to environment: Set GITHUB_API_KEY=your_token_here
Security Notes:
- This token configuration ensures no access to private repositories
- Only public code search, repository discovery, and user profiles are accessible
- Rate limits: 5,000 requests/hour for the general REST API, but only 10 requests/minute for code search specifically
- You can revoke the token anytime from GitHub settings if needed
Self-Hosted Firecrawl Configuration
If you're running a self-hosted instance of Firecrawl, you can
configure MCP Omnisearch to use it by setting the FIRECRAWL_BASE_URL
environment variable. This allows you to maintain complete control
over your data processing pipeline.
Self-hosted Firecrawl setup:
- Follow the Firecrawl self-hosting guide
- Set up your Firecrawl instance (by default it runs on http://localhost:3002)