Local RAG


shinpr

Runs semantic document search locally on your machine using vector embeddings and keyword matching, with no external API calls or cloud dependencies.



What it does

  • Search documents using semantic similarity
  • Boost exact keyword matches for technical terms
  • Chunk documents by meaning rather than character count
  • Filter results by relevance gaps
  • Process documents entirely offline

Best for

  • Developers searching codebases and documentation
  • Privacy-conscious users avoiding cloud APIs
  • Offline development environments
  • Technical documentation analysis

Highlights: no API keys needed, fully offline after initial setup, zero-friction npx installation.

About Local RAG

Local RAG is a community-built MCP server published by shinpr that provides AI assistants with tools and capabilities via the Model Context Protocol. It enables semantic document search using retrieval-augmented generation (RAG) without external API calls, and is categorized under AI/ML and developer tools.

How to install

You can install Local RAG in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Local RAG is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

MCP Local RAG


Local RAG for developers using MCP. Semantic search with keyword boost for exact technical terms — fully private, zero setup.

Features

  • Semantic search with keyword boost: vector search runs first, then keyword matching boosts exact matches. Terms like useEffect, error codes, and class names rank higher instead of being only semantically approximated.

  • Smart semantic chunking: chunks documents by meaning, not character count. Embedding similarity finds natural topic boundaries, keeping related content together and splitting where topics change.

  • Quality-first result filtering: groups results by relevance gaps instead of arbitrary top-K cutoffs. You get fewer but more trustworthy chunks.

  • Runs entirely locally: no API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.

  • Zero-friction setup: one npx command. No Docker, no Python, no servers to manage. Designed for Cursor, Codex, and Claude Code via MCP.

Quick Start

Set BASE_DIR to the folder you want to search. Documents must live under it.

Add the MCP server to your AI coding tool:

For Cursor — Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}

For Codex — Add to ~/.codex/config.toml:

[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"

For Claude Code — Run this command:

claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag

Restart your tool, then start using it:

You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
          The flow is described in section 3.2...

That's it. No installation, no Docker, no complex setup.

Why This Exists

You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.

Privacy. Your documents might contain sensitive data. This runs entirely locally.

Cost. External embedding APIs charge per use. This is free after the initial model download.

Offline. Works without internet after setup.

Code search. Pure semantic search misses exact terms like useEffect or ERR_CONNECTION_REFUSED. Keyword boost catches both meaning and exact matches.

Usage

The server provides six MCP tools: ingest_file, ingest_data, query_documents, list_files, delete_file, and status.

Ingesting Documents

"Ingest the document at /Users/me/docs/api-spec.pdf"

Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.

Re-ingesting the same file replaces the old version automatically.
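As a rough sketch of this replace-on-re-ingest behavior (the in-memory store below is a stand-in for illustration, not the server's actual LanceDB code), chunks can be keyed by source file and dropped before the new version is inserted:

```typescript
// Illustrative sketch: chunks are keyed by source file, so ingesting
// a file again first removes its old chunks, then stores the new ones.
type Chunk = { source: string; text: string };

class ChunkStore {
  private chunks: Chunk[] = [];

  // Ingesting a source replaces any chunks previously stored for it.
  ingest(source: string, texts: string[]): void {
    this.chunks = this.chunks.filter((c) => c.source !== source);
    for (const text of texts) this.chunks.push({ source, text });
  }

  count(source: string): number {
    return this.chunks.filter((c) => c.source === source).length;
  }
}

const store = new ChunkStore();
store.ingest("api-spec.pdf", ["v1 chunk a", "v1 chunk b", "v1 chunk c"]);
store.ingest("api-spec.pdf", ["v2 chunk a", "v2 chunk b"]); // replaces v1
console.log(store.count("api-spec.pdf")); // 2
```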

Ingesting HTML Content

Use ingest_data to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):

"Fetch https://example.com/docs and ingest the HTML"

The server extracts main content using Readability (removes navigation, ads, etc.), converts to Markdown, and indexes it. Perfect for:

  • Web documentation
  • HTML retrieved by the AI assistant
  • Clipboard content

HTML is automatically cleaned—you get the article content, not the boilerplate.

Note: The RAG server itself doesn't fetch web content—your AI assistant retrieves it and passes the HTML to ingest_data. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.

Searching Documents

"What does the API documentation say about authentication?"
"Find information about rate limiting"
"Search for error handling best practices"

Search uses semantic similarity with keyword boost. This means useEffect finds documents containing that exact term, not just semantically similar React concepts.

Results include text content, source file, document title, and relevance score. The document title provides context for each chunk, helping identify which document a result belongs to. Adjust result count with limit (1-20, default 10).
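For illustration, a single result entry might look like the following TypeScript shape; the field names here are assumptions chosen for clarity, not the server's documented schema:

```typescript
// Hypothetical shape of one query_documents result entry.
// Field names are illustrative assumptions, not a documented schema.
interface SearchResult {
  text: string;   // chunk content
  source: string; // originating file
  title: string;  // document title, giving context for the chunk
  score: number;  // relevance score
}

const example: SearchResult = {
  text: "Authentication uses OAuth 2.0 with JWT tokens...",
  source: "api-spec.pdf",
  title: "API Specification",
  score: 0.23,
};
console.log(example.title); // "API Specification"
```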

Managing Files

"List all files in BASE_DIR and their ingested status"   # See what's indexed
"Delete old-spec.pdf from RAG"     # Remove a file
"Show RAG server status"           # Check system health

Search Tuning

Adjust these for your use case:

  • RAG_HYBRID_WEIGHT (default: 0.6): keyword boost factor. 0 = semantic only; higher = stronger keyword boost.
  • RAG_GROUPING (not set by default): similar keeps only the top group; related keeps the top 2 groups.
  • RAG_MAX_DISTANCE (not set by default): filters out low-relevance results (e.g., 0.5).
  • RAG_MAX_FILES (not set by default): limits results to the top N files (e.g., 1 for the single best file).

Code-focused tuning

For codebases and API specs, increase keyword boost so exact identifiers (useEffect, ERR_*, class names) dominate ranking:

"env": {
  "RAG_HYBRID_WEIGHT": "0.7",
  "RAG_GROUPING": "similar"
}

  • 0.7: balanced semantic + keyword
  • 1.0: aggressive; exact matches strongly rerank results

Keyword boost is applied after semantic filtering, so it improves precision without surfacing unrelated matches.
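A minimal sketch of that ordering (the 0.1 scaling constant and all numbers below are illustrative assumptions, not the server's actual formula): candidates must pass the semantic distance filter first, and exact keyword hits then lower their effective distance so they rank higher:

```typescript
// Sketch of "boost after semantic filtering": candidates are selected by
// semantic distance alone, then reranked by exact keyword hits.
// Weights and thresholds here are illustrative, mirroring RAG_HYBRID_WEIGHT.
type Candidate = { text: string; distance: number };

function rerank(
  candidates: Candidate[],
  queryTerms: string[],
  hybridWeight: number, // e.g. 0.6; 0 disables the boost
  maxDistance: number   // semantic filter applied before any boost
): Candidate[] {
  return candidates
    .filter((c) => c.distance <= maxDistance) // semantic filter first
    .map((c) => {
      const hits = queryTerms.filter((t) => c.text.includes(t)).length;
      // Exact matches reduce effective distance, improving rank.
      return { ...c, distance: c.distance - hybridWeight * hits * 0.1 };
    })
    .sort((a, b) => a.distance - b.distance);
}

const ranked = rerank(
  [
    { text: "React effects overview", distance: 0.3 },
    { text: "useEffect cleanup rules", distance: 0.35 },
    { text: "unrelated CSS tricks", distance: 0.9 },
  ],
  ["useEffect"],
  0.6,
  0.5
);
console.log(ranked[0].text); // "useEffect cleanup rules"
```

Because the filter runs first, an unrelated chunk that happens to contain the keyword never enters the candidate set, which is why the boost improves precision rather than surfacing noise.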

How It Works

TL;DR:

  • Documents are chunked by semantic similarity, not fixed character counts
  • Each chunk is embedded locally using Transformers.js
  • Search uses semantic similarity with keyword boost for exact matches
  • Results are filtered based on relevance gaps, not raw scores

Details

When you ingest a document, the parser extracts text based on file type (PDF via mupdf, DOCX via mammoth, text files directly).

The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
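A toy sketch of that boundary detection, using hand-made stand-in vectors instead of real embeddings (the 0.5 threshold is an assumption for illustration; the server computes actual embeddings with Transformers.js):

```typescript
// Toy sketch: compare adjacent sentence embeddings and start a new chunk
// wherever cosine similarity drops below a threshold (a topic shift).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function chunkByBoundaries(
  sentences: string[],
  embeddings: number[][],
  threshold = 0.5
): string[][] {
  const chunks: string[][] = [[sentences[0]]];
  for (let i = 1; i < sentences.length; i++) {
    if (cosine(embeddings[i - 1], embeddings[i]) < threshold) {
      chunks.push([]); // similarity dropped: topic boundary
    }
    chunks[chunks.length - 1].push(sentences[i]);
  }
  return chunks;
}

// Two sentences about auth stay together; the rate-limiting one splits off.
const chunks = chunkByBoundaries(
  ["OAuth flow.", "JWT tokens.", "Rate limits."],
  [[1, 0.9, 0], [0.9, 1, 0], [0, 0.1, 1]]
);
console.log(chunks.length); // 2
```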

Each chunk goes through a Transformers.js embedding model (default: all-MiniLM-L6-v2, configurable via MODEL_NAME), converting text into vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.

When you search:

  1. Your query becomes a vector using the same model
  2. Semantic (vector) search finds the most relevant chunks
  3. Quality filters apply (distance threshold, grouping)
  4. Keyword matches boost rankings for exact term matching

The keyword boost ensures exact terms like useEffect or error codes rank higher when they match.
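The gap-based grouping behind RAG_GROUPING can be sketched like this (the 0.15 gap value is an assumption for illustration, not the server's actual constant):

```typescript
// Sketch of relevance-gap grouping: instead of a fixed top-K, sorted
// distances are split into groups wherever the jump between neighbors
// exceeds a gap threshold. "similar" keeps the top group; "related"
// keeps the top two.
function groupByGaps(distances: number[], gap = 0.15): number[][] {
  const groups: number[][] = [[distances[0]]];
  for (let i = 1; i < distances.length; i++) {
    if (distances[i] - distances[i - 1] > gap) groups.push([]);
    groups[groups.length - 1].push(distances[i]);
  }
  return groups;
}

// A tight cluster, then a gap, then stragglers.
const groups = groupByGaps([0.21, 0.24, 0.26, 0.55, 0.58, 0.95]);
console.log(groups.length); // 3
console.log(groups[0]);     // [0.21, 0.24, 0.26] <- the "similar" group
```

This is why the server returns fewer but more trustworthy chunks: the cutoff adapts to the shape of the score distribution rather than a fixed K.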

Agent Skills

Agent Skills provide optimized prompts that help AI assistants use RAG tools more effectively. Install skills for better query formulation, result interpretation, and ingestion workflows:

# Claude Code (project-level)
npx mcp-local-rag skills install --claude-code

# Claude Code (user-level)
npx mcp-local-rag skills install --claude-code --global

# Codex
npx mcp-local-rag skills install --codex

Skills include:

  • Query optimization: Better search query formulation
  • Result interpretation: Score thresholds and filtering guidelines
  • HTML ingestion: Format selection and source naming

Ensuring Skill Activation

Skills are loaded automatically in most cases—AI assistants scan skill metadata and load relevant instructions when needed. For consistent behavior:

Option 1: Explicit request (natural language) Before RAG operations, request in natural language:

  • "Use the mcp-local-rag skill for this search"
  • "Apply RAG best practices from skills"

Option 2: Add to agent instruction file Add to your AGENTS.md, CLAUDE.md, or other agent instruction file:

When using query_documents, ingest_file, or ingest_data tools,
apply the mcp-local-rag skill for optimal query formulation and result interpretation.

Configuration

Environment Variables

  • BASE_DIR (default: current directory): document root directory and security boundary.
  • DB_PATH (default: ./lancedb/): vector database location.
  • CACHE_DIR (default: ./models/): model cache directory.
  • MODEL_NAME (default: Xenova/all-MiniLM-L6-v2): Hugging Face model ID.
  • MAX_FILE_SIZE (default: 104857600, i.e. 100 MB): maximum file size in bytes.

Model choice tips:

  • Multilingual docs → e.g., onnx-community/embeddinggemma-300m-ONNX (100+ languages)
  • Scientific papers → e.g., sentence-transformers/allenai-specter (citation-aware)
  • Code repositories → default often suffices; keyword boost matters more



Related Skills

ai-sdk

Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: "AI SDK", "Vercel AI SDK", "generateText", "streamText", "add AI to my app", "build an agent", "tool calling", "structured output", "useChat".

ccxt-typescript

CCXT cryptocurrency exchange library for TypeScript and JavaScript developers (Node.js and browser). Covers both REST API (standard) and WebSocket API (real-time). Helps install CCXT, connect to exchanges, fetch market data, place orders, stream live tickers/orderbooks, handle authentication, and manage errors. Use when working with crypto exchanges in TypeScript/JavaScript projects, trading bots, arbitrage systems, or portfolio management tools. Includes both REST and WebSocket examples.

android-kotlin-development

Develop native Android apps with Kotlin. Covers MVVM with Jetpack, Compose for modern UI, Retrofit for API calls, Room for local storage, and navigation architecture.

supabase-developer

Build full-stack applications with Supabase (PostgreSQL, Auth, Storage, Real-time, Edge Functions). Use when implementing authentication, database design with RLS, file storage, real-time features, or serverless functions.

mgrep

A semantic grep-like search tool for your local files. It is substantially better than the built-in search tools and should always be used instead of anything else.

ui-design-system

UI design system toolkit for Senior UI Designer including design token generation, component documentation, responsive design calculations, and developer handoff tools. Use for creating design systems, maintaining visual consistency, and facilitating design-dev collaboration.
