RAG Memory

ttommyth

Creates a persistent knowledge graph with vector search that stores documents, entities, and relationships for intelligent information retrieval. Combines traditional graph-based connections with semantic similarity search.

Local (stdio)

What it does

  • Store and chunk documents for processing
  • Create entities and relationships in knowledge graph
  • Perform hybrid search combining vector similarity with graph traversal
  • Generate semantic embeddings for documents and entities
  • Add observations to continuously enrich entity context
  • Extract potential entity terms from documents

Best for

  • AI agents needing persistent memory across sessions
  • Building knowledge bases from document collections
  • Researchers organizing and connecting information
  • RAG applications requiring contextual retrieval

  • Hybrid search combining vector and graph methods
  • Local SQLite storage with vector extensions
  • Persistent memory across sessions

About RAG Memory

RAG Memory is a community-built MCP server published by ttommyth that provides AI assistants with tools and capabilities via the Model Context Protocol. It enhances persistent memory by merging vector search over a local SQLite database with knowledge graph relationships. It is categorized under AI/ML. The server exposes 20 tools that AI clients can invoke during conversations and coding sessions.

How to install

You can install RAG Memory in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

RAG Memory is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Tools (20)

createEntities

<description>
Create multiple new entities in the knowledge graph with comprehensive metadata and observations. **Essential for building the foundational structure of your knowledge representation.** Use this tool to add new concepts, people, places, or any identifiable objects to your graph.
</description>

<importantNotes>
- (!important!) **Entities are the building blocks** of your knowledge graph - use descriptive names
- (!important!) EntityType helps categorize and filter entities (e.g., PERSON, CONCEPT, PLACE, TECHNOLOGY)
- (!important!) Observations provide context and evidence for the entity's existence or properties
- (!important!) **Avoid duplicate entities** - check if similar entities exist first using search_nodes
</importantNotes>

<whenToUseThisTool>
- When introducing new concepts, people, or objects into your knowledge base
- When processing documents and need to extract and formalize key entities
- When building domain-specific knowledge representations
- When creating structured data from unstructured text
- **Before creating relationships** - ensure both entities exist
- When migrating knowledge from other systems into the graph
</whenToUseThisTool>

<features>
- Batch creation of multiple entities in a single operation
- Automatic unique ID generation based on entity names
- Support for custom entity types for domain categorization
- Rich observation arrays for evidence and context
- Automatic deduplication (existing entities are ignored)
- Metadata support for extensible entity properties
</features>

<bestPractices>
- Use consistent naming conventions (e.g., "John Smith" not "john smith")
- Choose meaningful entityTypes that reflect your domain (PERSON, TECHNOLOGY, CONCEPT, etc.)
- Include rich observations that provide context and evidence
- Group related entity creation for better performance
- Use descriptive names that uniquely identify the entity
- Consider hierarchical naming for complex domains (e.g., "JavaScript.React.Hooks")
</bestPractices>

<parameters>
- entities: Array of entity objects, each containing:
  - name: Unique identifier/name for the entity (string, required)
  - entityType: Category/classification of the entity (string, required)
  - observations: Array of contextual information and evidence (string[], required)
</parameters>

<examples>
- Adding people: {"entities": [{"name": "Albert Einstein", "entityType": "PERSON", "observations": ["Physicist who developed relativity theory", "Nobel Prize winner in 1921"]}]}
- Adding concepts: {"entities": [{"name": "Machine Learning", "entityType": "CONCEPT", "observations": ["Subset of artificial intelligence", "Focuses on learning from data"]}]}
- Adding technologies: {"entities": [{"name": "React", "entityType": "TECHNOLOGY", "observations": ["JavaScript library for building UIs", "Developed by Facebook"]}]}
</examples>

createRelations

<description>
Create multiple relationships between entities in the knowledge graph to establish connections and semantic links. **Critical for building the interconnected structure that makes knowledge graphs powerful.** Relationships define how entities relate to each other, enabling graph traversal and inference.
</description>

<importantNotes>
- (!important!) **Both entities must exist** before creating relationships - entities are auto-created if missing
- (!important!) Relationship types should be consistent and meaningful (e.g., IS_A, HAS, USES, IMPLEMENTS)
- (!important!) Direction matters: "from" → "to" represents the relationship direction
- (!important!) **Avoid redundant relationships** - check existing connections first
</importantNotes>

<whenToUseThisTool>
- When establishing semantic connections between concepts
- After creating entities that should be connected
- When processing text that implies relationships
- When building domain-specific ontologies
- When migrating relational data into graph format
- **Before querying graph paths** - ensure proper connectivity
</whenToUseThisTool>

<features>
- Batch creation of multiple relationships in one operation
- Automatic entity creation if referenced entities don't exist
- Support for custom relationship types
- Bidirectional relationship awareness
- Confidence scoring and metadata support
- Deduplication of identical relationships
</features>

<bestPractices>
- Use consistent relationship types across your domain
- Choose clear, unambiguous relationship names (IS_A, PART_OF, IMPLEMENTS)
- Consider both directions when appropriate (if A USES B, does B DEPEND_ON A?)
- Group related relationship creation for better performance
- Use verb-like relationship types that read naturally
- Document relationship semantics for complex domains
</bestPractices>

<parameters>
- relations: Array of relationship objects, each containing:
  - from: Name of the source entity (string, required)
  - to: Name of the target entity (string, required)
  - relationType: Type/category of the relationship (string, required)
</parameters>

<examples>
- Inheritance: {"relations": [{"from": "Dog", "to": "Animal", "relationType": "IS_A"}]}
- Usage: {"relations": [{"from": "React", "to": "JavaScript", "relationType": "USES"}]}
- Composition: {"relations": [{"from": "Car", "to": "Engine", "relationType": "HAS"}]}
- Multiple: {"relations": [{"from": "Einstein", "to": "Relativity", "relationType": "DEVELOPED"}, {"from": "Relativity", "to": "Physics", "relationType": "PART_OF"}]}
</examples>

addObservations

<description>
Add new observations to existing entities to continuously enrich their context, evidence, and understanding. **Essential for keeping your knowledge graph current and comprehensive.** Observations provide the factual foundation that supports entity existence and properties.
</description>

<importantNotes>
- (!important!) **Entity must exist** - this tool only adds to existing entities
- (!important!) Only new observations are added - duplicates are automatically filtered
- (!important!) Observations are cumulative - they build the entity's knowledge base
- (!important!) **Be specific and factual** - observations should be verifiable statements
</importantNotes>

<whenToUseThisTool>
- When you discover new information about existing entities
- After processing additional documents that mention known entities
- When updating entity knowledge from new sources
- When refining and expanding entity descriptions
- **Before making knowledge-based decisions** - ensure entities have sufficient context
- When correcting or expanding incomplete entity information
</whenToUseThisTool>

<features>
- Batch addition of observations to multiple entities
- Automatic duplicate filtering - no redundant observations
- Supports rich textual observations with context
- Maintains observation history and chronology
- Integrates with document processing workflows
- Enables incremental knowledge building
</features>

<bestPractices>
- Keep observations factual and specific rather than general
- Include source context when possible ("According to paper X...")
- Use consistent terminology across observations
- Add complementary observations that provide different perspectives
- Include temporal information when relevant ("As of 2024...")
- Group related observations by topic or source
</bestPractices>

<parameters>
- observations: Array of observation addition objects, each containing:
  - entityName: Name of the existing entity to update (string, required)
  - contents: Array of new observation strings to add (string[], required)
</parameters>

<examples>
- Scientific updates: {"observations": [{"entityName": "Quantum Computing", "contents": ["IBM achieved quantum advantage in 2024", "Shows promise for cryptography applications"]}]}
- Person details: {"observations": [{"entityName": "Marie Curie", "contents": ["First woman to win Nobel Prize", "Won Nobel Prizes in two different sciences"]}]}
- Technology evolution: {"observations": [{"entityName": "React", "contents": ["React 18 introduced concurrent features", "Widely adopted for enterprise applications"]}]}
</examples>

hybridSearch

<description>
Perform sophisticated hybrid search that combines vector similarity with knowledge graph traversal for superior results. **The most powerful search tool in the system** - leverages both semantic similarity and structural relationships. Perfect for complex queries that benefit from both content matching and conceptual connections.
</description>

<importantNotes>
- (!important!) **Hybrid approach is more powerful** than pure vector or graph search alone
- (!important!) Graph enhancement finds related concepts even if not directly mentioned
- (!important!) Results include similarity scores, graph boost, and hybrid rankings
- (!important!) **Best results when knowledge graph is well-populated** with entities and relationships
</importantNotes>

<whenToUseThisTool>
- When you need comprehensive search across documents and knowledge
- For complex queries requiring conceptual understanding
- When exploring relationships between concepts
- **Before making decisions** - to gather all relevant information
- When researching topics that span multiple domains
- For discovery of implicit connections and patterns
</whenToUseThisTool>

<features>
- Vector similarity search using sentence transformers
- Knowledge graph traversal for conceptual enhancement
- Hybrid scoring combining multiple relevance signals
- Entity association highlighting
- Configurable result limits and graph usage
- Rich result metadata with multiple ranking scores
</features>

<bestPractices>
- Use natural language queries rather than keywords
- Enable graph enhancement for better conceptual coverage
- Start with broader queries, then narrow down based on results
- Review entity associations to understand why results were selected
- Use appropriate limits based on your analysis needs
- Combine with other tools for comprehensive knowledge exploration
</bestPractices>

<parameters>
- query: Natural language search query (string, required)
- limit: Maximum results to return, default 5 (number, optional)
- useGraph: Enable knowledge graph enhancement, default true (boolean, optional)
</parameters>

<examples>
- Conceptual search: {"query": "machine learning applications in healthcare", "limit": 10}
- Technical research: {"query": "React performance optimization techniques", "useGraph": true}
- Discovery mode: {"query": "Einstein's contributions to modern physics", "limit": 15}
- Quick lookup: {"query": "quantum computing advantages", "limit": 3, "useGraph": false}
</examples>

embedAllEntities

<description>
Generate semantic vector embeddings for all entities in the knowledge graph to enable semantic search. **Essential for upgrading your knowledge graph to use semantic vector search instead of pattern matching.** This tool creates embeddings from entity names, types, and observations for powerful semantic discovery.
</description>

<importantNotes>
- (!important!) **Processes all entities** in the knowledge graph at once
- (!important!) **Enables semantic search** - required for vector-based entity discovery
- (!important!) **Replaces pattern matching** with intelligent similarity search
- (!important!) **Automatic for new entities** - only needed once for existing entities
</importantNotes>

<whenToUseThisTool>
- **After importing existing entities** that don't have embeddings yet
- When upgrading from pattern-based to semantic search
- When entities have been created without automatic embedding generation
- After significant updates to entity observations that require re-embedding
- When setting up semantic search capabilities for the first time
</whenToUseThisTool>

<features>
- Batch processing of all entities in the knowledge graph
- Generates embeddings from entity names, types, and observations
- Creates searchable vector representations using sentence transformers
- Enables semantic similarity search across all entities
- Automatic handling of embedding generation and storage
- Progress reporting for large entity collections
</features>

<bestPractices>
- Run once after importing entities from other systems
- Use when transitioning from pattern-based to semantic search
- Monitor progress output for large knowledge graphs
- Ensure embedding model is properly initialized before running
- Consider running after major entity data updates
- Use as a one-time setup tool for existing knowledge graphs
</bestPractices>

<parameters>
- No parameters required - processes all entities automatically
</parameters>

<examples>
- Initial setup: {} (no parameters needed)
- After import: {} (processes all existing entities)
- Post-migration: {} (enables semantic search for imported data)
</examples>

rag-memory-mcp


An advanced MCP server for RAG-enabled memory through a knowledge graph with vector search capabilities. This server extends the basic memory concepts with semantic search, document processing, and hybrid retrieval for more intelligent memory management.

Inspired by: Knowledge Graph Memory Server from the Model Context Protocol project.

Note: This server is designed to run locally alongside MCP clients (e.g., Claude Desktop, VS Code) and requires local file system access for database storage.

✨ Key Features

  • 🧠 Knowledge Graph Memory: Persistent entities, relationships, and observations
  • 🔍 Vector Search: Semantic similarity search using sentence transformers
  • 📄 Document Processing: RAG-enabled document chunking and embedding
  • 🔗 Hybrid Search: Combines vector similarity with graph traversal
  • ⚡ SQLite Backend: Fast local storage with sqlite-vec for vector operations
  • 🎯 Entity Extraction: Automatic term extraction from documents
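At the core of the vector search feature is a similarity measure between embedding vectors, most commonly cosine similarity. The server delegates this to sqlite-vec, but the underlying computation can be sketched as follows (an illustration, not the server's actual code):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
// Semantically similar texts produce embeddings with similarity near 1.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A vector store like sqlite-vec evaluates this (or a related distance) against every stored embedding to return the nearest neighbors for a query.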

Tools

This server provides comprehensive memory management through the Model Context Protocol (MCP):

📚 Document Management

  • storeDocument: Store documents with metadata for processing
  • chunkDocument: Create text chunks with configurable parameters
  • embedChunks: Generate vector embeddings for semantic search
  • extractTerms: Extract potential entity terms from documents
  • linkEntitiesToDocument: Create explicit entity-document associations
  • deleteDocuments: Remove documents and associated data
  • listDocuments: View all stored documents with metadata

🧠 Knowledge Graph

  • createEntities: Create new entities with observations and types
  • createRelations: Establish relationships between entities
  • addObservations: Add contextual information to existing entities
  • deleteEntities: Remove entities and their relationships
  • deleteRelations: Remove specific relationships
  • deleteObservations: Remove specific observations from entities

🔍 Search & Retrieval

  • hybridSearch: Advanced search combining vector similarity and graph traversal
  • searchNodes: Find entities by name, type, or observation content
  • openNodes: Retrieve specific entities and their relationships
  • readGraph: Get complete knowledge graph structure

📊 Analytics

  • getKnowledgeGraphStats: Comprehensive statistics about the knowledge base

Usage Scenarios

This server is ideal for scenarios requiring intelligent memory and document understanding:

  • Research and Documentation: Store, process, and intelligently retrieve research papers
  • Knowledge Base Construction: Build interconnected knowledge from documents
  • Conversational Memory: Remember context across chat sessions with semantic understanding
  • Content Analysis: Extract and relate concepts from large document collections
  • Intelligent Assistance: Provide contextually aware responses based on stored knowledge

Client Configuration

This section explains how to configure MCP clients to use the rag-memory-mcp server.

Usage with Claude Desktop / Cursor

Add the following configuration to your claude_desktop_config.json (Claude Desktop) or mcp.json (Cursor):

{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"]
    }
  }
}

With specific version:

{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp@1.0.0"]
    }
  }
}

With custom database path:

{
  "mcpServers": {
    "rag-memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "/path/to/custom/memory.db"
      }
    }
  }
}

Usage with VS Code

Add the following configuration to your User Settings (JSON) file or .vscode/mcp.json:

{
  "mcp": {
    "servers": {
      "rag-memory-mcp": {
        "command": "npx",
        "args": ["-y", "rag-memory-mcp"]
      }
    }
  }
}

Core Concepts

Entities

Entities are the primary nodes in the knowledge graph. Each entity has:

  • A unique name (identifier)
  • An entity type (e.g., "PERSON", "CONCEPT", "TECHNOLOGY")
  • A list of observations (contextual information)

Example:

{
  "name": "Machine Learning",
  "entityType": "CONCEPT",
  "observations": [
    "Subset of artificial intelligence",
    "Focuses on learning from data",
    "Used in recommendation systems"
  ]
}
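The entity shape can be captured as a TypeScript interface. This is inferred from the JSON example above; the server's internal types may differ:

```typescript
// Hypothetical entity type, based on the documented JSON shape.
interface Entity {
  name: string;            // unique identifier
  entityType: string;      // e.g. "PERSON", "CONCEPT", "TECHNOLOGY"
  observations: string[];  // atomic contextual facts
}

const ml: Entity = {
  name: "Machine Learning",
  entityType: "CONCEPT",
  observations: [
    "Subset of artificial intelligence",
    "Focuses on learning from data",
  ],
};
```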

Relations

Relations define directed connections between entities, describing how they interact:

Example:

{
  "from": "React",
  "to": "JavaScript",
  "relationType": "BUILT_WITH"
}

Observations

Observations are discrete pieces of information about entities:

  • Stored as strings
  • Attached to specific entities
  • Can be added or removed independently
  • Should be atomic (one fact per observation)
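The addObservations tool filters out duplicates automatically. A minimal sketch of that behavior (an assumption for illustration, not the server's actual implementation):

```typescript
// Append only observations that are not already attached to the entity.
function mergeObservations(existing: string[], incoming: string[]): string[] {
  const seen = new Set(existing);
  const added = incoming.filter((obs) => !seen.has(obs));
  return [...existing, ...added];
}

// "a" is already present, so only "b" is appended.
const merged = mergeObservations(["a"], ["a", "b"]);
```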

Documents & Vector Search

Documents are processed through:

  1. Storage: Raw text with metadata
  2. Chunking: Split into manageable pieces
  3. Embedding: Convert to vector representations
  4. Linking: Associate with relevant entities
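The chunking step above can be sketched as a sliding window over the raw text. The server's chunker is configurable and likely token-aware; this fixed-size character window with overlap is only an illustration:

```typescript
// Split text into overlapping windows so that context at chunk boundaries
// is not lost. `size` and `overlap` are illustrative defaults.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
```

Each chunk is then embedded individually, so a long document contributes many searchable vectors rather than one diluted average.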

This enables hybrid search that combines:

  • Vector similarity (semantic matching)
  • Graph traversal (conceptual relationships)
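One plausible way to combine the two signals is a weighted sum of the vector similarity and a graph-derived boost. The server's actual ranking formula is not documented here; this sketch just illustrates the idea:

```typescript
// Hypothetical hybrid ranking: blend semantic similarity with a bonus
// for chunks linked to entities related to the query.
interface ScoredChunk {
  id: string;
  vectorScore: number; // semantic similarity, e.g. in [0, 1]
  graphBoost: number;  // bonus from graph traversal, e.g. in [0, 1]
}

function hybridScore(chunk: ScoredChunk, graphWeight = 0.3): number {
  return (1 - graphWeight) * chunk.vectorScore + graphWeight * chunk.graphBoost;
}

const score = hybridScore({ id: "c1", vectorScore: 1, graphBoost: 0 });
```

With this weighting, a chunk with no graph connections can still rank highly on pure similarity, while graph-linked chunks get a conceptual boost.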

Environment Variables

  • MEMORY_DB_PATH: Path to the SQLite database file (default: memory.db in the server directory)
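Based on the documented default, the path resolution presumably looks like this (an assumption, not the server's actual code):

```typescript
// The env variable wins; otherwise fall back to the documented default.
function resolveDbPath(env: Record<string, string | undefined>): string {
  return env.MEMORY_DB_PATH ?? "memory.db";
}
```

In the server this would be called with `process.env`, as shown in the "custom database path" configuration above.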

Development Setup

This section is for developers looking to modify or contribute to the server.

Prerequisites

  • Node.js: Check package.json for version compatibility
  • npm: Used for package management

Installation (Developers)

  1. Clone the repository:

     git clone https://github.com/ttommyth/rag-memory-mcp.git
     cd rag-memory-mcp

  2. Install dependencies:

     npm install

Building

npm run build

Running (Development)

npm run watch  # For development with auto-rebuild

Development Commands

  • Build: npm run build
  • Watch: npm run watch
  • Prepare: npm run prepare

Usage Example

Here's a typical workflow for building and querying a knowledge base:

// 1. Store a document
await storeDocument({
  id: "ml_intro",
  content: "Machine learning is a subset of AI...",
  metadata: { type: "educational", topic: "ML" }
});

// 2. Process the document
await chunkDocument({ documentId: "ml_intro" });
await embedChunks({ documentId: "ml_intro" });

// 3. Extract and create entities
const terms = await extractTerms({ documentId: "ml_intro" });
await createEntities({
  entities: [
    {
      name: "Machine Learning",
      entityType: "CONCEPT",
      observations: ["Subset of artificial intelligence", "Learns from data"]
    }
  ]
});

// 4. Search with hybrid approach
const results = await hybridSearch({
  query: "artificial intelligence applications",
  limit: 10,
  useGraph: true
});

System Prompt Suggestions

For optimal memory utilization, consider using this system prompt:

You have access to a RAG-enabled memory system with knowledge graph capabilities. Follow these guidelines:

1. **Information Storage**:
   - Store important documents using the document management tools
   - Create entities for people, concepts, organizations, and technologies
   - Build relationships between related concepts

2. **Information Retrieval**:
   - Use hybrid search for comprehensive information retrieval
   - Leverage both semantic similarity and graph relationships
   - Search entities before creating duplicates

3. **Memory Maintenance**:
   - Add observations to enrich entity context
   - Link documents to relevant entities for better discoverability
   - Use statistics to monitor knowledge base growth

4. **Processing Workflow**:
   - Store → Chunk → Embed → Extract → Link
   - Always process documents completely for best search results

Contributing

Contributions are welcome! Please follow standard development practices and ensure all tests pass before submitting pull requests.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Built with: TypeScript, SQLite, sqlite-vec, Hugging Face Transformers, Model Context Protocol SDK

Related Skills

langchain

Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.

memory-sync

Scrape and analyze OpenClaw JSONL session logs to reconstruct and backfill agent memory files. Use when: (1) Memory appears incomplete after model switches, (2) Verifying memory coverage, (3) Reconstructing lost memory, (4) Automated daily memory sync via cron/heartbeat. Supports simple extraction and LLM-based narrative summaries with automatic secret sanitization.

agentdb-memory-patterns

"Implement persistent memory patterns for AI agents using AgentDB. Includes session memory, long-term storage, pattern learning, and context management. Use when building stateful agents, chat systems, or intelligent assistants."

agent-memory-systems

Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector stores), and the cognitive architectures that organize them. Key insight: Memory isn't just storage - it's retrieval. A million stored facts mean nothing if you can't find the right one. Chunking, embedding, and retrieval strategies determine whether your agent remembers or forgets. The field is fragmented.

slack-memory-cleanup

Memory cleanup and organization skill for AI employees. Provides guidelines for detecting duplicates, fixing misclassified files, and removing stale information from memory storage.

agent-memory-improved

Run a local Agent Memory Service for persistent self-improvement with proper Ed25519 cryptography. Fixed signature implementation for reliable memory storage and retrieval.
