Basic Memory

Official MCP server by basicmachines-co

A knowledge management system that lets LLMs build and query a persistent graph of interconnected notes stored as local Markdown files. Features semantic search and can visualize connections between concepts.


What it does

  • Write and update markdown notes with semantic content
  • Search notes by meaning using vector similarity and keywords
  • Build context from previous conversations to continue discussions
  • Create visual concept maps with Obsidian canvas
  • Track recent activity across knowledge projects
  • Read and manage file contents by path or permalink

Best for

  • Researchers building persistent knowledge bases
  • Writers organizing interconnected ideas and references
  • Knowledge workers maintaining long-term project memory

Highlights

  • Local-first storage in Markdown files
  • Semantic vector search with FastEmbed
  • Optional cloud sync across devices

About Basic Memory

Basic Memory is an official MCP server published by basicmachines-co that provides AI assistants with tools and capabilities via the Model Context Protocol. It is a knowledge management system that builds a persistent semantic graph in Markdown, locally and securely, and is categorized under AI/ML and Productivity. The server exposes 17 tools that AI clients can invoke during conversations and coding sessions.

How to install

You can install Basic Memory in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Basic Memory is released under the AGPL-3.0 license.

Tools (17; selection shown below)

delete_note

Delete a note by title or permalink

read_content

Read a file's raw content by path or permalink

build_context

Build context from a memory:// URI to continue conversations naturally. Use this to follow up on previous discussions or explore related topics.

Memory URL format:

  • Use paths like "folder/note" or "memory://folder/note"
  • Pattern matching: "folder/*" matches all notes in folder
  • Valid characters: letters, numbers, hyphens, underscores, forward slashes
  • Avoid: double slashes (//), angle brackets (<>), quotes, pipes (|)
  • Examples: "specs/search", "projects/basic-memory", "notes/*"

Timeframes support natural language like "2 days ago", "last week", "today", "3 months ago", or standard formats like "7d", "24h".

recent_activity

Get recent activity for a project or across all projects. Timeframes support natural language formats like "2 days ago", "last week", "yesterday", "today", "3 weeks ago", or standard formats like "7d".

search_notes

Search across all content in the knowledge base with advanced syntax support.
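As a rough sketch of how an MCP client can invoke these tools programmatically, here is an example using the official Python MCP SDK. The tool names come from the list above, but the argument names ("query", "url") are assumptions rather than confirmed tool schemas:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server the same way the Claude Desktop config does
    server = StdioServerParameters(command="uvx", args=["basic-memory", "mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect 17 tool names

            # Hypothetical arguments; check each tool's schema for real names
            hits = await session.call_tool("search_notes", {"query": "coffee"})
            ctx = await session.call_tool(
                "build_context", {"url": "memory://coffee-brewing-methods"}
            )

asyncio.run(main())
```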


🚀 Basic Memory Cloud is Live!

  • Cross-device and multi-platform support is here. Your knowledge graph now works on desktop, web, and mobile.
  • Cloud is optional. The local-first open-source workflow continues as always.
  • OSS discount: use code BMFOSS for 20% off for 3 months.

Sign up now → with a 7-day free trial.

Basic Memory

Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.

What's New in v0.19.0

  • Semantic Vector Search — find notes by meaning, not just keywords. Combines full-text and vector similarity for hybrid search with FastEmbed embeddings (see the sketch after this list).
  • Schema System — infer, validate, and diff the structure of your knowledge base with schema_infer, schema_validate, and schema_diff tools.
  • Per-Project Cloud Routing — route individual projects through the cloud while others stay local, using API key authentication (basic-memory project set-cloud).
  • FastMCP 3.0 — upgraded to FastMCP 3.0 with tool annotations for better client integration.
  • CLI Overhaul — JSON output mode (--json) for scripting, workspace-aware commands, and an htop-inspired project dashboard.
  • Smarter Editing — edit_note append/prepend auto-creates notes if they don't exist; write_note has an overwrite guard to prevent accidental data loss.
  • Richer Search Results — matched chunk text returned in search results for better context.

See the full CHANGELOG for details.
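To make the vector-search idea concrete, here is a minimal sketch of ranking notes by embedding similarity with FastEmbed. It only illustrates the concept; the model choice and sample texts are assumptions, and Basic Memory's actual hybrid ranking code may differ:

```python
import numpy as np
from fastembed import TextEmbedding  # the embedding library named above

model = TextEmbedding("BAAI/bge-small-en-v1.5")  # assumed model choice

notes = [
    "Pour over gives more clarity in flavor than French press",
    "Ethiopian beans have bright, fruity flavors",
]
note_vecs = list(model.embed(notes))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank notes by semantic similarity to a natural-language query
query_vec = next(iter(model.embed(["which beans taste like berries?"])))
for text, vec in sorted(zip(notes, note_vecs),
                        key=lambda p: -cosine(query_vec, p[1])):
    print(f"{cosine(query_vec, vec):.3f}  {text}")
```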

Pick up your conversation right where you left off

  • AI assistants can load context from local files in a new conversation
  • Notes are saved locally as Markdown files in real time
  • No project knowledge or special prompting required

Demo video: https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc

Quick Start

Install with uv (recommended):

```bash
uv tool install basic-memory
```

Configure Claude Desktop by adding this to your config (on macOS, edit ~/Library/Application Support/Claude/claude_desktop_config.json):

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}
```

Now in Claude Desktop, you can:

  • Write notes with "Create a note about coffee brewing methods"
  • Read notes with "What do I know about pour over coffee?"
  • Search with "Find information about Ethiopian beans"

You can view the shared context in the files under ~/basic-memory (the default directory location).
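Because everything is plain files, a few lines of Python are enough to inspect that shared context. A minimal sketch, assuming the default directory:

```python
from pathlib import Path

# Default Basic Memory project directory (~/basic-memory)
base = Path.home() / "basic-memory"

# Notes are ordinary Markdown files, so any tool can read them
for note in sorted(base.rglob("*.md")):
    print(note.relative_to(base))
```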

Why Basic Memory?

Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:

  • Chat histories capture conversations but aren't structured knowledge
  • RAG systems can query documents but don't let LLMs write back
  • Vector databases require complex setups and often live in the cloud
  • Knowledge graphs typically need specialized tools to maintain

Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:

  • Local-first: All knowledge stays in files you control
  • Bi-directional: Both you and the LLM read and write to the same files
  • Structured yet simple: Uses familiar Markdown with semantic patterns
  • Traversable knowledge graph: LLMs can follow links between topics
  • Standard formats: Works with existing editors like Obsidian
  • Lightweight infrastructure: Just local files indexed in a local SQLite database

With Basic Memory, you can:

  • Have conversations that build on previous knowledge
  • Create structured notes during natural conversations
  • Have conversations with LLMs that remember what you've discussed before
  • Navigate your knowledge graph semantically
  • Keep everything local and under your control
  • Use familiar tools like Obsidian to view and edit notes
  • Build a personal knowledge base that grows over time
  • Sync your knowledge to the cloud with bidirectional synchronization
  • Authenticate and manage cloud projects with subscription validation
  • Mount cloud storage for direct file access

How It Works in Practice

Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:

  1. Start by chatting normally:
I've been experimenting with different coffee brewing methods. Key things I've learned:

- Pour over gives more clarity in flavor than French press
- Water temperature is critical - around 205°F seems best
- Freshly ground beans make a huge difference

... continue conversation.

  2. Ask the LLM to help structure this knowledge:
"Let's write a note about coffee brewing methods."

LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):

```markdown
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---

# Coffee Brewing Methods

## Observations

- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor

## Relations

- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]
```

The note embeds semantic content and links to other topics via simple Markdown formatting.

  3. You see this file on your computer in real time in the current project directory (default: ~/basic-memory).
  • Real-time sync can be enabled by running basic-memory sync --watch
  4. In a chat with the LLM, you can reference a topic:
Look at `coffee-brewing-methods` for context about pour over coffee

The LLM can now build rich context from the knowledge graph. For example:

Following relation 'relates_to [[Coffee Bean Origins]]':
- Found information about Ethiopian Yirgacheffe
- Notes on Colombian beans' nutty profile
- Altitude effects on bean characteristics

Following relation 'requires [[Proper Grinding Technique]]':
- Burr vs. blade grinder comparisons
- Grind size recommendations for different methods
- Impact of consistent particle size on extraction

Each related document can lead to more context, building a rich semantic understanding of your knowledge base.

This creates a two-way flow where:

  • Humans write and edit Markdown files
  • LLMs read and write through the MCP protocol
  • Sync keeps everything consistent
  • All knowledge stays in local files.

Technical Implementation

Under the hood, Basic Memory:

  1. Stores everything in Markdown files
  2. Uses a SQLite database for searching and indexing
  3. Extracts semantic meaning from simple Markdown patterns (modeled in the sketch after this list)
    • Files become Entity objects
    • Each Entity can have Observations, or facts associated with it
    • Relations connect entities together to form the knowledge graph
  4. Maintains the local knowledge graph derived from the files
  5. Provides bidirectional synchronization between files and the knowledge graph
  6. Implements the Model Context Protocol (MCP) for AI integration
  7. Exposes tools that let AI assistants traverse and manipulate the knowledge graph
  8. Uses memory:// URLs to reference entities across tools and conversations
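As a sketch of that entity model under stated assumptions (the class and field names below are illustrative, not Basic Memory's actual internals):

```python
from dataclasses import dataclass, field

# Hypothetical mirror of the Entity/Observation/Relation model described
# above; the real implementation's classes and fields may differ.
@dataclass
class Observation:
    category: str                 # e.g. "method", "tip"
    content: str
    tags: list[str] = field(default_factory=list)

@dataclass
class Relation:
    relation_type: str            # e.g. "relates_to", "requires"
    target: str                   # title of another entity, as in [[WikiLink]]

@dataclass
class Entity:
    title: str
    permalink: str                # what memory:// URLs resolve to
    observations: list[Observation] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)
```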

The file format is just Markdown with some simple markup:

Each Markdown file has:

Frontmatter

```markdown
title: <Entity title>
type: <the type of entity> (e.g. note)
permalink: <a URI slug>
<optional metadata, such as tags>
```

Observations

Observations are facts about a topic. They are written as Markdown list items in a simple format that includes a category, optional tags marked with a "#" character, and an optional context in parentheses.

Observation Markdown format:

```markdown
- [category] content #tag (optional context)
```

Examples of observations:

```markdown
- [method] Pour over extracts more floral notes than French press
- [tip] Grind size should be medium-fine for pour over #brewing
- [preference] Ethiopian beans have bright, fruity flavors (especially from Yirgacheffe)
- [fact] Lighter roasts generally contain more caffeine than dark roasts
- [experiment] Tried 1:15 coffee-to-water ratio with good results
- [resource] James Hoffman's V60 technique on YouTube is excellent
- [question] Does water temperature affect extraction of different compounds differently?
- [note] My favorite local shop uses a 30-second bloom time
```
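As a rough sketch of how such lines map to structured data, here is a hypothetical parser written for illustration; Basic Memory's actual parsing may differ:

```python
import re

# Patterns matching the observation and relation formats shown in this README
OBSERVATION = re.compile(r"^- \[(?P<category>[^\]]+)\]\s*(?P<content>.+)$")
RELATION = re.compile(r"^- (?P<rel_type>\w+)\s+\[\[(?P<target>[^\]]+)\]\]")

line = "- [tip] Grind size should be medium-fine for pour over #brewing"
if m := OBSERVATION.match(line):
    tags = re.findall(r"#([\w-]+)", m["content"])
    print(m["category"], "->", tags)         # tip -> ['brewing']

line = "- relates_to [[Coffee Bean Origins]]"
if m := RELATION.match(line):
    print(m["rel_type"], "->", m["target"])  # relates_to -> Coffee Bean Origins
```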

Relations

Relations are links to other topics. They define how entities connect to one another, forming the edges of the knowledge graph.
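Based on the Relations examples shown earlier in this README, the format appears to be a list item with a relation type followed by a [[WikiLink]] target:

```markdown
- relation_type [[Target Entity]]
```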


README truncated. View full README on GitHub.

Related Skills

  • context-optimizer: Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and a hierarchical memory system with context archive. Logs optimization events to chat.
  • python-performance-optimization: Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottlenecks, or improving application performance.
  • langchain: Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
  • reverse-engineering-tools: Guide for reverse engineering tools and techniques used in game security research. Use this skill when working with debuggers, disassemblers, memory analysis tools, binary analysis, or decompilers for game security research.
  • claude-md-improver: Audit and improve CLAUDE.md files in repositories. Use when asked to check, audit, update, improve, or fix CLAUDE.md files. Scans for all CLAUDE.md files, evaluates quality against templates, outputs a quality report, then makes targeted updates. Also applies to "CLAUDE.md maintenance" or "project memory optimization".
  • memory: Query and manage project memory to understand past decisions, architectural choices, and coding patterns before making changes. Use this skill when starting new tasks or when you need context about existing code.