# tiered-memory
Three-tier memory system (hot/warm/cold) for OpenClaw agents. Replaces growing MEMORY.md with fixed-size 5KB hot memory, 50KB scored warm tier, and unlimited Turso cold archive. Use for memory management, consolidation, and intelligent retrieval.
## Install

```shell
mkdir -p .claude/skills/tiered-memory && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6677" && unzip -o skill.zip -d .claude/skills/tiered-memory && rm skill.zip
```

Installs to `.claude/skills/tiered-memory`.
## About this skill

### Tiered Memory System v2.2.0

> *A mind that remembers everything is as useless as one that remembers nothing. The art is knowing what to keep.* 🧠

EvoClaw-compatible three-tier memory system inspired by human cognition and PageIndex tree retrieval.
### What's New in v2.2.0

#### 🔄 Automatic Daily Note Ingestion

- Consolidation (`daily`/`monthly`/`full` modes) now auto-runs `ingest-daily`
- Bridges `memory/YYYY-MM-DD.md` files → tiered memory system
- No more manual ingestion required — facts flow automatically
- Fixes the "two disconnected data paths" problem
### What's New in v2.1.0

#### 🎯 Structured Metadata Extraction

- Automatic extraction of URLs, shell commands, and file paths from facts
- Preserved during distillation and consolidation
- Searchable by URL fragment
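Rule-based metadata extraction of this kind can be sketched with a few regexes. The function name, patterns, and the list of recognized command binaries below are illustrative assumptions, not the skill's actual implementation:

```python
import re

def extract_metadata(text: str) -> dict:
    """Pull URLs, shell commands, and file paths out of a fact string.

    Illustrative heuristics only; the real skill may use different patterns.
    """
    urls = re.findall(r"https?://[^\s\"')]+", text)
    # Treat backtick-quoted snippets starting with a known binary as commands.
    commands = [c for c in re.findall(r"`([^`]+)`", text)
                if c.split()[0] in {"curl", "git", "npm", "python", "mkdir"}]
    # Unix-style paths: at least two slash-separated segments.
    paths = re.findall(r"(?<![\w/])(?:~|\.)?/[\w.\-]+(?:/[\w.\-]+)+", text)
    return {"urls": urls, "commands": commands, "paths": paths}

fact = "Deployed via `git push` to /srv/app/releases, docs at https://example.com/guide"
meta = extract_metadata(fact)
# meta["urls"] → ["https://example.com/guide"]
```

The path lookbehind deliberately rejects the `//` inside URLs, so a URL is never double-counted as a file path.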
#### ✅ Memory Completeness Validation

- Check daily notes for missing URLs, commands, and next steps
- Proactive warnings for incomplete information
- Actionable suggestions for improvement

#### 🔍 Enhanced Search

- Search facts by URL fragment
- Get all stored URLs from warm memory
- Metadata-aware fact storage

#### 🛡️ URL Preservation

- URLs explicitly preserved during LLM distillation
- Fallback metadata extraction if the LLM misses them
- Command-line support for adding metadata manually
### Architecture

```text
┌─────────────────────────────────────────────────────┐
│               AGENT CONTEXT (~8-15KB)               │
│                                                     │
│  ┌──────────┐  ┌────────────────────────────────┐   │
│  │ Tree     │  │ Retrieved Memory Nodes         │   │
│  │ Index    │  │ (on-demand, 1-3KB)             │   │
│  │ (~2KB)   │  │                                │   │
│  │          │  │ Fetched per conversation       │   │
│  │ Always   │  │ based on tree reasoning        │   │
│  │ loaded   │  │                                │   │
│  └────┬─────┘  └────────────────────────────────┘   │
│       │                                             │
└───────┼─────────────────────────────────────────────┘
        │
        │ LLM-powered tree search
        │
┌───────▼─────────────────────────────────────────────┐
│                    MEMORY TIERS                     │
│                                                     │
│  🔴 HOT (5KB)      🟡 WARM (50KB)    🟢 COLD (∞)    │
│                                                     │
│  Core memory        Scored facts      Full archive  │
│  - Identity         - 30-day          - Turso DB    │
│  - Owner profile    - Decaying        - Queryable   │
│  - Active context   - On-device       - 10-year     │
│  - Lessons (20 max)                                 │
│                                                     │
│  Always in          Retrieved via     Retrieved via │
│  context            tree search       tree search   │
└─────────────────────────────────────────────────────┘
```
### Design Principles

#### From Human Memory

- Consolidation — Short-term → long-term happens during consolidation cycles
- Relevance Decay — Unused memories fade; accessed memories strengthen
- Strategic Forgetting — Not remembering everything is a feature
- Hierarchical Organization — Navigate categories, not scan linearly

#### From PageIndex

- Vectorless Retrieval — LLM reasoning instead of embedding similarity
- Tree-Structured Index — O(log n) navigation, not O(n) scan
- Explainable Results — Every retrieval traces a path through categories
- Reasoning-Based Search — "Why relevant?" not "how similar?"

#### Cloud-First (EvoClaw)

- Device is replaceable — Soul lives in the cloud (Turso)
- Critical sync — Hot state + tree sync after every conversation
- Disaster recovery — Full restore in <2 minutes
- Multi-device — Same agent across phone/desktop/embedded
### Memory Tiers

#### 🔴 Hot Memory (5KB max)

**Purpose:** Core identity and active context, always in the agent's context window.

**Structure:**
```json
{
  "identity": {
    "agent_name": "Agent",
    "owner_name": "User",
    "owner_preferred_name": "User",
    "relationship_start": "2026-01-15",
    "trust_level": 0.95
  },
  "owner_profile": {
    "personality": "technical, direct communication",
    "family": ["Sarah (wife)", "Luna (daughter, 3yo)"],
    "topics_loved": ["AI architecture", "blockchain", "system design"],
    "topics_avoid": ["small talk about weather"],
    "timezone": "Australia/Sydney",
    "work_hours": "9am-6pm"
  },
  "active_context": {
    "projects": [
      {
        "name": "EvoClaw",
        "description": "Self-evolving agent framework",
        "status": "Active - BSC integration for hackathon"
      }
    ],
    "events": [
      {"text": "Hackathon deadline Feb 15", "timestamp": 1707350400}
    ],
    "tasks": [
      {"text": "Deploy to BSC testnet", "status": "pending", "timestamp": 1707350400}
    ]
  },
  "critical_lessons": [
    {
      "text": "Always test on testnet before mainnet",
      "category": "blockchain",
      "importance": 0.9,
      "timestamp": 1707350400
    }
  ]
}
```
**Auto-pruning:**

- Lessons: Max 20, removes lowest-importance when full
- Events: Keeps last 10 only
- Tasks: Max 10 pending
- Total size: Hard limit at 5KB, progressively prunes if exceeded

**Generates:** `MEMORY.md` — auto-rebuilt from structured hot state
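The auto-pruning rules above can be sketched as follows. The function name and the exact field names are assumptions inferred from the structure shown, not the skill's real code:

```python
import json

LESSON_CAP, EVENT_CAP, TASK_CAP, SIZE_CAP = 20, 10, 10, 5 * 1024

def prune_hot(hot: dict) -> dict:
    """Apply the documented caps to a hot-state dict (illustrative sketch)."""
    lessons = sorted(hot.get("critical_lessons", []),
                     key=lambda l: l["importance"], reverse=True)
    hot["critical_lessons"] = lessons[:LESSON_CAP]      # drop lowest-importance
    ctx = hot.setdefault("active_context", {})
    ctx["events"] = ctx.get("events", [])[-EVENT_CAP:]  # keep last 10 events
    pending = [t for t in ctx.get("tasks", []) if t.get("status") == "pending"]
    ctx["tasks"] = pending[:TASK_CAP]                   # max 10 pending tasks
    # Hard 5KB limit: progressively drop the weakest lesson until it fits.
    while len(json.dumps(hot).encode()) > SIZE_CAP and hot["critical_lessons"]:
        hot["critical_lessons"].pop()
    return hot

state = {
    "critical_lessons": [{"text": f"lesson {i}", "importance": i / 100} for i in range(25)],
    "active_context": {
        "events": [{"text": str(i)} for i in range(15)],
        "tasks": [{"text": str(i), "status": "pending"} for i in range(12)],
    },
}
pruned = prune_hot(state)  # 25 lessons → 20, 15 events → 10, 12 tasks → 10
```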
#### 🟡 Warm Memory (50KB max, 30-day retention)

**Purpose:** Recent distilled facts with decay scoring.

**Entry format:**
```json
{
  "id": "abc123def456",
  "text": "Decided to use zero go-ethereum deps for EvoClaw to keep binary small",
  "category": "projects/evoclaw/architecture",
  "importance": 0.8,
  "created_at": 1707350400,
  "access_count": 3,
  "score": 0.742,
  "tier": "warm"
}
```
**Scoring:**

```text
score = importance × recency_decay(age) × reinforcement(access_count)
recency_decay(age) = exp(-age_days / 30)
reinforcement(access) = 1 + 0.1 × access_count
```

**Tier classification:**

- `score >= 0.7` → Hot (promote to hot state)
- `score >= 0.3` → Warm (keep)
- `score >= 0.05` → Cold (archive)
- `score < 0.05` → Frozen (delete after retention period)
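In code, the documented formulas and thresholds read roughly like this (a direct transcription of the math above, not the skill's source):

```python
import math

def score(importance: float, age_days: float, access_count: int) -> float:
    """score = importance × recency_decay × reinforcement, per the formulas above."""
    recency_decay = math.exp(-age_days / 30)     # unused memories fade over ~30 days
    reinforcement = 1 + 0.1 * access_count       # accessed memories strengthen
    return importance * recency_decay * reinforcement

def tier(s: float) -> str:
    """Map a score to its tier using the documented cutoffs."""
    if s >= 0.7:
        return "hot"
    if s >= 0.3:
        return "warm"
    if s >= 0.05:
        return "cold"
    return "frozen"

# A fresh, important, frequently-accessed fact gets promoted:
s = score(importance=0.8, age_days=2, access_count=3)
```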
**Eviction triggers:**
- Age > 30 days AND score < 0.3
- Total warm size > 50KB (evicts lowest-scored)
- Manual consolidation
#### 🟢 Cold Memory (Unlimited, Turso)

**Purpose:** Long-term archive, queryable but never bulk-loaded.

**Schema:**
```sql
CREATE TABLE cold_memories (
    id TEXT PRIMARY KEY,
    agent_id TEXT NOT NULL,
    text TEXT NOT NULL,
    category TEXT NOT NULL,
    importance REAL DEFAULT 0.5,
    created_at INTEGER NOT NULL,
    access_count INTEGER DEFAULT 0
);

CREATE TABLE critical_state (
    agent_id TEXT PRIMARY KEY,
    data TEXT NOT NULL, -- {hot_state, tree_nodes, timestamp}
    updated_at INTEGER NOT NULL
);
```
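Because Turso speaks SQLite-compatible SQL, the `cold_memories` schema can be exercised locally with Python's built-in `sqlite3`. This is a local sketch only; the actual skill talks to a remote Turso database:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # stand-in for the remote Turso database
conn.executescript("""
CREATE TABLE cold_memories (
    id TEXT PRIMARY KEY,
    agent_id TEXT NOT NULL,
    text TEXT NOT NULL,
    category TEXT NOT NULL,
    importance REAL DEFAULT 0.5,
    created_at INTEGER NOT NULL,
    access_count INTEGER DEFAULT 0
);
""")
conn.execute(
    "INSERT INTO cold_memories (id, agent_id, text, category, created_at)"
    " VALUES (?, ?, ?, ?, ?)",
    ("abc123", "agent-1", "Use raw JSON-RPC for BSC",
     "projects/evoclaw/bsc", int(time.time())),
)
# Archived facts are queried by category prefix, never bulk-loaded:
rows = conn.execute(
    "SELECT text FROM cold_memories WHERE category LIKE 'projects/evoclaw%'"
).fetchall()
```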
**Retention:** 10 years (configurable)

**Cleanup:** Monthly consolidation removes frozen entries older than the retention period
### Tree Index

**Purpose:** Hierarchical category map for O(log n) retrieval.

**Constraints:**
- Max 50 nodes
- Max depth 4 levels
- Max 2KB serialized
- Max 10 children per node
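A sketch of enforcing those four constraints on a flat `{path: description}` index, where depth is the number of `/`-separated segments in a path. The representation and function name are assumptions for illustration:

```python
import json

MAX_NODES, MAX_DEPTH, MAX_BYTES, MAX_CHILDREN = 50, 4, 2048, 10

def validate_tree(nodes: dict) -> list:
    """Return constraint violations for a {path: description} tree index."""
    problems = []
    if len(nodes) > MAX_NODES:
        problems.append("too many nodes")
    if any(path.count("/") + 1 > MAX_DEPTH for path in nodes):
        problems.append("too deep")
    if len(json.dumps(nodes).encode()) > MAX_BYTES:
        problems.append("too large when serialized")
    for parent in nodes:
        # Direct children share the parent prefix and add exactly one segment.
        children = [p for p in nodes
                    if p.startswith(parent + "/")
                    and p.count("/") == parent.count("/") + 1]
        if len(children) > MAX_CHILDREN:
            problems.append(f"{parent} has too many children")
    return problems

tree = {"owner": "Owner profile", "projects": "Active projects",
        "projects/evoclaw": "EvoClaw framework"}
ok = validate_tree(tree)  # → [] for a well-formed index
```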
**Example:**

```text
Memory Tree Index
==================================================
📂 Root (warm:15, cold:234)
  📁 owner — Owner profile and preferences
     Memories: warm=5, cold=89
  📁 projects — Active projects
     Memories: warm=8, cold=67
    📁 projects/evoclaw — EvoClaw framework
       Memories: warm=6, cold=45
      📁 projects/evoclaw/bsc — BSC integration
         Memories: warm=3, cold=12
  📁 technical — Technical setup and config
     Memories: warm=2, cold=34
  📁 lessons — Learned lessons and rules
     Memories: warm=0, cold=44

Nodes: 7/50
Size: 1842 / 2048 bytes
```
**Operations:**

- `--add PATH DESC` — Add category node
- `--remove PATH` — Remove node (only if no data)
- `--prune` — Remove dead nodes (no activity in 60+ days)
- `--show` — Pretty-print tree
### Distillation Engine

**Purpose:** Three-stage compression of conversations.

**Pipeline:**

```text
Raw conversation (500B)
  ↓ Stage 1→2: Extract structured info
Distilled fact (80B)
  ↓ Stage 2→3: Generate one-line summary
Core summary (20B)
```
#### Stage 1→2: Raw → Distilled

**Input:** Raw conversation text
**Output:** Structured JSON
```json
{
  "fact": "User decided to use raw JSON-RPC for BSC to avoid go-ethereum dependency",
  "emotion": "determined",
  "people": ["User"],
  "topics": ["blockchain", "architecture", "dependencies"],
  "actions": ["decided to use raw JSON-RPC", "avoid go-ethereum"],
  "outcome": "positive"
}
```
**Modes:**

- `rule`: Regex/heuristic extraction (fast, no LLM)
- `llm`: LLM-powered extraction (accurate, requires endpoint)
Usage:
# Rule-based (default)
distiller.py --text "Had a productive chat about the BSC integration..." --mode rule
# LLM-powered
distiller.py --text "..." --mode llm --llm-endpoint http://localhost:8080/complete
# With core summary
distiller.py --text "..." --mode rule --core-summary
#### Stage 2→3: Distilled → Core Summary

**Purpose:** One-line summary for the tree index

**Example:**

```text
Distilled: {
  "fact": "User decided raw JSON-RPC for BSC, no go-ethereum",
  "outcome": "positive"
}

Core summary: "BSC integration: raw JSON-RPC (no deps)"
```

**Target:** <30 bytes
### LLM-Powered Tree Search

**Purpose:** Semantic search through the tree structure using LLM reasoning.

**How it works:**
- Build prompt with tree structure + query
- LLM reasons about which categories are relevant
- Returns category paths with relevance scores
- Fetches memories from those categories
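The steps above can be sketched as a small retrieval loop. Here `llm` is any callable that returns text, and the assumed response format (`path score` per line) plus the prompt wording are illustrative, not the skill's actual protocol:

```python
from typing import Callable

def tree_search(tree: dict, query: str, llm: Callable[[str], str]) -> list:
    """Ask an LLM which tree categories match the query; parse 'path score' lines."""
    prompt = ("You are a memory retrieval system. Given a memory tree index and a "
              "query, identify which categories are relevant.\n\nMemory Tree Index:\n"
              + "\n".join(f"{path} — {desc}" for path, desc in tree.items())
              + f"\n\nQuery: {query}\nAnswer with 'path score' per line.")
    hits = []
    for line in llm(prompt).splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in tree:   # only accept known categories
            try:
                hits.append((parts[0], float(parts[1])))
            except ValueError:
                continue                           # skip malformed score lines
    return sorted(hits, key=lambda h: -h[1])       # highest relevance first

# Stubbed LLM for illustration; a real deployment would call an endpoint.
def fake_llm(prompt: str) -> str:
    return "projects/evoclaw/bsc 0.95\nowner 0.40"

tree = {"owner": "Owner profile", "projects/evoclaw/bsc": "BSC integration"}
results = tree_search(tree, "hackathon deadline?", fake_llm)
```

Restricting parsed paths to keys already in the index keeps a hallucinated category from ever reaching the fetch step.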
**Example:**

Query: "What did we decide about the hackathon deadline?"

Keyword search returns:

- `projects/evoclaw` (0.8)
- `technical/deployment` (0.4)

LLM search reasons:

- `projects/evoclaw/bsc` (0.95) — "BSC integration for hackathon"
- `active_context/events` (0.85) — "Deadline mentioned here"
**LLM prompt template:**

```text
You are a memory retrieval system. Given a memory tree index and a query,
identify which categories are relevant.

Memory Tree Index:
projects/evoclaw — EvoClaw framework (w
```
---
*Content truncated.*