
context-distill
Compress command output into precise summaries for LLMs. Reduce tokens, cut noise, keep signal. CLI + MCP server.
A Go tool that distills verbose command output before it reaches a paid LLM, compressing logs, test results, diffs, and status checks into structured answers to save tokens and reduce noise.
About context-distill
context-distill is a community-built MCP server published by jcastilloa that provides AI assistants with tools and capabilities via the Model Context Protocol. It compresses command output into precise summaries for LLMs, reducing tokens, cutting noise, and keeping signal. The server exposes 2 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install context-distill in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
context-distill is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (2)
distill_batch — Compresses full command output to answer a single, explicit question, reducing verbose CLI output to structured answers.
distill_watch — Compares two consecutive snapshots and returns only the relevant delta between previous and current states.
context-distill
A Go tool that distills command output before it reaches a paid LLM. Available as a Skill (recommended), a standalone CLI, and an MCP server. Inspired by the distill CLI and built with hexagonal architecture, dependency injection, and TDD.
Overview
context-distill exposes two distillation operations accessible in three ways:
| Mode | Best for | How it works |
|---|---|---|
| Skill ⭐ (recommended) | Any agent that can read markdown and run shell commands | The agent reads a SKILL.md file from its own skills directory and learns when and how to invoke the CLI. Zero config on the agent side. |
| CLI | Local scripts, CI pipelines, shell-capable agents | Direct subcommands: `context-distill distill_batch`, `context-distill distill_watch`. |
| MCP | Agents/clients with native MCP support (Claude Desktop, Cursor, Codex…) | Runs as an MCP server over stdio transport. |
| Operation | Purpose |
|---|---|
| `distill_batch` | Compresses full command output to answer a single, explicit question. |
| `distill_watch` | Compares two consecutive snapshots and returns only the relevant delta. |
All three modes share the same underlying use cases, validation rules, and output behavior — only the invocation method differs.
It also provides:
- LLM provider configuration via YAML and environment variables.
- An interactive terminal UI for first-time setup (`--config-ui`).
- Support for Ollama and any OpenAI-compatible provider.
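As an illustration only, a provider configuration for an OpenAI-compatible endpoint might look like the sketch below. The key names are assumptions, not the tool's documented schema; run `context-distill --config-ui` to generate a valid configuration interactively instead of writing it by hand.

```yaml
# Hypothetical config sketch — key names are assumptions, not documented.
provider:
  type: openai-compatible   # or: ollama
  base_url: http://localhost:11434/v1
  model: llama3.1
  # Supply API keys via environment variables rather than committing them to YAML.
```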
Why Skill Mode Is Recommended
| | Skill | CLI | MCP |
|---|---|---|---|
| Agent config required | None — just drop SKILL.md in the agent's skills directory | Agent must know how to run shell commands | Register server in client config |
| Works across agents | ✅ Any agent that reads markdown | ✅ Any agent that runs shell | ⚠️ Only MCP-compatible clients |
| Setup complexity | Copy one file per agent | Install binary | Install binary + register transport |
| Portability | Works in any repo | Works in any shell | Tied to MCP client config |
Skill mode works because modern coding agents (Codex, Claude Code, Cursor, Aider, OpenCode…) already know how to read project documentation and execute shell commands. A SKILL.md file teaches the agent when to distill and how to call the CLI — no protocol integration needed.
Features
- Triple interface — Skill file for zero-config agent adoption + CLI for direct shell use + MCP tools for protocol-native clients.
- Hexagonal architecture — `distill/domain`, `distill/application`, `platform/*`.
- Dependency injection via `sarulabs/di`.
- Config management with `viper` + `.env`.
- Provider-specific validation at config time.
- Interactive setup UI (`--config-ui`).
- Unit, integration, and optional live tests.
Requirements
- Go 1.26+
- Make (recommended)
If you prefer not to compile, you can install a prebuilt binary from GitHub Releases (see below).
Installation
Option A: Build from source
```bash
make build
```
The binary is placed at ./bin/context-distill.
To install it into your PATH:
```bash
make install
# installs to ~/.local/bin/context-distill
```
Option B: Prebuilt binary (no build required)
Linux / macOS:
```bash
# Latest release
curl -fsSL https://raw.githubusercontent.com/jcastilloa/context-distill/master/scripts/install.sh | sh

# Specific version
curl -fsSL https://raw.githubusercontent.com/jcastilloa/context-distill/master/scripts/install.sh | VERSION=vX.Y.Z sh
```
Windows (PowerShell):
```powershell
# Latest release
iwr https://raw.githubusercontent.com/jcastilloa/context-distill/master/scripts/install.ps1 -UseBasicParsing | iex

# Specific version
$env:VERSION='vX.Y.Z'; iwr https://raw.githubusercontent.com/jcastilloa/context-distill/master/scripts/install.ps1 -UseBasicParsing | iex
```
Installer environment variables:
| Variable | Default |
|---|---|
| `REPO` | `jcastilloa/context-distill` |
| `SERVICE_NAME` | `context-distill` |
| `INSTALL_DIR` | `~/.local/bin` (Linux/macOS) · `%LOCALAPPDATA%\context-distill\bin` (Windows) |
| `VERSION` | Latest release tag |
Makefile Targets
```bash
make help
```
| Target | Description |
|---|---|
| `make build` | Build the binary to `./bin/context-distill` |
| `make install` | Install binary to `~/.local/bin/context-distill` |
| `make clean` | Remove `./bin` |
Quick Start
1. Configure the provider (all modes)
```bash
context-distill --config-ui
```
2. Verify it works
```bash
echo "PASS: TestA, PASS: TestB, FAIL: TestC - expected 4 got 5" | context-distill distill_batch --question "Did tests pass? Return only PASS or FAIL. If FAIL, list failing test names."

context-distill distill_watch --question "What changed? Return one short sentence." --previous-cycle "services: api=OK, db=OK, cache=OK" --current-cycle "services: api=OK, db=FAIL, cache=OK"
```
If both return a distilled answer, you are ready.
3. Choose your mode
⭐ Skill mode (recommended)
Copy SKILL.md into the appropriate agent skills directory (see Skill Setup). Your agent will read it automatically and start distilling.
CLI mode
Use the subcommands directly in scripts or agent shell calls:
```bash
# Pipe (preferred)
echo "data" | context-distill distill_batch --question "..."

# Explicit flag
context-distill distill_batch --question "..." --input "data"

# Explicit stdin marker
echo "data" | context-distill distill_batch --question "..." --input -
```
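In CI, the same pipe pattern can keep logs that later feed an LLM small. A hypothetical GitHub Actions step is sketched below, assuming the binary is already installed on the runner; adjust the question to your pipeline.

```yaml
# Hypothetical CI step — not from the project's own workflows.
- name: Test and distill
  shell: bash
  run: |
    set -o pipefail  # without this, the pipe would hide a failing exit status
    go test ./... 2>&1 | context-distill distill_batch \
      --question "Did all tests pass? PASS or FAIL. If FAIL, list failing test names, one per line."
```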
MCP mode
```bash
context-distill --transport stdio
```
Then register the server in your MCP client (see MCP Client Registration).
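For clients that use the common `mcpServers` JSON convention (Claude Desktop and similar), registration might look like the snippet below; the exact file location and schema depend on your client, so treat this as a sketch rather than the project's documented setup.

```json
{
  "mcpServers": {
    "context-distill": {
      "command": "context-distill",
      "args": ["--transport", "stdio"]
    }
  }
}
```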
Skill Setup (Recommended)
What is a Skill?
A skill is a SKILL.md file placed inside an agent's skills directory. The agent discovers it, reads the instructions, and learns when to activate and what commands to run. No protocol config, no server registration.
Where to install the Skill
Each agent has its own skills directory. Install the SKILL.md at project level, global level, or both:
| Agent | Project-level path | Global path |
|---|---|---|
| Claude Code | .claude/skills/context-distill/SKILL.md | ~/.claude/skills/context-distill/SKILL.md |
| Codex | .codex/skills/context-distill/SKILL.md | ~/.codex/skills/context-distill/SKILL.md |
| OpenCode | .opencode/skills/context-distill/SKILL.md | ~/.opencode/skills/context-distill/SKILL.md |
| Cursor | .cursor/skills/context-distill/SKILL.md | ~/.cursor/skills/context-distill/SKILL.md |
Project-level (recommended for teams): every agent working on the repo picks it up automatically.
Global (recommended for personal use): available in every project without per-repo setup.
Quick install example (Claude Code, project-level)
```bash
mkdir -p .claude/skills/context-distill
cp SKILL.md .claude/skills/context-distill/SKILL.md
```
Quick install example (all agents, global)
```bash
for agent in .claude .codex .opencode .cursor; do
  mkdir -p ~/"$agent"/skills/context-distill
  cp SKILL.md ~/"$agent"/skills/context-distill/SKILL.md
done
```
SKILL.md
---
name: context-distill
description: >
Distills verbose command output (logs, test results, diffs, status checks) into
short, structured answers before sending them to an LLM. Saves tokens, reduces
noise, and keeps context windows clean.
Use before sending command output longer than 5–8 lines, after tests/builds/linters/git
commands/docker logs, when comparing watch-mode snapshots, or whenever unsure whether
to distill.
---
Distill verbose CLI output before passing to LLM. Keep signal. Drop noise.
## Activation
Use BEFORE sending any command output longer than 5–8 lines to LLM.
Use AFTER:
- tests
- builds
- linters
- git commands
- docker logs
- any verbose CLI tool
Use when:
- comparing two snapshots of same source in watch mode
- unsure whether to distill
Default rule: **always distill**. Unnecessary distill cost ≈ 0. Flooding context expensive.
## Skip
Do not use when:
- output is ≤ 5–8 lines and already human-readable
- exact raw bytes required (audit / compliance / binary integrity)
- interactive terminal debugging needs character-by-character flow
## Commands
### Distill full output
```bash
# Pipe — preferred
<command> | context-distill distill_batch --question "<question with output contract>"
# Explicit flag
context-distill distill_batch --question "<question with output contract>" --input "<raw output>"
# Explicit stdin marker
<command> | context-distill distill_batch --question "<question with output contract>" --input -
```
### Distill delta between two snapshots
```bash
context-distill distill_watch \
--question "<question with output contract>" \
--previous-cycle "<snapshot T-1>" \
--current-cycle "<snapshot T>"
```
## Rules
1. **Every call MUST include an output contract in `--question`.**
Say exact return format:
- `PASS or FAIL`
- `valid JSON {severity, file, message}`
- `filenames, one per line`
2. **One task per call.**
Do not mix unrelated questions.
3. **Prefer machine-checkable formats.**
Use:
- PASS/FAIL
- JSON
- one-item-per-line
## Examples
| Source command | Question |
|---|---|
| `go test ./...` | `"Did all tests pass? PASS or FAIL. If FAIL, list failing test names, one per line."` |
| `git diff` | `"List only changed file paths, one per line."` |
| CI / build logs | `"Return JSON array: [{severity, file, message}]."` |
| `docker logs` | `"Summarise errors only. One bullet per distinct error."` |
| `find` / `ls -lR` | `"Return only *.go paths, one per line."` |
---
*README truncated. [View full README on GitHub](https://github.com/jcastilloa/context-distill).*