
May 7, 2026

  ██╗     ███████╗ █████╗ ███╗   ██╗     ██████╗████████╗██╗  ██╗
  ██║     ██╔════╝██╔══██╗████╗  ██║    ██╔════╝╚══██╔══╝╚██╗██╔╝
  ██║     █████╗  ███████║██╔██╗ ██║    ██║        ██║    ╚███╔╝ 
  ██║     ██╔══╝  ██╔══██║██║╚██╗██║    ██║        ██║    ██╔██╗ 
  ███████╗███████╗██║  ██║██║ ╚████║    ╚██████╗   ██║   ██╔╝ ██╗
  ╚══════╝╚══════╝╚═╝  ╚═╝╚═╝  ╚═══╝     ╚═════╝   ╚═╝   ╚═╝  ╚═╝
             Context Runtime for AI Agents

The context layer for AI coding agents

Reduce token waste in Cursor, Claude Code, Copilot, Windsurf, Codex, Gemini & more by 60–95% (up to 99% on cached reads)
Shell Hook + MCP Server · 56 tools · 10 read modes · 95+ patterns · Single Rust binary


Website · Docs · Install · Demo · Benchmarks · Cookbook · Security · Changelog · Discord


lean-ctx is a local-first context runtime that compresses file reads + shell output before they reach the LLM. Cached re-reads drop to ~13 tokens.

See it in action:

  • Read + Shell: map-mode reads + compressed CLI output
  • Gain (live): tokens + USD savings in real time
  • Benchmark proof: measure compression by language + mode

All GIFs are generated from reproducible VHS tapes in demo/.

What it does

  • File reads (MCP): cached + mode-aware reads (full, map, signatures, diff, …) with graph-aware related files hints
  • Shell output (hook): compresses noisy CLI output via 95+ patterns (git, npm, cargo, docker, …)
  • Graph-Powered Intelligence: multi-edge Property Graph (imports, calls, exports, type_ref) with weighted impact analysis, hybrid search (BM25 + embeddings + graph proximity via RRF), and incremental git-diff updates
  • PR Context Packs: lean-ctx pack --pr builds a PR-ready context pack (changed files, related tests, impact, artifacts)
  • Context Packages: lean-ctx pack create bundles Knowledge + Graph + Session + Gotchas into portable .lctxpkg files — share context across projects/teams with SHA-256 integrity, auto-load on session start, and smart merge (dedup facts, overlay graph)
  • Session memory (CCP): persist task/facts/decisions across chats with structured recovery queries surviving compaction
  • HTTP mode: lean-ctx serve for Streamable HTTP MCP + /v1/tools/call (used by the Cookbook + SDK)
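The hybrid-search bullet above names Reciprocal Rank Fusion (RRF). As a toy illustration of how RRF combines two rank lists (this is not lean-ctx's code; the file names and ranks are made up, and k = 60 is just the common default):

```shell
# Toy RRF: score(f) = sum over rank lists of 1 / (k + rank), k = 60.
# Each file holds "name rank" pairs from a hypothetical ranker.
printf 'a 1\nb 2\nc 3\n' > /tmp/bm25.rank    # pretend BM25 ranking
printf 'b 1\nc 2\na 3\n' > /tmp/embed.rank   # pretend embedding ranking
awk '{ score[$1] += 1 / (60 + $2) }
     END { for (f in score) printf "%s %.6f\n", f, score[f] }' \
    /tmp/bm25.rank /tmp/embed.rank | sort -k2 -rn
# b wins the fusion: it sits near the top of both lists.
```

Because RRF only looks at ranks, it needs no score normalization across BM25, embeddings, and graph proximity, which is why it is a popular fusion step.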

How it works (30 seconds)

AI tool  →  (MCP tools + shell commands)  →  lean-ctx  →  your repo + CLI
  • MCP server: exposes ctx_* tools (read modes, caching, deltas, search, memory, multi-agent)
  • Shell hook: transparently compresses common commands so the LLM sees less noise
  • Property Graph: multi-edge code graph powers impact analysis, related file discovery, and search ranking
  • CCP: persists session state with structured recovery queries so long-running work doesn’t “cold start” every chat

Get started (60 seconds)

# 1) Install (pick one)
curl -fsSL https://leanctx.com/install.sh | sh      # universal (no Rust needed)
brew tap yvgude/lean-ctx && brew install lean-ctx    # macOS / Linux
npm install -g lean-ctx-bin                          # Node.js
cargo install lean-ctx                               # Rust
pi install npm:pi-lean-ctx                           # Pi Coding Agent

# 2) Setup (shell + auto-detected AI tools)
lean-ctx setup

# 3) Verify
lean-ctx doctor

# 4) See the payoff
lean-ctx gain --live
lean-ctx wrapped --week

After setup, restart your shell and your editor/AI tool once so the MCP + hooks are active.

Troubleshooting / Safety
  • Disable immediately (current shell): lean-ctx-off
  • Run a single command uncompressed: lean-ctx -c --raw "git status"
  • Update: lean-ctx update
  • Diagnose (shareable): lean-ctx doctor --json

Supported IDEs & AI tools

lean-ctx is a standard MCP server, so it works with any MCP-compatible client. Three integration modes are auto-selected per agent:

| Mode | How it works | Best for |
| --- | --- | --- |
| CLI-Redirect | Agent calls lean-ctx directly via shell; zero MCP schema overhead | Agents with shell access |
| Hybrid | MCP for cached reads (13 tokens), CLI for shell + search | Mixed environments |
| Full MCP | All 56 tools via MCP protocol | Protocol-only agents |

Agent compatibility matrix

| Agent | Setup |
| --- | --- |
| Cursor | `lean-ctx init --agent cursor` |
| Codex CLI | `lean-ctx init --agent codex` |
| Gemini CLI | `lean-ctx init --agent gemini` |
| Claude Code | `lean-ctx init --agent claude` |
| CRUSH | `lean-ctx init --agent crush` |
| Hermes | `lean-ctx init --agent hermes` |
| OpenCode | `lean-ctx init --agent opencode` |
| Pi | `lean-ctx init --agent pi` |
| Qoder | `lean-ctx init --agent qoder` |
| Windsurf | `lean-ctx init --agent windsurf` |
| GitHub Copilot | `lean-ctx init --agent copilot` |
| Amp | `lean-ctx init --agent amp` |
| Cline | `lean-ctx init --agent cline` |
| Roo Code | `lean-ctx init --agent roo` |
| Kiro | `lean-ctx init --agent kiro` |
| Antigravity | `lean-ctx init --agent antigravity` |
| Amazon Q | `lean-ctx init --agent amazonq` |
| Qwen | `lean-ctx init --agent qwen` |
| Trae | `lean-ctx init --agent trae` |
| Verdent | `lean-ctx init --agent verdent` |
| JetBrains IDEs | `lean-ctx init --agent jetbrains` |
| QoderWork | `lean-ctx init --agent qoderwork` |
| VS Code | `lean-ctx init --agent vscode` |
| Zed | `lean-ctx init --agent zed` |
| Neovim | `lean-ctx init --agent neovim` |
| Emacs | `lean-ctx init --agent emacs` |
| Sublime Text | `lean-ctx init --agent sublime` |

Any MCP-compatible client works out of the box — the table above shows agents with first-class auto-setup.
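For a client without first-class setup, a manual registration usually looks like the snippet below. The `mcpServers` key follows the convention used by Claude Desktop and Cursor, but the exact subcommand lean-ctx expects in `args` is an assumption here; prefer `lean-ctx setup` or `lean-ctx init --agent <name>`, which write the correct config for you.

```json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "lean-ctx",
      "args": ["mcp"]
    }
  }
}
```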

When to use (and when not to)

Great fit if you…

  • use AI coding tools daily and your sessions are shell-heavy (git/tests/builds)
  • work in medium/large repos (50+ files / monorepos)
  • want a local-first layer with no telemetry by default

Skip it if you…

  • mostly work in tiny repos and rarely call the shell from your AI tool
  • always need raw/unfiltered logs (you can still use --raw, but ROI is lower)

Demo

Try these in any repo:

lean-ctx read rust/src/server/mod.rs -m map
lean-ctx -c "git log -n 5 --oneline"
lean-ctx gain --live
lean-ctx benchmark report .
  • The repo ships the exact tapes used to render the GIFs in demo/
  • Regenerate locally:
vhs demo/leanctx.tape
vhs demo/gain.tape
vhs demo/benchmark.tape

Benchmarks

lean-ctx benchmark report .


Privacy & security

  • No telemetry by default
  • Optional anonymous stats sharing (opt-in during setup)
  • Update check can be disabled (config update_check_disabled = true or LEAN_CTX_NO_UPDATE_CHECK=1)
  • Runs locally; your code never leaves your machine unless you explicitly enable cloud sync

See SECURITY.md.

Uninstall

lean-ctx-off       # disable immediately (current shell session)
lean-ctx uninstall # remove hooks + editor configs + data dir

# Remove the binary (pick your install method)
brew uninstall lean-ctx
npm uninstall -g lean-ctx-bin
cargo uninstall lean-ctx
pi uninstall npm:pi-lean-ctx                        # Pi Coding Agent

Contributing

Start with CONTRIBUTING.md. Easy first PR: propose a new CLI compression pattern via the issue template.

License

Apache License 2.0 — see LICENSE.

Portions of this software were originally released under the MIT License. See LICENSE-MIT and NOTICE.