
May 10, 2026

# caveman

why use many token when few do trick


Before/After · Install · What You Get · Benchmarks · Full install guide


A Claude Code skill/plugin (also Codex, Gemini, Cursor, Windsurf, Cline, Copilot, 30+ more) that makes agent talk like caveman — cuts ~75% of output tokens, keeps full technical accuracy. Brain still big. Mouth small.

## Before / After

🗣️ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🪨 Caveman Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."
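The fix above hinges on reference identity. A minimal sketch of why an inline object prop trips React's shallow comparison, in plain JavaScript with illustrative names (no React required):

```javascript
// Each render builds the prop object from scratch, so the reference changes
// even though the contents are identical.
function renderProps() {
  return { style: { color: "red" } };
}

const first = renderProps().style;
const second = renderProps().style;

console.log(first === second); // false: new reference each render
console.log(JSON.stringify(first) === JSON.stringify(second)); // true: same contents
```

useMemo sidesteps this by returning the same cached reference until its dependencies change.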

🗣️ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🪨 Caveman Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"
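One reading of that diagnosis, sketched with hypothetical names (the repo's actual middleware is not shown here); `exp` and `now` are Unix-second timestamps:

```javascript
// Buggy: `exp < now` lets a token whose expiry equals the current second pass.
const isExpiredBuggy = (exp, now) => exp < now;

// Fixed: `exp <= now` rejects the token the instant it expires.
const isExpiredFixed = (exp, now) => exp <= now;

console.log(isExpiredBuggy(1700000000, 1700000000)); // false: boundary token slips through
console.log(isExpiredFixed(1700000000, 1700000000)); // true: boundary token rejected
```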

Same fix. 75% less word. Brain still big.

```
┌─────────────────────────────────────┐
│  TOKENS SAVED          ████████ 75% │
│  TECHNICAL ACCURACY    ████████ 100%│
│  SPEED INCREASE        ████████ ~3x │
│  VIBES                 ████████ OOG │
└─────────────────────────────────────┘
```

Pick your level of grunt — lite (drop filler), full (default caveman), ultra (telegraphic), or wenyan (classical Chinese, even shorter). One command switch. Cost go down forever.

## Install

One line. Find every agent. Install for each.

```bash
# macOS / Linux / WSL / Git Bash
curl -fsSL https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.sh | bash
```

```powershell
# Windows (PowerShell 5.1+)
irm https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.ps1 | iex
```

~30 seconds. Needs Node ≥18. Skip agent you no have. Safe to re-run.

Trigger: type /caveman or say "talk like caveman". Stop with "normal mode".

One agent only, manual command, or any of 30+ other agents → INSTALL.md. Install break? Open agent, say "Read CLAUDE.md and INSTALL.md, install caveman for me." Agent fix own brain.

## What You Get

| Skill | What |
| --- | --- |
| `/caveman [lite\|full\|ultra\|wenyan]` | Compress every reply. Levels stick until session end. |
| `/caveman-commit` | Conventional Commit messages, ≤50 char subject. Why over what. |
| `/caveman-review` | One-line PR comments: `L42: 🔴 bug: user null. Add guard.` |
| `/caveman-stats` | Real session token usage + lifetime savings + USD. Tweetable line via `--share`. |
| `/caveman-compress <file>` | Rewrite memory file (e.g. `CLAUDE.md`) into caveman-speak. Cuts ~46% input tokens every session. Code/URLs/paths byte-preserved. |
| `caveman-shrink` | MCP middleware. Wraps any MCP server, compresses tool descriptions. npm. |
| `cavecrew-*` | Caveman subagents (investigator/builder/reviewer). ~60% fewer tokens than vanilla, main context lasts longer. |

Statusline badge — Claude Code shows [CAVEMAN] ⛏ 12.4k (lifetime tokens saved). Updates every /caveman-stats run. Set CAVEMAN_STATUSLINE_SAVINGS=0 to silence.

Auto-activate every session: Claude Code, Codex, Gemini (built-in). Cursor / Windsurf / Cline / Copilot get always-on rule files via --with-init. Other agents trigger with /caveman per session. Full feature matrix in INSTALL.md.

## Benchmarks

Real token counts from the Claude API. Average 65% output reduction across 10 prompts (range 22-87%).

| Task | Normal | Caveman | Saved |
| --- | ---: | ---: | ---: |
| Explain React re-render bug | 1180 | 159 | 87% |
| Fix auth middleware token expiry | 704 | 121 | 83% |
| Set up PostgreSQL connection pool | 2347 | 380 | 84% |
| Explain git rebase vs merge | 702 | 292 | 58% |
| Refactor callback to async/await | 387 | 301 | 22% |
| Architecture: microservices vs monolith | 446 | 310 | 30% |
| Review PR for security issues | 678 | 398 | 41% |
| Docker multi-stage build | 1042 | 290 | 72% |
| Debug PostgreSQL race condition | 1200 | 232 | 81% |
| Implement React error boundary | 3454 | 456 | 87% |
| **Average** | **1214** | **294** | **65%** |
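The Saved column is the relative reduction, rounded to a whole percent. A one-liner reproduces it:

```javascript
// Percentage of output tokens saved, rounded to whole percent.
const savedPct = (normal, caveman) => Math.round(100 * (normal - caveman) / normal);

console.log(savedPct(1180, 159)); // 87 (React re-render row)
console.log(savedPct(704, 121));  // 83 (auth middleware row)
```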

Raw data and reproduction script: benchmarks/. Three-arm eval harness (baseline / terse / skill) lives in evals/ — caveman is compared against "Answer concisely.", not against the verbose default, so the delta is honest.

caveman-compress receipts (real memory files):

| File | Original | Compressed | Saved |
| --- | ---: | ---: | ---: |
| claude-md-preferences.md | 706 | 285 | 59.6% |
| project-notes.md | 1145 | 535 | 53.3% |
| claude-md-project.md | 1122 | 636 | 43.3% |
| todo-list.md | 627 | 388 | 38.1% |
| mixed-with-code.md | 888 | 560 | 36.9% |
| **Average** | **898** | **481** | **46%** |

## Important

Caveman only affects output tokens — thinking/reasoning tokens untouched. Caveman no make brain smaller. Caveman make mouth smaller. Biggest win is readability and speed, cost savings a bonus.

A March 2026 paper "Brevity Constraints Reverse Performance Hierarchies in Language Models" found that constraining large models to brief responses improved accuracy by 26 points on certain benchmarks. Verbose not always better. Sometimes less word = more correct.

## How It Work

  1. Install drop skill file in agent.
  2. Skill tell agent: drop filler, keep substance, use fragments.
  3. For Claude Code, hook also write tiny flag file each session — agent see flag, talk caveman from message one. No need say /caveman.
  4. Stats command read Claude Code session log, count tokens saved, write number to statusline.
  5. Caveman-compress sub-skill rewrite memory files (CLAUDE.md, project notes) so each session start with smaller context. Save tokens forever, not just one reply.

Maintainer detail (hook architecture, file ownership, CI sync) live in CLAUDE.md.

## Lobster, Meet Rock 🦞🪨

OpenClaw the self-host gateway. One box, many agent inside (Claude Code, Codex, Pi, OpenCode), wired to your Slack / Discord / iMessage / Telegram / whatever. Tagline: "The lobster way." Lobster strong. Lobster smart. Lobster also talk a lot.

Caveman teach lobster brevity — same canonical installer, scoped to one agent:

```bash
# macOS / Linux / WSL
curl -fsSL https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.sh | bash -s -- --only openclaw
```

```powershell
# Windows (PowerShell): no Node? install Node ≥18 first, then
npx -y github:JuliusBrussee/caveman -- --only openclaw
```

Two thing happen, no more:

  1. Skill drop at ~/.openclaw/workspace/skills/caveman/SKILL.md — spec-correct frontmatter (version, always: true), discoverable by openclaw skills list. Skill not auto-inject (OpenClaw load skill on demand) — that why we also do step 2.
  2. SOUL.md nudge. Tiny marker-fenced block appended to ~/.openclaw/workspace/SOUL.md. OpenClaw inject SOUL.md into every turn under "Project Context" (12K-per-file, 60K total — block well under). Lobster terse from message one. No /caveman per session. No nag.
```
~/.openclaw/workspace/
├── skills/caveman/SKILL.md   ← full ruleset, on-demand load
└── SOUL.md                    ← <!-- caveman-begin --> ... <!-- caveman-end -->
                                  ↑ auto-inject every turn
```

Custom workspace path? OPENCLAW_WORKSPACE=/your/path before the command. Uninstall: same one-liner with --uninstall — skill folder gone, SOUL.md block ripped out cleanly, your other workspace content stay untouched. Idempotent re-runs (frontmatter not double-prepended, marker block not duplicated).
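The idempotent SOUL.md edit can be pictured as a marker-fenced upsert. This is a sketch of the idea, not the installer's actual code:

```javascript
const BEGIN = "<!-- caveman-begin -->";
const END = "<!-- caveman-end -->";

// Replace an existing marker block in place, or append one. Never duplicate.
function upsertBlock(soul, body) {
  const block = `${BEGIN}\n${body}\n${END}`;
  const re = /<!-- caveman-begin -->[\s\S]*?<!-- caveman-end -->/;
  return re.test(soul) ? soul.replace(re, block) : `${soul.trimEnd()}\n\n${block}\n`;
}

// Uninstall is the same idea with an empty replacement.
function removeBlock(soul) {
  return soul.replace(/\n*<!-- caveman-begin -->[\s\S]*?<!-- caveman-end -->\n*/, "\n");
}
```

Running the upsert twice yields the same file, which is why re-runs are safe.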

Lobster claw still sharp. Lobster mouth now small. Brain still big.

## Caveman Ecosystem

Three tools. One philosophy: agent do more with less.

| Repo | What |
| --- | --- |
| caveman (you here) | Output compression — why use many token when few do trick |
| cavemem | Cross-agent memory — why agent forget when agent can remember |
| cavekit | Spec-driven build loop — why agent guess when agent can know |

Compose: cavekit drive build, caveman compress what agent say, cavemem compress what agent remember. One rock. Two rock. Three rock. That it.

  • INSTALL.md — full install matrix, all flags, per-agent detail
  • CONTRIBUTING.md — how to send patch
  • CLAUDE.md — maintainer guide (file ownership, hook architecture, CI)
  • docs/ — extra guides (Windows install, etc.)
  • Issues — bug, feature, weird behavior

## Star This Repo

Caveman save you token, save you money. Star cost zero. Fair trade. ⭐


## Also by Julius Brussee

  • Revu — local-first macOS study app with FSRS spaced repetition. revu.cards

## License

MIT — free like mammoth on open plain.