Agentic Context Engine (ACE)

April 20, 2026

Kayba - Make your agents self-improve from experience



> [!TIP]
> Try our hosted solution for free at kayba.ai: automated agent self-improvement from your terminal. CLI + dashboard that analyzes traces, surfaces failures, and ships improvements directly from Claude Code, Codex, and more.



AI agents don't learn from experience. They repeat the same mistakes every session, forget what worked, and ignore what failed. ACE adds a persistent learning loop that makes them better over time.

ACE learns from mistakes in real time

The agent claims a seahorse emoji exists. ACE reflects on the error, and on the next attempt, the agent responds correctly — without human intervention.


Proven Results

| Metric | Result | Context |
| --- | --- | --- |
| 2x consistency | Doubles pass^4 on the Tau2 airline benchmark | 15 learned strategies, no reward signals |
| 49% token reduction | Browser automation costs cut nearly in half | 10-run learning curve |
| $1.50 learning cost | Claude Code translated 14k lines to TypeScript | Zero build errors, all tests passing |

Quick Start

```shell
uv add ace-framework
```

Option A — Interactive setup (recommended):

```shell
ace setup            # Walks you through model selection, API keys, and connection validation
```

Option B — Manual configuration:

```shell
export OPENAI_API_KEY="your-key"    # or ANTHROPIC_API_KEY, or any of 100+ supported providers
```

Then use it:

```python
from ace import ACELiteLLM

agent = ACELiteLLM(model="gpt-4o-mini")

# First attempt: the agent may hallucinate
answer = agent.ask("Is there a seahorse emoji?")

# Feed a correction: ACE extracts a strategy and updates the Skillbook
agent.learn_from_feedback("There is no seahorse emoji in Unicode.")

# Subsequent calls benefit from the learned strategy
answer = agent.ask("Is there a seahorse emoji?")

# Inspect what the agent has learned
print(agent.get_strategies())
```

No fine-tuning, no training data, no vector database.
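The core idea behind the loop above is simple: corrections become strategy text that is carried into future prompts. Here is a library-free sketch of that concept; the `ToySkillbook` class and its methods are illustrative only, not ACE's actual internals (ACE distills strategies with an LLM-backed Reflector rather than storing raw feedback):

```python
# Minimal sketch of a feedback-to-strategy loop. Illustrative only --
# ACE's real Reflector extracts generalizable strategies instead of
# storing feedback verbatim.

class ToySkillbook:
    def __init__(self):
        self.strategies: list[str] = []

    def add(self, strategy: str) -> None:
        if strategy not in self.strategies:   # avoid duplicate entries
            self.strategies.append(strategy)

    def as_prompt_prefix(self) -> str:
        if not self.strategies:
            return ""
        bullets = "\n".join(f"- {s}" for s in self.strategies)
        return f"Known strategies:\n{bullets}\n\n"

book = ToySkillbook()
book.add("There is no seahorse emoji in Unicode.")
# Future prompts are enriched with learned strategies
prompt = book.as_prompt_prefix() + "Is there a seahorse emoji?"
```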

→ Quick Start Guide | → Setup Guide | → Hosted API: Where Do Traces Come From?


How It Works

ACE maintains a Skillbook — a persistent collection of strategies that evolves with every task. Three specialized roles manage the learning loop:

| Role | Responsibility |
| --- | --- |
| Agent | Executes tasks, enhanced with Skillbook strategies |
| Reflector | Analyzes execution traces to extract what worked and what failed |
| SkillManager | Curates the Skillbook: adds, refines, and removes strategies |

The Recursive Reflector is the key innovation: instead of summarizing traces in a single pass, it writes and executes Python code in a sandboxed environment to programmatically search for patterns, isolate errors, and iterate until it finds actionable insights.
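The kind of code the Reflector writes can be pictured with a small example: scan a recorded trace for failed steps and group them by error type to surface a recurring pattern. This is an illustrative sketch only; the trace schema below is invented for the example, not ACE's actual trace format:

```python
# Illustrative trace analysis of the kind a Recursive Reflector might
# generate and run in its sandbox. The trace schema here is hypothetical.

from collections import Counter

trace = [
    {"step": "search_flights", "ok": True},
    {"step": "book_seat", "ok": False, "error": "policy_violation"},
    {"step": "book_seat", "ok": False, "error": "policy_violation"},
    {"step": "refund", "ok": False, "error": "missing_confirmation"},
]

# Count failures by error type to find the dominant failure mode
errors = Counter(e["error"] for e in trace if not e["ok"])
most_common_error, count = errors.most_common(1)[0]
# A repeated error is a candidate for a new Skillbook strategy
```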

```mermaid
flowchart LR
    Skillbook[(Skillbook)]
    Start([Task]) --> Agent[Agent]
    Agent <--> Environment[Environment]
    Environment -- Trace --> Reflector[Reflector]
    Reflector --> SkillManager[SkillManager]
    SkillManager -- Updates --> Skillbook
    Skillbook -. Strategies .-> Agent
```

All roles are backed by PydanticAI agents with structured output validation. PydanticAI routes to 100+ LLM providers through its LiteLLM integration, with native support for OpenAI, Anthropic, Google, Bedrock, Groq, and more.
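Structured output validation means model responses are parsed into a typed schema and rejected when malformed, instead of being trusted as free text. A stdlib-only sketch of the idea (PydanticAI does this with Pydantic models; the `ReflectionResult` schema below is hypothetical):

```python
# Sketch of structured output validation using only the standard library.
# The ReflectionResult schema is invented for illustration.

from dataclasses import dataclass

@dataclass
class ReflectionResult:
    what_worked: list[str]
    what_failed: list[str]

def parse_reflection(raw: dict) -> ReflectionResult:
    # Reject malformed model output instead of silently propagating it
    for key in ("what_worked", "what_failed"):
        if not isinstance(raw.get(key), list):
            raise ValueError(f"missing or invalid field: {key}")
    return ReflectionResult(raw["what_worked"], raw["what_failed"])

result = parse_reflection({"what_worked": ["cited Unicode"], "what_failed": []})
```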

Based on the ACE paper (Stanford & SambaNova) and Dynamic Cheatsheet.


Runners

| Runner | Class | Description |
| --- | --- | --- |
| LiteLLM | `ACELiteLLM` | Batteries-included agent with `.ask()`, `.learn()`, `.save()`; accepts any LiteLLM model string |
| Core | `ACE` | Full learning loop with batch epochs and evaluation |
| Trace Analyser | `TraceAnalyser` | Learn from pre-recorded traces without re-running tasks |
| browser-use | `BrowserUse` | Browser automation that improves with each run |
| LangChain | `LangChain` | Wrap any LangChain chain or agent with learning |
| Claude Code | `ClaudeCode` | Claude Code CLI tasks with learning |
Install optional extras as needed:

```shell
uv add 'ace-framework[browser-use]'    # Browser automation
uv add 'ace-framework[langchain]'      # LangChain
uv add 'ace-framework[logfire]'        # Observability (auto-instruments PydanticAI)
uv add 'ace-framework[mcp]'            # MCP server for IDE integration
uv add 'ace-framework[deduplication]'  # Embedding-based skill deduplication
```

Have existing agent logs? Extract strategies from them directly:

```python
from ace import ACELiteLLM

agent = ACELiteLLM(model="gpt-4o-mini")
agent.learn_from_traces(your_existing_traces)
print(agent.get_strategies())
```

→ Examples


Benchmarks

Tau2 — Multi-Step Agentic Tasks

tau2-bench by Sierra Research: airline domain tasks requiring tool use and policy adherence. Claude Haiku 4.5 agent, strategies learned on the train split with no reward signals, evaluated on the held-out test split.

Tau2 Benchmark — ACE doubles consistency at pass^4

pass^k = probability all k independent attempts succeed. ACE doubles consistency at pass^4 with 15 learned strategies.
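Under an independence assumption, pass^k is just the per-attempt success rate raised to the k-th power, so small gains in per-attempt reliability compound sharply. A quick sketch with illustrative rates (not the benchmark's actual numbers):

```python
# pass^k under independent attempts: p ** k. The rates below are
# illustrative, not measured values from the Tau2 benchmark.

def pass_k(p: float, k: int) -> float:
    """Probability that all k independent attempts succeed."""
    return p ** k

baseline = pass_k(0.70, 4)   # ~0.24
improved = pass_k(0.84, 4)   # ~0.50: roughly double at pass^4
```

Note how a 14-point gain in per-attempt success roughly doubles four-attempt consistency.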

Claude Code — Autonomous Translation

ACE + Claude Code translated this library from Python to TypeScript with zero supervision:

| Metric | Result |
| --- | --- |
| Duration | ~4 hours |
| Commits | 119 |
| Lines written | ~14,000 |
| Build errors | 0 |
| Tests | All passing |
| Learning cost | ~$1.50 |

Pipeline Architecture

ACE is built on a composable pipeline engine. Each step declares what it requires and what it produces:

```
AgentStep -> EvaluateStep -> ReflectStep -> UpdateStep -> DeduplicateStep
```

Use learning_tail() for the standard learning sequence, or compose custom pipelines:

```python
from ace import Pipeline, AgentStep, EvaluateStep, learning_tail

steps = [AgentStep(agent, skillbook), EvaluateStep(env)]
steps += learning_tail(reflector, skill_manager, skillbook)
pipeline = Pipeline(steps)
```

The pipeline engine (pipeline/) is framework-agnostic with requires/provides contracts, immutable context, and error isolation. See Pipeline Design and Architecture.
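A requires/provides contract means each step declares the context keys it reads and writes, so the runner can reject an ill-formed pipeline before executing anything. A minimal stand-alone sketch of that pattern (the `Step` and `run_pipeline` names here are simplified stand-ins, not ACE's actual pipeline API):

```python
# Minimal sketch of a requires/provides pipeline contract. Simplified
# stand-in for ACE's pipeline engine, not its real API.

class Step:
    requires: set[str] = set()
    provides: set[str] = set()

    def run(self, ctx: dict) -> dict:
        raise NotImplementedError

class Double(Step):
    requires = {"x"}
    provides = {"y"}

    def run(self, ctx):
        return {**ctx, "y": ctx["x"] * 2}   # immutable-style update

def run_pipeline(steps, ctx):
    # Static contract check before any step executes
    available = set(ctx)
    for step in steps:
        missing = step.requires - available
        if missing:
            raise ValueError(f"{type(step).__name__} missing {missing}")
        available |= step.provides
    # All contracts satisfied: execute in order
    for step in steps:
        ctx = step.run(ctx)
    return ctx

result = run_pipeline([Double()], {"x": 21})
```

Because the check runs over declared contracts rather than executed steps, a missing dependency fails fast instead of surfacing mid-run.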


CLI

| Command | Description |
| --- | --- |
| `ace setup` | Interactive setup: model selection, API keys, connection validation |
| `ace models <query>` | Search available models with pricing |
| `ace validate <model>` | Test a model connection |
| `ace config` | Show current configuration |
| `kayba` | Cloud CLI: upload traces, fetch insights, manage prompts |
| `ace-mcp` | MCP server for IDE integration |

Documentation


Contributing

Contributions are welcome. See Contributing Guidelines.


Built by Kayba and the open-source community.