A-Evolve 🧬: The Universal Infrastructure for Self-Improving Agents
May 5, 2026 · View on GitHub
The PyTorch for Agentic AI. A-Evolve is an open-source infrastructure that evolves any agent, across any domain, using any evolution algorithm, with zero human intervention.
Quick Start | News | Benchmark Highlights | Architecture & Design | Contributing

What Does A-Evolve Do?
You provide a Base Agent. A-Evolve returns a SOTA Agent. 3 lines of code. 0 hours of manual harness engineering. One infra, any domain, any evolution algorithm.
import agent_evolve as ae
evolver = ae.Evolver(agent="./my_agent", benchmark="swe-verified")
results = evolver.run(cycles=10)
Benchmark Highlights
By applying our open-source reference evolution algorithms to a base Claude Opus-4.6 model with zero manual harness engineering, A-Evolve pushed agents to top-tier performance across a diverse set of benchmarks:
| Benchmark | Rank | Result |
|---|---|---|
| 🟢 MCP-Atlas | 🥇 #1 | Baseline → 79.4% (+3.4pp) |
| 🔵 SWE-bench Verified | ~#5 | Baseline → 76.8% (+2.6pp) |
| 🟣 Terminal-Bench 2.0 | ~#7 | Baseline → 76.5% (+13.0pp) |
| 🟡 SkillsBench | #2 | Baseline → 34.9% (+15.2pp) |
| 🟢 ARC-AGI | 🥈 #2 (Community Leaderboard) | Baseline → 12.3% (+2.2pp) |
| 🔵 OSWorld | – | Baseline → 69.6% (+3.9pp) |
| 🟣 CL Bench | To Be Announced | To Be Announced |
| 🟡 WebArena-infinity | To Be Announced | To Be Announced |

All results were achieved with a single Claude Opus-4.6 base model, evolved using A-Evolve's sample algorithms, with 0 hours of human harness engineering. Data last checked March 2026.
News
- 05/04 New Benchmark Results: A-Evolve added results on ARC-AGI-3, evolving a multi-agent system that solves difficult ARC-AGI-3 tasks more effectively, improving performance from 10% to 12%.
- 04/20 New Algorithm Drop: A-Evolve added the GEPA evolutionary algorithm, submitted by the GEPA team.
- 04/10 Integration: A-Evolve is officially integrated into the Orch-Research Skills Library, alongside AutoResearch, OpenRLHF, DeepSpeed, and SGLang.
- 04/07 New Agent Drop: We added the recently leaked public ClawCode (Claude Code), took the evolution harness + skills we learned on Terminal-Bench 2.0 (TB2), and transplanted them directly onto ClawCode. Result on TB2: baseline 67.8% → 72.9% (+5.1pp uplift).
- 04/03 New Algorithm Drop: A-Evolve added the Meta-Harness evolutionary algorithm.
- 03/30 Integration: A-Evolve is officially integrated into AutoResearchClaw.
- 03/25 🚀 Open-sourced A-Evolve, the universal infrastructure for developing and testing evolution algorithms.
- 03/25 🚀 Open-sourced 4 evolution algorithms developed with A-Evolve, achieving SOTA (#1, ~#5, ~#7, #2) on MCP-Atlas, SWE-bench Verified, Terminal-Bench 2.0, and SkillsBench.
- 02/17 🚀 Released the official implementation of Position: Agentic Evolution is the Path to Evolving LLMs (arXiv 2602.00359).
We are evolving fast! Support our research by leaving a ⭐.
What Does an Evolved Agent Look Like?
A-Evolve mutates real files in the workspace. Here's a before/after from our MCP-Atlas evolution:
| Before (Seed Workspace) | After (Evolved → 79.4% on MCP-Atlas) |
|---|---|
| *(workspace screenshot)* | *(workspace screenshot)* |
5 targeted skills outperformed 10 generic ones. Every mutation is git-tagged (evo-1, evo-2, …) for full reproducibility.
Quick Start
1. Install
# PyPI (recommended)
pip install a-evolve # core
pip install a-evolve[anthropic] # Claude support
pip install a-evolve[mcp] # MCP-Atlas benchmark
pip install a-evolve[swe] # SWE-bench benchmark
pip install a-evolve[all] # everything
# From source (for development)
git clone https://github.com/A-EVO-Lab/a-evolve.git && cd a-evolve
pip install -e ".[all,dev]"
2. Evolve in 3 Lines of Code
import agent_evolve as ae

evolver = ae.Evolver(
    agent="swe-verified",      # built-in seed workspace (or path to yours)
    benchmark="swe-verified",  # built-in benchmark adapter
)
results = evolver.run(cycles=10)
print(f"Final score: {results.final_score:.3f}")
print(f"Converged: {results.converged}")
A-Evolve ships with built-in seed workspaces (swe, mcp, terminal, skillbench) and benchmark adapters (swe-verified, mcp-atlas, terminal-bench, skill-bench). Point agent= at any of them, or at your own workspace directory.
3. Bring Your Own Agent (BYOA)
To make any agent evolvable, implement one method, solve():
from agent_evolve.protocol.base_agent import BaseAgent
from agent_evolve.types import Task, Trajectory

class MyAgent(BaseAgent):
    def solve(self, task: Task) -> Trajectory:
        return Trajectory(task_id=task.id, output="result")
Then evolve it:
evolver = ae.Evolver(agent=MyAgent("./my_workspace"), benchmark="mcp-atlas")
results = evolver.run(cycles=10)
Your agent's evolvable state (prompts, skills, memory) lives as a standard directory: the Agent Workspace. A-Evolve mutates these files; your agent reloads. See Architecture & Design for the full picture.
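To illustrate that contract, here is a minimal, standalone sketch of a workspace-driven agent. The Task and Trajectory dataclasses below are stand-ins for agent_evolve.types so the snippet runs on its own; inside a real A-Evolve install you would subclass BaseAgent and import the real types instead.

```python
from dataclasses import dataclass
from pathlib import Path

# Stand-ins for agent_evolve.types.Task / Trajectory so this sketch runs
# standalone; swap in the real imports inside an A-Evolve install.
@dataclass
class Task:
    id: str
    prompt: str

@dataclass
class Trajectory:
    task_id: str
    output: str

class WorkspaceAwareAgent:
    """Sketch of an agent whose behavior is driven by workspace files."""

    def __init__(self, workspace: str):
        self.workspace = Path(workspace)

    def solve(self, task: Task) -> Trajectory:
        # Re-read the evolvable system prompt on every call, so any mutation
        # A-Evolve writes to prompts/system.md takes effect after a reload
        # without changing agent code.
        prompt_file = self.workspace / "prompts" / "system.md"
        system_prompt = prompt_file.read_text() if prompt_file.exists() else ""
        # A real agent would call an LLM here; we concatenate for illustration.
        return Trajectory(task_id=task.id, output=f"{system_prompt}|{task.prompt}")
```

Because all behavior-shaping state is read from files at call time, the agent needs no special hooks to participate in evolution.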
For benchmark-specific walkthroughs, see SWE-bench Demo Guide, MCP-Atlas Demo Guide, and SkillBench Setup Guide.
Architecture & Design

The Agent Workspace: A File System Contract
A-Evolve's core insight: all evolvable agent state lives on the file system as a standard directory structure. This lets the evolution engine mutate any agent via LLM-driven file operations, without knowing the agent's internals.
my_agent/
├── manifest.yaml       # identity, entrypoint, evolvable layers
├── prompts/system.md   # system prompt
├── skills/             # SKILL.md files (dynamic skill library)
├── tools/              # tool configurations
└── memory/             # episodic + semantic memory (JSONL)
The evolution engine reads these files, analyzes performance logs, and writes mutations back. The agent reloads. That's the entire contract.
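The manifest schema is not spelled out in this README, so the following is only a hypothetical sketch of what manifest.yaml could contain, based on the tree comment above (identity, entrypoint, evolvable layers); all field names are illustrative, not the actual schema.

```yaml
# Hypothetical manifest.yaml sketch -- field names are illustrative
name: my_agent
entrypoint: agent.py:MyAgent
evolvable:            # layers the evolution engine may mutate
  - prompts/
  - skills/
  - memory/
frozen:               # layers held fixed during evolution
  - tools/
```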
The Evolution Loop
Every cycle follows five phases:
┌───────┐     ┌─────────┐     ┌────────┐     ┌──────┐     ┌────────┐
│ Solve │────▶│ Observe │────▶│ Evolve │────▶│ Gate │────▶│ Reload │
└───────┘     └─────────┘     └────────┘     └──────┘     └────────┘
- Solve: the agent processes a batch of tasks (black-box execution).
- Observe: collect trajectories + benchmark feedback into structured logs.
- Evolve: the evolution engine analyzes observations and mutates workspace files (prompts, skills, memory).
- Gate: validate mutations on holdout tasks; regressed mutations are rolled back via git.
- Reload: the agent reloads from the (possibly rolled-back) workspace.
The loop converges when EGL (Evolutionary Generality Loss) stabilizes or max_cycles is reached. Every accepted mutation is git-tagged (evo-1, evo-2, …), providing a full audit trail.
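The five phases above can be sketched in a few lines of standalone Python. This illustrates only the control flow, not the actual AEvolveEngine: the workspace is modeled as a plain dict, and the git snapshot/rollback is simulated with a copy.

```python
def run_cycle(agent, tasks, holdout, workspace, evolve_fn, evaluate_fn):
    """One Solve -> Observe -> Evolve -> Gate -> Reload cycle (illustrative)."""
    # Solve + Observe: run the task batch and collect trajectories.
    observations = [(task, agent.solve(task)) for task in tasks]

    # Snapshot before mutating (stand-in for a git commit / tag).
    baseline = evaluate_fn(agent, holdout)
    snapshot = dict(workspace)

    # Evolve: the engine mutates workspace files based on observations.
    evolve_fn(workspace, observations)

    # Gate: validate on holdout tasks; roll back a regressed mutation.
    agent.reload(workspace)
    score = evaluate_fn(agent, holdout)
    if score < baseline:
        workspace.clear()
        workspace.update(snapshot)  # stand-in for a git rollback

    # Reload: the agent picks up the (possibly rolled-back) workspace.
    agent.reload(workspace)
    return max(score, baseline)
```

The key property the sketch preserves is that a mutation only survives the cycle if it does not regress on the holdout set.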
Built-in Adapters
A-Evolve ships with ready-to-use benchmark adapters and seed workspaces:
| Adapter | Domain | Seed Workspace | Best Result |
|---|---|---|---|
| swe-verified | Real-world GitHub issues (Python repos) | seed_workspaces/swe/ | 76.8% (~#5) |
| mcp-atlas | Tool-calling via MCP (16+ servers) | seed_workspaces/mcp/ | 79.4% (🥇 #1) |
| terminal-bench | Terminal/CLI ops in Docker | seed_workspaces/terminal/ | 76.5% (~#7) |
| skill-bench | Agentic skill discovery | seed_workspaces/skillbench/ | 34.9% (#2) |
| cl-bench | Continual-learning rubric evaluation | – | 38.0% |
Pluggability: Bring Your Own Everything
A-Evolve is a framework, not a standalone agent. Every axis is pluggable:
| Axis | Interface | You Provide | Built-in Examples |
|---|---|---|---|
| Agent (BYOA) | BaseAgent.solve() | Any agent architecture (ReAct, Plan-and-Solve, custom) | SweAgent, McpAgent |
| Benchmark (BYOE) | BenchmarkAdapter.get_tasks() / .evaluate() | Any domain with task + evaluation signal | SWE-bench, MCP-Atlas, Terminal-Bench 2.0, SkillsBench, CL-bench |
| Algorithm (BYO-Algo) | EvolutionEngine.step() | Any evolution strategy | AEvolveEngine (LLM-driven mutation) |
| LLM Provider | LLMProvider.complete() | Any model API | Anthropic, OpenAI, AWS Bedrock |
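As an example of the LLM-provider axis, here is a hypothetical sketch. The real LLMProvider base class and the exact complete() signature are not shown in this README, so treat the class shape and any Evolver wiring as assumptions.

```python
class EchoProvider:
    """Hypothetical LLM provider sketch: any object exposing complete()
    that maps a prompt string to a completion string could fill this axis."""

    def complete(self, prompt: str, **kwargs) -> str:
        # A real provider would call a model API here (Anthropic, OpenAI,
        # AWS Bedrock, a local inference server, ...). We echo for testing.
        return f"[echo] {prompt}"
```

Swapping in a deterministic provider like this is handy for unit-testing an evolution algorithm without spending API tokens.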
Built-in Evolution Algorithms
A-Evolve ships with 4 reference evolution algorithms, each targeting different domains and strategies:
| Algorithm | Strategy | Best For | Docs |
|---|---|---|---|
| adaptive_evolve | Per-claim feedback analysis + meta-learning | MCP-Atlas (🥇 #1, 79.4%) | Guide |
| adaptive_skill | LLM-driven workspace mutation with bash tool access | Terminal-Bench 2.0 (~#7, 76.5%) | Guide |
| skillforge | LLM-driven workspace mutation with EGL gating | SkillsBench (#2, 34.9%) | Guide |
| guided_synth | Memory-first evolution + LLM-guided intervention synthesis | General-purpose; SWE-bench (~#5, 76.8%) | Guide |
Plugging in a custom evolution algorithm
Each algorithm lives in its own directory under algorithms/. Implement a single method:
from agent_evolve.engine.base import EvolutionEngine
from agent_evolve.types import StepResult

class MyEvolutionEngine(EvolutionEngine):
    def step(self, workspace, observations, history, trial) -> StepResult:
        # Analyze observations, mutate workspace files, optionally run trial tasks
        ...
        return StepResult(accepted=True, score=new_score)
Then pass it to the Evolver:
evolver = ae.Evolver(
    agent="swe-verified",
    benchmark="swe-verified",
    engine=MyEvolutionEngine(config),
)
The engine has full access to shared primitives: TrialRunner (on-demand validation), EvolutionHistory (observation + version queries), and VersionControl (git-based rollback). It is never forced to use them. Minimal contract, maximum freedom.
Community & Contributing
A-Evolve is built for the research community. We welcome contributions across every axis of the framework.
For Algorithm Researchers
If you work in LLM self-optimization, reinforcement learning, or agent architectures, implement the EvolutionEngine interface and your algorithm instantly gains access to:
- Diverse environments (SWE-bench, MCP-Atlas, Terminal-Bench 2.0, SkillsBench, and more).
- Standardized agent workspace representations.
- Rigorous evaluation, gating, and logging infrastructure.
Drop your algorithm into agent_evolve/algorithms/your_algo/ and open a PR.
For Benchmark Authors
Implement BenchmarkAdapter to plug any new evaluation domain into A-Evolve. The interface is two methods: get_tasks() and evaluate().
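For illustration, a toy adapter might look like this. It is a standalone sketch: the real BenchmarkAdapter base class, its Task type, and its scoring conventions may differ, so the dict-based tasks and 0/1 scores here are assumptions.

```python
class ToyBenchmarkAdapter:
    """Hypothetical adapter sketch: two methods, get_tasks() and evaluate()."""

    def get_tasks(self):
        # Return the benchmark's task set; here, toy arithmetic questions.
        return [
            {"id": "add-1", "question": "2+3", "answer": "5"},
            {"id": "add-2", "question": "10+4", "answer": "14"},
        ]

    def evaluate(self, task, trajectory_output: str) -> float:
        # Return a scalar evaluation signal for one solved task.
        return 1.0 if trajectory_output.strip() == task["answer"] else 0.0
```

Any domain that can produce tasks and a scalar signal per solved task can plug in this way.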
Get Involved
- ⭐ Star this repo to support our research; we are evolving fast.
- 🐛 Open an issue to report bugs or request features.
- 🔀 Submit a PR: new evolution algorithms, benchmark adapters, agent implementations, and documentation improvements are all welcome.
- 💬 Join our Discord to discuss research directions, share results, and collaborate.
Citation
If you use A-Evolve in your research, please cite our position paper:
@article{lin2026position,
  title={Position: Agentic Evolution is the Path to Evolving LLMs},
  author={Lin, Minhua and Lu, Hanqing and Shi, Zhan and He, Bing and Mao, Rui and Zhang, Zhiwei and Wu, Zongyu and Tang, Xianfeng and Liu, Hui and Dai, Zhenwei and others},
  journal={arXiv preprint arXiv:2602.00359},
  year={2026}
}