A-Evolve 🧬: The Universal Infrastructure for Self-Improving Agents

May 5, 2026 · View on GitHub

GitHub stars License: MIT Python 3.11+ arXiv

The PyTorch for Agentic AI. A-Evolve is an open-source infrastructure that evolves any agent, across any domain, using any evolution algorithm, with zero human intervention.

Quick Start | News | Benchmark Highlights | Architecture & Design | Contribution

A-Evolve Teaser


What Does A-Evolve Do?

You provide a Base Agent. A-Evolve returns a SOTA Agent. 3 lines of code. 0 hours of manual harness engineering. One infra, any domain, any evolution algorithm.

import agent_evolve as ae

evolver = ae.Evolver(agent="./my_agent", benchmark="swe-verified")
results = evolver.run(cycles=10)

Benchmark Highlights

By applying our open-source reference evolution algorithms to a base Claude Opus-4.6 model with zero manual harness engineering, A-Evolve pushed agents into top-tier performance across a diverse suite of benchmarks:

  • 🟢 MCP-Atlas: 🥇 #1, Baseline → 79.4% (+3.4pp)
  • 🔵 SWE-bench Verified: ~#5, Baseline → 76.8% (+2.6pp)
  • 🟣 Terminal-Bench 2.0: ~#7, Baseline → 76.5% (+13.0pp)
  • 🟡 SkillsBench: #2, Baseline → 34.9% (+15.2pp)
  • 🟢 ARC-AGI: 🥇 #2 Community Leaderboard, Baseline → 12.3% (+2.2pp)
  • 🔵 OSWorld: --, Baseline → 69.6% (+3.9pp)
  • 🟣 CL Bench: To Be Announced
  • 🟡 WebArena-infinity: To Be Announced

A-Evolve Benchmarks

All results achieved with a single Claude Opus-4.6 base model, evolved using A-Evolve's sample algorithms. 0 hours of human harness engineering. Data checked March 2026.

News

  • 05/04 New Benchmark Results: added results on ARC-AGI-3, evolving a multi-agent system to solve difficult tasks like ARC-AGI-3 and improving performance from 10% to 12%.
  • 04/20 New Algorithm Drop: added the evolutionary algorithm GEPA, submitted by the GEPA team.
  • 04/10 Integration: A-Evolve is officially integrated into the Orch-Research Skills Library, alongside AutoResearch, OpenRLHF, DeepSpeed, and SGLang.
  • 04/07 New Agent Drop: added the recently leaked public ClawCode (Claude Code), took the evolution harness + skills learned on Terminal-Bench 2.0 (TB2), and transplanted them directly onto ClawCode. Result on TB2: baseline 67.8% → 72.9% (+5.1pp uplift).
  • 04/03 New Algorithm Drop: added the evolutionary algorithm Meta-Harness.
  • 03/30 Integration: A-Evolve is officially integrated into AutoResearchClaw.
  • 03/25 🚀 Open-sourced A-Evolve, the universal infrastructure for developing and testing evolution algorithms.
  • 03/25 📊 Open-sourced 4 evolution algorithms developed with A-Evolve, achieving SOTA (#1, ~#5, ~#7, #2) on MCP-Atlas, SWE-bench Verified, Terminal-Bench 2.0, and SkillsBench.
  • 02/17 📄 Released the official implementation of Position: Agentic Evolution is the Path to Evolving LLMs (arXiv 2602.00359).

We are evolving fast! Support our research by leaving a ⭐.

What Does an Evolved Agent Look Like?

A-Evolve mutates real files in the workspace. Here's a before/after from our MCP-Atlas evolution:

Before (Seed Workspace):

mcp_agent/
├── manifest.yaml
├── prompts/system.md      ← 20 lines, generic
├── skills/                ← empty
└── memory/                ← empty

After (Evolved: 79.4% on MCP-Atlas):

mcp_agent/
├── manifest.yaml
├── prompts/system.md      ← 20 lines, unchanged
├── skills/
│   ├── entity-verification/SKILL.md   ← NEW
│   ├── search-iteration/SKILL.md      ← NEW
│   ├── multi-requirement/SKILL.md     ← NEW
│   ├── code-execution/SKILL.md        ← NEW
│   └── conditional-handler/SKILL.md   ← NEW
└── memory/
    └── episodic.jsonl     ← 6 entries

5 targeted skills outperformed 10 generic ones. Every mutation is git-tagged (evo-1, evo-2, …) for full reproducibility.


Quick Start

1. Install

# PyPI (recommended)
pip install a-evolve              # core
pip install a-evolve[anthropic]   # Claude support
pip install a-evolve[mcp]         # MCP-Atlas benchmark
pip install a-evolve[swe]         # SWE-bench benchmark
pip install a-evolve[all]         # everything

# From source (for development)
git clone https://github.com/A-EVO-Lab/a-evolve.git && cd a-evolve
pip install -e ".[all,dev]"

2. Evolve β€” 3 Lines of Code

import agent_evolve as ae

evolver = ae.Evolver(
    agent="swe-verified",           # built-in seed workspace (or path to yours)
    benchmark="swe-verified",       # built-in benchmark adapter
)
results = evolver.run(cycles=10)

print(f"Final score: {results.final_score:.3f}")
print(f"Converged:   {results.converged}")

A-Evolve ships with built-in seed workspaces (swe, mcp, terminal, skillbench) and benchmark adapters (swe-verified, mcp-atlas, terminal-bench 2.0, skill-bench). Point agent= at any of them, or at your own workspace directory.

3. Bring Your Own Agent (BYOA)

To make any agent evolvable, implement one method, solve():

from agent_evolve.protocol.base_agent import BaseAgent
from agent_evolve.types import Task, Trajectory

class MyAgent(BaseAgent):
    def solve(self, task: Task) -> Trajectory:
        return Trajectory(task_id=task.id, output="result")

Then evolve it:

evolver = ae.Evolver(agent=MyAgent("./my_workspace"), benchmark="mcp-atlas")
results = evolver.run(cycles=10)

Your agent's evolvable state (prompts, skills, memory) lives as a standard directory, the Agent Workspace. A-Evolve mutates these files; your agent reloads. See Architecture & Design for the full picture.

For benchmark-specific walkthroughs, see SWE-bench Demo Guide, MCP-Atlas Demo Guide, and SkillBench Setup Guide.


Architecture & Design

A-Evolve Framework

The Agent Workspace: A File System Contract

A-Evolve's core insight: all evolvable agent state lives on the file system as a standard directory structure. This lets the evolution engine mutate any agent via LLM-driven file operations, without knowing the agent's internals.

my_agent/
├── manifest.yaml          # identity, entrypoint, evolvable layers
├── prompts/system.md      # system prompt
├── skills/                # SKILL.md files (dynamic skill library)
├── tools/                 # tool configurations
└── memory/                # episodic + semantic memory (JSONL)

The evolution engine reads these files, analyzes performance logs, and writes mutations back. The agent reloads. That's the entire contract.
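As a rough sketch of what that contract amounts to in code, assuming nothing about A-Evolve's internals (load_workspace and apply_mutation are hypothetical helper names, not part of the A-Evolve API):

```python
from pathlib import Path


def load_workspace(root: str) -> dict[str, str]:
    """Read every file in the workspace into a {relative_path: text} mapping."""
    root_p = Path(root)
    return {
        str(p.relative_to(root_p)): p.read_text()
        for p in root_p.rglob("*")
        if p.is_file()
    }


def apply_mutation(root: str, rel_path: str, new_text: str) -> None:
    """Write a mutated file back; the agent picks it up on its next reload."""
    target = Path(root) / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(new_text)
```

Any agent that keeps its prompts, skills, and memory as plain files is already compatible with this style of mutation, which is the whole point of the contract.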

The Evolution Loop

Every cycle follows five phases:

┌─────────┐    ┌─────────┐    ┌─────────┐    ┌──────┐    ┌────────┐
│  Solve  │───▶│ Observe │───▶│ Evolve  │───▶│ Gate │───▶│ Reload │
└─────────┘    └─────────┘    └─────────┘    └──────┘    └────────┘
  1. Solve: the agent processes a batch of tasks (black-box execution).
  2. Observe: collect trajectories + benchmark feedback into structured logs.
  3. Evolve: the evolution engine analyzes observations and mutates workspace files (prompts, skills, memory).
  4. Gate: validate mutations on holdout tasks. Regressed mutations are rolled back via git.
  5. Reload: the agent reloads from the (possibly rolled-back) workspace.

The loop converges when EGL (Evolutionary Generality Loss) stabilizes or max_cycles is reached. Every accepted mutation is git-tagged (evo-1, evo-2, …), providing a full audit trail.
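A toy, non-LLM analogue of the cycle may help make the gating behavior concrete. This is illustrative only; the real engine mutates workspace files, not a scalar parameter. Because Gate rejects any regressing mutation, the accepted score can never decrease:

```python
import random


def evolve(score_fn, params, cycles=10, seed=0):
    """Toy Solve -> Observe -> Evolve -> Gate -> Reload loop over a scalar."""
    rng = random.Random(seed)
    best = score_fn(params)                      # Solve + Observe the baseline
    for _ in range(cycles):
        candidate = params + rng.uniform(-1, 1)  # Evolve: propose a mutation
        new_score = score_fn(candidate)          # Gate: validate the candidate
        if new_score >= best:                    # accept, else roll back
            params, best = candidate, new_score  # Reload from accepted state
    return params, best
```

With, say, score_fn = lambda x: -abs(x - 3.0) and a start of 0.0, the final score can only match or beat the baseline of -3.0, which is exactly the guarantee the Gate phase provides.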

Built-in Adapters

A-Evolve ships with ready-to-use benchmark adapters and seed workspaces:

| Adapter | Domain | Seed Workspace | Best Result |
| --- | --- | --- | --- |
| swe-verified | Real-world GitHub issues (Python repos) | seed_workspaces/swe/ | 76.8% (~#5) |
| mcp-atlas | Tool-calling via MCP (16+ servers) | seed_workspaces/mcp/ | 79.4% (🥇 #1) |
| terminal-bench | Terminal/CLI ops in Docker | seed_workspaces/terminal/ | 76.5% (~#7) |
| skill-bench | Agentic skill discovery | seed_workspaces/skillbench/ | 34.9% (~#2) |
| cl-bench | Continual-learning rubric evaluation | (none) | 38.0% |

Pluggability: Bring Your Own Everything

A-Evolve is a framework, not a standalone agent. Every axis is pluggable:

| Axis | Interface | You Provide | Built-in Examples |
| --- | --- | --- | --- |
| Agent (BYOA) | BaseAgent.solve() | Any agent architecture: ReAct, Plan-and-Solve, custom | SweAgent, McpAgent |
| Benchmark (BYOE) | BenchmarkAdapter.get_tasks() / .evaluate() | Any domain with task + evaluation signal | SWE-bench, MCP-Atlas, Terminal-Bench 2.0, SkillsBench, CL-bench |
| Algorithm (BYO-Algo) | EvolutionEngine.step() | Any evolution strategy | AEvolveEngine (LLM-driven mutation) |
| LLM Provider | LLMProvider.complete() | Any model API | Anthropic, OpenAI, AWS Bedrock |
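For testing your own evolution logic without network calls, a deterministic stub on the provider axis can be handy. The signature below is a guess at the complete() shape, not the documented interface:

```python
class EchoProvider:
    """Deterministic stand-in for an LLM provider (hypothetical signature)."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Echo a truncated prompt so tests are fast and reproducible.
        return "ECHO: " + prompt[:max_tokens]
```

Swapping such a stub in for a real provider keeps unit tests of mutation and gating logic fully offline.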

Built-in Evolution Algorithms

A-Evolve ships with 4 reference evolution algorithms, each targeting different domains and strategies:

| Algorithm | Strategy | Best For | Docs |
| --- | --- | --- | --- |
| adaptive_evolve | Per-claim feedback analysis + meta-learning | MCP-Atlas (🥇 #1, 79.4%) | Guide |
| adaptive_skill | LLM-driven workspace mutation with bash tool access | Terminal-Bench 2.0 (~#7, 76.5%) | Guide |
| skillforge | LLM-driven workspace mutation with EGL gating | SkillsBench (#2, 34.9%) | Guide |
| guided_synth | Memory-first evolution + LLM-guided intervention synthesis | General-purpose, SWE-bench (~#5, 76.8%) | Guide |

Plugging in a custom evolution algorithm

Each algorithm lives in its own directory under algorithms/. Implement a single method:

from agent_evolve.engine.base import EvolutionEngine
from agent_evolve.types import StepResult

class MyEvolutionEngine(EvolutionEngine):
    def step(self, workspace, observations, history, trial) -> StepResult:
        # Analyze observations, mutate workspace files, optionally run trial tasks
        ...
        return StepResult(accepted=True, score=new_score)

Then pass it to the Evolver:

evolver = ae.Evolver(
    agent="swe-verified",
    benchmark="swe-verified",
    engine=MyEvolutionEngine(config),
)

The engine has full access to shared primitives: TrialRunner (on-demand validation), EvolutionHistory (observation + version queries), and VersionControl (git-based rollback). It is never forced to use them. Minimal contract, maximum freedom.
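As one concrete, hypothetical strategy a custom engine's step could implement, the function below distills failed tasks into episodic memory entries. The observation shape ({"task_id", "passed", "feedback"}) is assumed for illustration and is not A-Evolve's documented schema:

```python
import json
from pathlib import Path


def distill_failures(workspace: str, observations: list[dict]) -> bool:
    """Append one memory entry per failed task; return whether we mutated."""
    failures = [o for o in observations if not o["passed"]]
    if not failures:
        return False  # nothing to learn this cycle
    memory = Path(workspace) / "memory" / "episodic.jsonl"
    memory.parent.mkdir(parents=True, exist_ok=True)
    with memory.open("a") as f:
        for o in failures:
            entry = {"task": o["task_id"], "lesson": o["feedback"]}
            f.write(json.dumps(entry) + "\n")
    return True
```

A real engine would pair a mutation like this with the gating step, keeping the entries only if the holdout score does not regress.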


Community & Contributing

A-Evolve is built for the research community. We welcome contributions across every axis of the framework.

For Algorithm Researchers

If you work in LLM self-optimization, reinforcement learning, or agent architectures, implement the EvolutionEngine interface and your algorithm instantly gains access to:

  • Diverse environments (SWE-bench, MCP-Atlas, Terminal-Bench 2.0, SkillsBench, and more).
  • Standardized agent workspace representations.
  • Rigorous evaluation, gating, and logging infrastructure.

Drop your algorithm into agent_evolve/algorithms/your_algo/ and open a PR.

For Benchmark Authors

Implement BenchmarkAdapter to plug any new evaluation domain into A-Evolve. The interface is two methods: get_tasks() and evaluate().
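A toy adapter, sketched without importing A-Evolve (the task dict shape and the {task_id: output} scoring convention here are assumptions, not the real interface):

```python
class ArithmeticBenchmark:
    """Minimal get_tasks()/evaluate() pair: exact-match scoring on arithmetic."""

    def get_tasks(self) -> list[dict]:
        return [
            {"id": "add-1", "prompt": "What is 2 + 2?", "answer": "4"},
            {"id": "mul-1", "prompt": "What is 3 * 5?", "answer": "15"},
        ]

    def evaluate(self, outputs: dict[str, str]) -> float:
        """Score a {task_id: output} mapping as the fraction of exact matches."""
        answers = {t["id"]: t["answer"] for t in self.get_tasks()}
        correct = sum(1 for tid, out in outputs.items() if answers.get(tid) == out)
        return correct / len(answers)
```

Anything that can enumerate tasks and score outputs, from unit-test suites to rubric graders, fits this two-method shape.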

Get Involved

  • ⭐ Star this repo to support our research; we are evolving fast.
  • 🐛 Open an issue to report bugs or request features.
  • 🔀 Submit a PR: new evolution algorithms, benchmark adapters, agent implementations, and documentation improvements are all welcome.
  • 💬 Join our Discord to discuss research directions, share results, and collaborate.

Citation

If you use A-Evolve in your research, please cite our position paper:

@article{lin2026position,
  title={Position: Agentic Evolution is the Path to Evolving LLMs},
  author={Lin, Minhua and Lu, Hanqing and Shi, Zhan and He, Bing and Mao, Rui and Zhang, Zhiwei and Wu, Zongyu and Tang, Xianfeng and Liu, Hui and Dai, Zhenwei and others},
  journal={arXiv preprint arXiv:2602.00359},
  year={2026}
}

License

MIT