April 24, 2026
# AReaL: A Large-Scale Asynchronous Reinforcement Learning System
| Paper | Documentation | 中文文档 (Chinese Docs) | Ask DeepWiki | 🤗 Models & Data | WeChat (微信) Group |
AReaL is a reinforcement learning (RL) infrastructure designed to bridge foundation model training with modern agent-based applications. It was originally developed by researchers and engineers from Tsinghua IIIS and the AReaL Team at Ant Group.
Built on a fully asynchronous RL training paradigm, AReaL is optimized for efficiency and scalability, making it particularly well-suited for training large-scale reasoning and agentic models.
AReaL's mission is to make building AI agents accessible, efficient, and cost-effective for a broad community of developers and researchers.
Like milk tea - customizable, scalable, and enjoyable - we hope AReaL brings both flexibility and delight to your AI development experience. Cheers!
## AReaL Highlights
- ⚡ Flexibility: Seamless customization for agentic RL and online RL training of black-box agent applications by simply replacing the `base_url`.
- 🚀 Scalability: Stable, fully asynchronous RL training with industry-leading speed.
- ✨ Cutting-Edge Performance: State-of-the-art math, coding, search, and customer service agents.
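To make the `base_url` swap concrete, here is a minimal sketch of the idea, assuming an OpenAI-compatible chat endpoint. The function name, URL, and payload below are illustrative placeholders, not AReaL APIs:

```python
import json

# Illustrative only: if an agent's single integration point is `base_url`,
# redirecting it to AReaL's RL service (placeholder URL below) routes the
# agent's generations through the trainer without touching agent code.
def build_chat_request(base_url: str, model: str, user_msg: str):
    # Standard OpenAI-compatible chat completions path.
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    })
    return url, body

# Swapping the endpoint is the whole "integration":
url, body = build_chat_request("http://localhost:8000/v1", "my-model", "hi")
print(url)  # http://localhost:8000/v1/chat/completions
```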
## 📰 News
[2026/04/23] 🎉 We're excited to release our integration with Scaffoldings for agentic RL training - now live in our examples! Huge shoutout to @narutolhy and @WeiHaocheng for making this happen 🙌. Scaffoldings' modular design thoroughly decouples agent execution, reward calculation, and trajectory acquisition, so developers can reuse existing modules when implementing an agentic RL method and focus on their own innovations.
[2026/04/18] We are thrilled to announce that AReaL's first Community Biweekly Meeting was successfully held! Thank you to everyone who joined us. Meeting materials are now available here. Our next meeting is scheduled for 2026/05/01 and will also be conducted in Chinese; English-language meetings will be scheduled in the future. We warmly welcome everyone to participate! See Community for more details.
[2026/03/02] We provide a complete example of training your own OpenClaw agent by simply replacing the base_url and api_key with AReaL's RL service - no complicated dependencies, no code changes, and it works with any agentic runtime!
### Previous Releases
[2026/02/06] We are delighted to introduce AReaL-SEA, a self-evolving data synthesis engine. Combined with RL training on AReaL, the 235B MoE model surpasses GPT-5 and achieves performance comparable to Gemini 3.0 Pro on -bench! Check out the paper, model, data, and code.
[2026/01/15] Congrats to our friends at CAMEL-AI for open-sourcing SETA, their terminal agent RL project trained with AReaL! Check out their training workflow and the announcement on X.
[2026/01/01] Happy New Year! Thanks to the outstanding contribution from @HwVanICI, we are excited to officially announce stable support for AReaL training on Ascend NPU devices! The code is actively maintained and continuously updated in the ascend branch. Check out our documentation to get started, and feel free to report any issues!
[2025/08/30] Introducing ASearcher, a state-of-the-art search agent built with AReaL's end-to-end asynchronous RL training. Check out the paper and the open-source repository!
[2025/07/31] (AReaL-lite) We introduce AReaL-lite, a lightweight version of AReaL designed specifically for AI researchers and rapid prototyping. AReaL-lite features an algorithm-first API design that prioritizes ease of use and algorithm development, while natively supporting fully asynchronous agentic RL. With 80% fewer lines of code, AReaL-lite maintains 90% of AReaL's performance and core functionality. Check out our AReaL-lite design documentation and the quickstart guide to begin your journey with AReaL-lite!
[2025/06/03] (v0.3, boba²) We release boba² (double-boba) for fully asynchronous RL training, which achieves a 2.77× speedup while delivering comparable or superior training performance compared to synchronous systems. Furthermore, asynchronous RL significantly simplifies multi-turn agentic RL training setup! Check out our v0.3 overview blog and the research paper.
[2025/03/31] (v0.2, boba) Introducing our milestone release: boba! Please call it A-ReaL-boba! This release features significantly faster training with SGLang support and state-of-the-art 7B and 32B models for mathematical reasoning. Check out our v0.2 technical blog.
[2025/02/24] (v0.1) Our initial release includes reproducible results for 1.5B and 7B Large Reasoning Models (LRMs). Check out our v0.1 technical blog.
## 🚀 Getting Started
First, install the package:

```bash
git clone https://github.com/inclusionAI/AReaL
cd AReaL
pip install uv
# Install the flash-attn pre-built wheel first to avoid compiling from source
# (pick the wheel matching your Python version; see https://github.com/mjun0812/flash-attention-prebuild-wheels/releases)
uv pip install "https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.16/flash_attn-2.8.3+cu128torch2.9-cp312-cp312-linux_x86_64.whl"
uv sync --extra cuda  # installs training packages + SGLang (default inference backend)
# For vLLM instead: cp pyproject.vllm.toml pyproject.toml && cp uv.vllm.lock uv.lock && uv sync --extra cuda
```
Our training scripts automatically download the required dataset (openai/gsm8k) and model (Qwen/Qwen2-1.5B-Instruct). To run on a single node:
```bash
python3 examples/math/gsm8k_rl.py --config examples/math/gsm8k_grpo.yaml scheduler.type=local
```
If you prefer to run experiments on a Ray cluster, update paths in the YAML file to point to your shared storage, and run:
```bash
python3 examples/math/gsm8k_rl.py --config examples/math/gsm8k_grpo.yaml \
    cluster.n_nodes=2 cluster.n_gpus_per_node=8 \
    cluster.fileroot=/path/to/nfs \
    scheduler.type=ray
```
For comprehensive setup instructions, see our quickstart guide.
## 📖 Examples
### Math & Reasoning
| Task | Description | Performance |
|---|---|---|
| Math | GSM8K math reasoning with GRPO, PPO, DAPO, REINFORCE, RLOO, LitePPO, DR-GRPO, GSPO, and more | - |
| Multi-Turn Math | Multi-turn math agent with reward discounting across turns | Training Curve |
| LoRA Math | Parameter-efficient math training with LoRA (SGLang/vLLM backends) | - |
| Countdown | Countdown numbers game with custom rewards | Training Curve |
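The reward discounting across turns mentioned for the multi-turn math example can be sketched as a standard discounted-return computation; the function name and `gamma` value here are illustrative, not AReaL's API:

```python
# Sketch: each turn t receives its own reward plus the discounted rewards of
# all later turns, computed in one backward pass over the trajectory.
def discounted_returns(turn_rewards, gamma=0.9):
    returns, running = [], 0.0
    for r in reversed(turn_rewards):
        running = r + gamma * running  # R_t = r_t + gamma * R_{t+1}
        returns.append(running)
    return list(reversed(returns))

# Only the final turn is rewarded; earlier turns get discounted credit.
print(discounted_returns([0.0, 0.0, 1.0], gamma=0.5))  # [0.25, 0.5, 1.0]
```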
### Agentic RL
| Task | Description | Performance |
|---|---|---|
| General Agent | General agentic training with any agentic framework | Guide |
| Tau2 Customer Service | Customer service agent on Tau2-Bench (retail, airline, telecom) | Paper |
| Search Agent | End-to-end search agent with Tongyi-DeepResearch workflow | Training Curve |
| Tool-Integrated Reasoning | Multi-turn tool calling during reasoning (Python executor, calculator) | Training Curve |
| OpenAI Agents Integration | Integration with OpenAI Agents SDK for agentic workflows | - |
| CAMEL-AI Integration | Integration with CAMEL-AI framework for agentic RL | - |
### Vision-Language Models
| Task | Description | Performance |
|---|---|---|
| VLM | Geometry3K and CLEVR Count 70K visual reasoning with GRPO | - |
| VLM on NPU | VLM training on Huawei NPU hardware | Benchmark Results |
### Alignment & Infrastructure
| Task | Description | Performance |
|---|---|---|
| RLHF Reward Modeling | Bradley-Terry reward modeling on Anthropic HH-RLHF | Training Curve |
| SkyPilot Deployment | Cloud deployment with SkyPilot (GCP, AWS, Kubernetes) | Screenshots |
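For reference, Bradley-Terry reward modeling reduces to a pairwise logistic loss, `-log sigmoid(r_chosen - r_rejected)`. This standalone sketch uses scalar rewards in place of the learned reward head used in the actual example:

```python
import math

# Bradley-Terry pairwise loss: penalizes the model when the "chosen" response
# does not outscore the "rejected" one. Pure-Python sketch, not AReaL's API.
def bt_loss(r_chosen: float, r_rejected: float) -> float:
    sigmoid = 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))
    return -math.log(sigmoid)

print(round(bt_loss(1.0, 1.0), 4))  # 0.6931 (= log 2 when the rewards tie)
```

A larger margin in favor of the chosen response drives the loss toward zero.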
## 🧩 Support Matrix
### 🧠 Algorithms
All RL algorithms support both asynchronous and synchronous training; set `max_head_offpolicyness=0` for the synchronous version. See the Asynchronous RL Guide.
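For instance, toggling to synchronous training is just a config override on the example launch command. Only `max_head_offpolicyness` itself is documented here; its exact config path (`actor.max_head_offpolicyness`) is an assumption for illustration:

```bash
# Hypothetical sketch: run the GSM8K example fully synchronously by capping
# off-policyness at zero (config path is assumed, not confirmed).
python3 examples/math/gsm8k_rl.py --config examples/math/gsm8k_grpo.yaml \
    scheduler.type=local actor.max_head_offpolicyness=0
```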
| Algorithm | Documentation | Paper | Configuration |
|---|---|---|---|
| GRPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| GSPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| PPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| DAPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| LitePPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| Dr.GRPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| REINFORCE++ | - | 📄 Paper | 📝 GSM8K Example |
| RLOO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| SAPO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| M2PO | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
| DPO | 📖 Docs | 📄 Paper | 📝 HH-RLHF Example |
| RLHF Reward Modeling | - | - | 📝 RLHF Example |
| SFT | - | - | 📝 GSM8K Example |
| Distillation | 📖 Docs | 📄 Paper | 📝 GSM8K Example |
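As a reference point for the table above, the core of GRPO's group-relative advantage (as described in the GRPO paper) fits in a few lines: sample a group of responses per prompt, score them, and normalize each reward by the group's mean and standard deviation. The helper below is an illustrative sketch, not AReaL's implementation:

```python
import statistics

# GRPO-style advantages: rewards normalized within each sampled group,
# so no learned value function is needed.
def grpo_advantages(group_rewards):
    mu = statistics.fmean(group_rewards)
    sd = statistics.pstdev(group_rewards) or 1.0  # guard against zero std
    return [(r - mu) / sd for r in group_rewards]

print(grpo_advantages([0.0, 1.0]))  # [-1.0, 1.0]
```

Dr.GRPO, also listed above, notably drops the standard-deviation normalization.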
### Models
| Model Family | Megatron | PyTorch FSDP | PyTorch Archon | Notes |
|---|---|---|---|---|
| Qwen2/3 | ✅ | ✅ | ✅ | - |
| Qwen3-MoE | ✅ | ✅ | ✅ | - |
| Qwen2.5-VL | ✅ | ✅ | ✅ | Vision-language model |
| Qwen3-VL | ✅ | ✅ | ✅ | Vision-language model |
| Gemma 3 | ✅ | ✅ | ✅ | Vision-language model |
| Other Hugging Face LLM | ✅ | ✅ | ✅ | Compatibility depends on the installed `transformers` version |
Check the AI Coding Assistant Guide and Archon Reference for how to integrate new models into AReaL.
### Training Backends
| Backend | DP | Tensor Parallel | Sequence Parallel within TP | Context Parallel | Pipeline Parallel | Expert Parallel | 1D Sequence Packing | LoRA |
|---|---|---|---|---|---|---|---|---|
| Megatron | ✅ (ZeRO-1) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ (with vLLM inference backend) |
| PyTorch FSDP | ✅ (FSDP2) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| PyTorch Archon | ✅ (FSDP2) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
### Inference Backends
| Backend | Tensor Parallel | Context Parallel | Pipeline Parallel | Data Parallel Attention | Expert Parallel |
|---|---|---|---|---|---|
| vLLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| SGLang | ✅ | ✅ | ✅ | ✅ | ✅ |
## 📚 Resources
- Tutorial
- Code Walkthrough
- Best Practices
  - Improving Algorithm Performance
  - Agent Workflow Best Practices
  - Debugging
  - Handling OOM Issues
  - Performance Profiling
- Customization
  - Algorithms
- Reference
  - CLI Configurations
  - LoRA RL
  - Checkpointing
  - Metrics Tracking
  - Allocation Mode
  - Rollout Workflow
  - Agent Workflow
  - AI-Assisted Development
## 🤝 Contributing
We warmly welcome contributions from the community! Whether you're fixing bugs, adding features, improving documentation, or helping others, your contribution is valued. Please check our Contributing Guide for detailed information.
```bash
# Fork and clone the repository
git clone https://github.com/YOUR-USERNAME/AReaL
cd AReaL

# Install uv and sync dependencies
pip install uv
# Install the flash-attn pre-built wheel to avoid compiling from source
uv pip install "https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.16/flash_attn-2.8.3+cu128torch2.9-cp312-cp312-linux_x86_64.whl"
# Use `--extra cuda` on Linux with CUDA (installs training packages + SGLang)
uv sync --extra cuda --group dev
# For vLLM instead:
# cp pyproject.vllm.toml pyproject.toml && cp uv.vllm.lock uv.lock && uv sync --extra cuda --group dev
# Or without CUDA support:
# uv sync --group dev

# Set up pre-commit hooks (formatting, linting, commit message checks)
pre-commit install --install-hooks

# Make changes
git checkout -b feat/gpt-o5
git add .
# `git commit` will automatically check your files and commit messages
git commit -m "feat: implement gpt-o5 training loop"
git push
```
## 🗺️ Future Roadmap
AReaL is under active development, with minor releases planned weekly and major releases monthly. We warmly welcome community engagement and contributions. We are also actively hiring interns and full-time employees, with open positions in both the US and China.
## 🙏 Acknowledgments
We gratefully acknowledge that major contributors are from the AReaL Team at the Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University and Ant Group.
We have also received invaluable assistance from the following groups (listed alphabetically):
- The Data Intelligence Lab at Ant Research for their data support
- @HwVanICI for support on vLLM, LoRA, NPU integration, and more
- The Relaxed System Lab at HKUST for seamless collaboration on numerous system-related aspects
- The SGLang team for supporting custom weight-update features and for their contributions during AReaL-lite development
- The Super Computing Technology (SCT) team at Ant Group for their expertise in large-scale cluster operations and maintenance
- Special thanks to @Lyken17 for providing valuable suggestions throughout the API design process
We also deeply appreciate all pioneering work from the community, particularly the ReaLHF project from OpenPsi Inc. and other outstanding projects, including but not limited to DeepScaleR, Open-Reasoner-Zero, OpenRLHF, VeRL, SGLang, QwQ, Light-R1, and DAPO.
## 📄 License
This project is licensed under the Apache License 2.0.
## 📝 Citation
```bibtex
@inproceedings{mei2025real,
  author    = {Mei, Zhiyu and Fu, Wei and Li, Kaiwei and Wang, Guangju and Zhang, Huanchen and Wu, Yi},
  title     = {ReaL: Efficient RLHF Training of Large Language Models with Parameter Reallocation},
  booktitle = {Proceedings of the Eighth Conference on Machine Learning and Systems, MLSys 2025, Santa Clara, CA, USA, May 12-15, 2025},
  publisher = {mlsys.org},
  year      = {2025},
}

@misc{fu2025areal,
  title         = {AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning},
  author        = {Wei Fu and Jiaxuan Gao and Xujie Shen and Chen Zhu and Zhiyu Mei and Chuyi He and Shusheng Xu and Guo Wei and Jun Mei and Jiashu Wang and Tongkai Yang and Binhang Yuan and Yi Wu},
  year          = {2025},
  eprint        = {2505.24298},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2505.24298},
}
```