TreeRL: LLM Reinforcement Learning with On-Policy Tree Search

June 16, 2025

Implementation for the ACL'25 paper TreeRL: LLM Reinforcement Learning with On-Policy Tree Search, built on OpenRLHF.

TreeRL is a reinforcement learning framework that directly incorporates on-policy tree search into training, eliminating the need for a separately trained reward model while providing better exploration of the reasoning space through strategic branching from high-uncertainty steps. Experiments on math and code reasoning benchmarks demonstrate that TreeRL achieves consistently better performance than ChainRL under the same experimental setting.
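For intuition, here is a minimal Python sketch of the branching idea, assuming a made-up `Node` structure and `toy_policy` rather than the repository's actual EPTree code: roll out an initial chain, score each step by the policy's uncertainty, and preferentially resample new branches from the most uncertain steps.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """One reasoning step in the sampled search tree (illustrative only)."""
    text: str
    entropy: float                      # policy uncertainty at this step
    children: list = field(default_factory=list)

def sample_chain(prefix, policy, n_steps=4):
    """Roll out one chain of reasoning steps below `prefix`."""
    node = prefix
    for _ in range(n_steps):
        text, entropy = policy(node)
        child = Node(text, entropy)
        node.children.append(child)
        node = child

def collect_nodes(root):
    """Flatten the tree into a list of nodes (iterative DFS)."""
    nodes, stack = [root], list(root.children)
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    return nodes

def tree_search(policy, n_branches=4):
    """Build a tree by repeatedly branching from high-uncertainty steps."""
    root = Node("<question>", entropy=0.0)
    sample_chain(root, policy)                      # initial chain rollout
    for _ in range(n_branches):
        candidates = [n for n in collect_nodes(root) if n is not root]
        # Pick a branch point with probability proportional to entropy,
        # so uncertain steps are explored more but not exclusively.
        branch_point = random.choices(
            candidates, weights=[n.entropy for n in candidates])[0]
        sample_chain(branch_point, policy)          # resample from that step
    return root

def toy_policy(node):
    """Stand-in for an LLM: returns a next step and a fake entropy score."""
    return f"step after {node.text!r}", random.uniform(0.1, 1.0)

if __name__ == "__main__":
    random.seed(0)
    tree = tree_search(toy_policy)
    print(len(collect_nodes(tree)), "nodes sampled in the tree")
```

In the real system, the uncertainty score would come from the policy's token-level entropy during generation, and each branch would be a fresh LLM rollout from that prefix; the sketch only captures the branch-selection logic.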


Getting Started

Currently, we provide the following code for TreeRL; you can find more details in each directory.

  • scripts provides the scripts to start training; the RL implementation is in openrlhf.
  • EPTree: the tree sampling implementation.
  • The training data can be found in datasets. We use the data from T1 for SFT.

Experimental Results

| Model | MATH500 | Omni-MATH-500 | AIME 2024 | AMC | OlympiadBench | LiveCodeBench | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | 76.6 | 26.8 | 9.3 | 45.8 | 43.3 | 29.5 | 38.6 |
| Llama-3.1-8B-Instruct | 52.8 | 15.0 | 10.9 | 22.6 | 15.6 | 11.6 | 21.4 |
| Llama-3.3-70B-Instruct | 73.9 | 27.9 | 24.2 | 50.9 | 35.7 | 25.5 | 39.7 |
| GLM4-9B-chat | 50.1 | 12.9 | 1.7 | 17.2 | 14.7 | 16.5 | 18.9 |
| Qwen-2.5-7B-Instruct | 76.5 | 26.0 | 13.3 | 41.9 | 35.0 | 16.8 | 34.9 |
| Qwen-2.5-Math-7B-Instruct | 82.7 | 29.7 | 16.7 | 50.6 | 40.7 | 8.1 | 38.1 |
| Qwen-2.5-14B-Instruct | 78.9 | 28.7 | 13.7 | 54.5 | 41.8 | 27.7 | 40.9 |
| SFT (GLM-9B) | 56.0 | 18.2 | 8.3 | 29.2 | 22.5 | 14.2 | 24.7 |
| ChainRL (GLM-9B) | 63.0 | 21.8 | 6.1 | 31.6 | 23.9 | 16.6 | 27.2 |
| TreeRL (GLM-9B) | 64.5 | 20.8 | 11.4 | 38.5 | 24.8 | 15.8 | 29.3 |
| SFT (Qwen-2.5-14B) | 76.6 | 29.5 | 10.6 | 48.0 | 36.9 | 14.5 | 36.0 |
| ChainRL (Qwen-2.5-14B) | 81.6 | 32.7 | 22.2 | 53.9 | 41.1 | 18.2 | 41.6 |
| TreeRL (Qwen-2.5-14B) | 81.7 | 36.7 | 28.0 | 55.9 | 44.6 | 20.8 | 44.5 |
| SFT (R1-Distilled-Qwen-2.5-7B) | 94.0 | 47.8 | 55.9 | 85.5 | 54.4 | 43.9 | 63.6 |
| ChainRL (R1-Distilled-Qwen-2.5-7B) | 93.6 | 48.1 | 59.7 | 85.5 | 54.5 | 46.1 | 64.5 |
| TreeRL (R1-Distilled-Qwen-2.5-7B) | 94.4 | 49.8 | 60.8 | 85.0 | 57.1 | 47.4 | 65.8 |

Citing

If you find this work helpful to your research, please consider citing our paper:

@inproceedings{treerl,
  title={TreeRL: LLM Reinforcement Learning with On-Policy Tree Search},
  author={Hou, Zhenyu* and Hu, Ziniu* and Li, Yujiang* and Lu, Rui* and Tang, Jie and Dong, Yuxiao},
  booktitle={ACL},
  year={2025}
}