Fetch

February 19, 2025

Code for the paper Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls


🚀 Setup

Follow the steps below to run our scripts:

📌 Step 1. Set up services for the policy, verifier, and embedding model

📚 Policy

We use vLLM to serve the policy model. To start the policy service, run the following command:

python3 -m vllm.entrypoints.openai.api_server --model /path/to/policy/model --port 8000 --dtype float16 --tensor-parallel-size 2 --swap-space 8 --max-model-len 4096
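Once the server is up, it exposes an OpenAI-compatible HTTP API. The sketch below (not part of this repo; the endpoint path and parameters follow vLLM's OpenAI-compatible server, and the model path and prompt are placeholders) shows how a search script might request several candidate reasoning steps from the policy:

```python
import json
import urllib.request

# Base URL of the vLLM OpenAI-compatible server started above.
VLLM_URL = "http://localhost:8000/v1/completions"

def build_request(prompt, model="/path/to/policy/model", n=4, temperature=0.8):
    """Build a sampling request asking the policy for several candidate steps."""
    payload = {
        "model": model,
        "prompt": prompt,
        "n": n,                      # number of candidate continuations
        "temperature": temperature,  # > 0 so the candidates differ
        "max_tokens": 256,
        "stop": ["\n\n"],            # stop at the end of one reasoning step
    }
    return json.dumps(payload).encode("utf-8")

# Actually sending the request requires the server to be running:
# req = urllib.request.Request(VLLM_URL, data=build_request("Question: ..."),
#                              headers={"Content-Type": "application/json"})
# choices = json.load(urllib.request.urlopen(req))["choices"]
```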

๐Ÿ” Verifier

  1. Update your model path in `verifier/server.py`.
  2. Run `bash run.sh ./ 0` inside the `verifier` directory.
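The verifier's scores are used to rank the candidate steps sampled from the policy. A minimal, self-contained sketch of that ranking (a stand-in for how scores returned by the verifier service might be consumed, not the repo's actual code):

```python
def rank_candidates(candidates, scores):
    """Order candidate reasoning steps by verifier score, best first.

    `scores` are the values the verifier service returns for each candidate
    (higher = more likely to lead to a correct final answer).
    """
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [candidates[i] for i in order]

# rank_candidates(["step A", "step B"], [0.2, 0.9]) -> ["step B", "step A"]
```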

📦 Embedding Model

If you're using state merging, follow these steps:

  1. Update the path in `cluster/server_cluster.py`.
  2. Run `bash run_app.sh ./ 0` inside the `cluster` directory.
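State merging uses the embedding model to detect near-duplicate search states so they can be collapsed into one. The sketch below illustrates the idea with plain cosine similarity and a greedy threshold rule; the threshold value and clustering scheme are illustrative assumptions, not the repo's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_states(states, embeddings, threshold=0.9):
    """Greedily merge states whose embeddings are nearly identical.

    A state joins the first cluster whose representative embedding is within
    `threshold` cosine similarity; otherwise it starts a new cluster.
    """
    reps, clusters = [], []
    for state, emb in zip(states, embeddings):
        for i, rep in enumerate(reps):
            if cosine(emb, rep) >= threshold:
                clusters[i].append(state)
                break
        else:
            reps.append(emb)
            clusters.append([state])
    return clusters
```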

📌 Step 2. Run tree search algorithms

We provide three tree search algorithms: BFS (Best-First Search), Beam Search, and MCTS (Monte Carlo Tree Search).

  1. Specify the input and output file paths and other parameters in scripts such as `beamsearch.py`.

  2. Execute the corresponding Python script. For instance, to run Beam Search: `python3 beamsearch.py`
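To make the control flow of these scripts concrete, here is a generic beam search loop over reasoning states. The `expand` and `score` callables are placeholders for sampling next steps from the policy service and scoring them with the verifier; the toy problem at the bottom is purely illustrative:

```python
def beam_search(root, expand, score, beam_width=2, depth=3):
    """Generic beam search over reasoning states.

    expand(state) -> list of child states (stand-in for sampling next
    steps from the policy); score(state) -> verifier score of a state.
    Keeps the `beam_width` best candidates at each depth.
    """
    beam = [root]
    for _ in range(depth):
        candidates = [child for state in beam for child in expand(state)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return beam

# Toy problem: states are integers, children of n are 2n and 2n + 1,
# and the "verifier" simply prefers larger values.
best = beam_search(1, expand=lambda n: [2 * n, 2 * n + 1], score=lambda n: n)
# best == [15, 14]
```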


🎯 Tips


๐Ÿ“ Citation

If you find our work useful, please cite our paper:

@misc{wang2025dontlosttreesstreamlining,
      title={Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls}, 
      author={Ante Wang and Linfeng Song and Ye Tian and Dian Yu and Haitao Mi and Xiangyu Duan and Zhaopeng Tu and Jinsong Su and Dong Yu},
      year={2025},
      eprint={2502.11183},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.11183}, 
}