February 22, 2026
# [CVPR 2026] ReasonMap: Towards Fine-Grained Visual Reasoning from Transit Maps

The first benchmark using real-world metro maps.
This repository accompanies our paper:

**ReasonMap: Towards Fine-Grained Visual Reasoning from Transit Maps**
Sicheng Feng1,2,^, Song Wang3,2,^, Shuyi Ouyang3,2, Lingdong Kong2, Zikai Song4,2, Jianke Zhu3, Huan Wang1,*, Xinchao Wang2
1Westlake University, Hangzhou, China
2National University of Singapore, Singapore
3Zhejiang University, Hangzhou, China
4Huazhong University of Science and Technology, Wuhan, China
^Equal contribution; *Corresponding author: wanghuan@westlake.edu.cn
🙋 Please let us know if you find a mistake or have any suggestions!
🌟 If you find this resource helpful, please consider starring this repository and citing our research!
## Updates
- 2026-02-21: 🚀 Our paper was accepted to CVPR 2026! Thanks to all contributors!
- 2026-01-26: 🚀 Our follow-up work, RewardMap, was accepted to ICLR 2026!
- 2025-09-30: 🚀 We released ReasonMap-Plus for our follow-up work, RewardMap!
- 2025-05-15: 🚀 We released the evaluation code and our website!
- 2025-05-15: 🚀 We released ReasonMap!
## Usage
### 1. Install dependencies

If you face any issues with the installation, please feel free to open an issue. We will try our best to help you.

```bash
conda env create -f reasonmap-py310.yaml
```
### 2. Download the dataset

You can download ReasonMap and ReasonMap-Plus from HuggingFace.
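With the `huggingface-cli` tool, a download might look like the following sketch. The `<org>` repository IDs are placeholders, not the actual dataset IDs; substitute the IDs shown on the HuggingFace dataset pages.

```bash
# Download the datasets into local folders (repo IDs are placeholders)
huggingface-cli download <org>/ReasonMap --repo-type dataset --local-dir ./data/ReasonMap
huggingface-cli download <org>/ReasonMap-Plus --repo-type dataset --local-dir ./data/ReasonMap-Plus
```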
### 3. Evaluation

You can evaluate model performance on ReasonMap by running the following commands:

```bash
## ReasonMap evaluation
# open-source models
bash script/run.sh
# closed-source models
bash script/run-closed-models.sh

## ReasonMap-Plus evaluation
bash script/run_plus.sh

# after running the above scripts, analyze the results with:
python cal_metrics.py
```
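As a rough illustration of the kind of aggregation `cal_metrics.py` performs, here is a minimal sketch that computes per-map accuracy from a list of judged results. The record schema (`map_id`, `correct` fields) is a simplified assumption for illustration, not the script's actual output format.

```python
from collections import defaultdict

def per_map_accuracy(results):
    """Compute accuracy per map from judged results.

    `results` is a list of dicts, each with a string `map_id` and a
    boolean `correct` flag (an assumed, simplified schema).
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in results:
        totals[r["map_id"]] += 1
        hits[r["map_id"]] += int(r["correct"])
    return {m: hits[m] / totals[m] for m in totals}

# Example: two Beijing questions (one correct), one Shanghai question (correct)
demo = [
    {"map_id": "beijing", "correct": True},
    {"map_id": "beijing", "correct": False},
    {"map_id": "shanghai", "correct": True},
]
print(per_map_accuracy(demo))  # {'beijing': 0.5, 'shanghai': 1.0}
```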
## Citation

If you find this benchmark useful in your research, please consider citing our papers:
```bibtex
@article{feng2025can,
  title={Can MLLMs Guide Me Home? A Benchmark Study on Fine-Grained Visual Reasoning from Transit Maps},
  author={Feng, Sicheng and Wang, Song and Ouyang, Shuyi and Kong, Lingdong and Song, Zikai and Zhu, Jianke and Wang, Huan and Wang, Xinchao},
  journal={arXiv preprint arXiv:2505.18675},
  year={2025}
}

% follow-up work
@article{feng2025rewardmap,
  title={RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning},
  author={Feng, Sicheng and Tuo, Kaiwen and Wang, Song and Kong, Lingdong and Zhu, Jianke and Wang, Huan},
  journal={arXiv preprint arXiv:2510.02240},
  year={2025}
}
```