README.md


VAD v1 & v2

project page

https://user-images.githubusercontent.com/45144254/229673708-648e8da5-4c70-4346-9da2-423447d1ecde.mp4

https://github.com/hustvl/VAD/assets/45144254/153b9bf0-5159-46b5-9fab-573baf5c6159

VAD: Vectorized Scene Representation for Efficient Autonomous Driving

Bo Jiang1*, Shaoyu Chen1*, Qing Xu2, Bencheng Liao1, Jiajie Chen2, Helong Zhou2, Qian Zhang2, Wenyu Liu1, Chang Huang2, Xinggang Wang1,†

1 Huazhong University of Science and Technology, 2 Horizon Robotics

*: equal contribution, †: corresponding author.

arXiv Paper, ICCV 2023

News

  • 31 Jan, 2026: VADv2 is accepted by ICLR 2026 🎉!
  • 28 Sep, 2025: RAD is accepted by NeurIPS 2025. Core code for RL training is released at RAD.
  • 27 Feb, 2025: Check out our latest work, DiffusionDrive, accepted by CVPR 2025! This study explores multi-modal end-to-end driving using diffusion models for real-time and real-world applications.
  • 19 Feb, 2025: Check out our new work RAD 🥰, end-to-end autonomous driving with large-scale 3DGS-based reinforcement learning post-training.
  • 30 Oct, 2024: Check out our new work Senna 🥰, which combines VAD/VADv2 with large vision-language models to achieve more accurate, robust, and generalizable autonomous driving planning.
  • 20 Sep, 2024: Core code of VADv2 (config and model) is available in the VADv2 folder. It is easy to integrate into the VADv1 framework for training and inference.
  • 17 June, 2024: CARLA implementation of VADv1 is available on Bench2Drive.
  • 20 Feb, 2024: VADv2 is available on arXiv (paper, project page).
  • 1 Aug, 2023: Code & models are released!
  • 14 July, 2023: VAD is accepted by ICCV 2023 🎉! Code and models will be open-sourced soon!
  • 21 Mar, 2023: We release the VAD paper on arXiv. Code and models are coming soon. Please stay tuned! ☕️

Introduction

VAD is a vectorized paradigm for end-to-end autonomous driving.

  • We propose VAD, a unified, end-to-end vectorized paradigm for autonomous driving. VAD models the driving scene as a fully vectorized representation, doing away with computationally intensive dense rasterized representations and hand-designed post-processing steps.
  • VAD exploits the vectorized scene information both implicitly and explicitly to improve planning safety, via query interaction and vectorized planning constraints (a hedged sketch of such a constraint follows this list).
  • VAD achieves state-of-the-art end-to-end planning performance, outperforming previous methods by a large margin. Moreover, thanks to the vectorized scene representation and a concise model design, VAD greatly improves inference speed, which is critical for real-world deployment of an autonomous driving system.
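
To make the vectorized planning constraint concrete, below is a minimal sketch of an ego-agent collision-style penalty on vectorized trajectories. All names, shapes, and the threshold value are illustrative assumptions, not the repo's implementation.

```python
# Minimal sketch (not the repo's code) of an ego-agent collision penalty
# on vectorized trajectories, in the spirit of VAD's planning constraints.
import torch

def ego_agent_collision_penalty(ego_traj, agent_trajs, safe_dist=3.0):
    """Penalize planned ego waypoints that come closer than `safe_dist`
    to any predicted agent waypoint at the same future timestep.

    ego_traj:    (T, 2) planned ego waypoints in BEV coordinates.
    agent_trajs: (N, T, 2) predicted waypoints of N surrounding agents.
    """
    # Ego-agent distance at each matched timestep: (N, T)
    dist = torch.norm(agent_trajs - ego_traj.unsqueeze(0), dim=-1)
    # Hinge penalty: zero once at least `safe_dist` of clearance is kept.
    return torch.clamp(safe_dist - dist, min=0.0).mean()

# Toy usage: a 3 s horizon at 2 Hz (6 waypoints), 4 surrounding agents.
ego = torch.zeros(6, 2)
agents = torch.randn(4, 6, 2) * 5.0
penalty = ego_agent_collision_penalty(ego, agents)
```

Because such a term is differentiable in the planned waypoints, it can be added to the training loss alongside the imitation objective.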

Models

| Method | Backbone | avg. L2 (m) | avg. Col. (%) | FPS | Config | Download |
|---|---|---|---|---|---|---|
| VAD-Tiny | R50 | 0.78 | 0.38 | 16.8 | config | model |
| VAD-Base | R50 | 0.72 | 0.22 | 4.5 | config | model |
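
The released checkpoints plug into the mmdet3d-style tooling the repo builds on. As a hedged sketch (the file paths are placeholders and the plugin-import step is simplified; follow the Getting Started docs for the authoritative commands), loading a model for offline inspection could look like:

```python
# Hedged sketch of loading a released VAD checkpoint with mmcv 1.x /
# mmdet3d-era utilities. Paths are placeholders. The repo's custom VAD
# modules must be importable so they register with mmdet3d; the official
# tools handle this via the config's plugin mechanism.
from mmcv import Config
from mmcv.runner import load_checkpoint
from mmdet3d.models import build_model

cfg = Config.fromfile('projects/configs/VAD/VAD_base.py')  # placeholder path
model = build_model(cfg.model, test_cfg=cfg.get('test_cfg'))
load_checkpoint(model, 'ckpts/VAD_base.pth', map_location='cpu')  # placeholder
model.eval()
```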

Results

  • Open-loop planning results on nuScenes (see the paper for more details; a hedged sketch of the L2 metric follows after the tables).

| Method | L2 (m) 1s | L2 (m) 2s | L2 (m) 3s | Col. (%) 1s | Col. (%) 2s | Col. (%) 3s | FPS |
|---|---|---|---|---|---|---|---|
| ST-P3 | 1.33 | 2.11 | 2.90 | 0.23 | 0.62 | 1.27 | 1.6 |
| UniAD | 0.48 | 0.96 | 1.65 | 0.05 | 0.17 | 0.71 | 1.8 |
| VAD-Tiny | 0.46 | 0.76 | 1.12 | 0.21 | 0.35 | 0.58 | 16.8 |
| VAD-Base | 0.41 | 0.70 | 1.05 | 0.07 | 0.17 | 0.41 | 4.5 |
  • Closed-loop simulation results on CARLA (DS: Driving Score, RC: Route Completion).

| Method | Town05 Short DS | Town05 Short RC | Town05 Long DS | Town05 Long RC |
|---|---|---|---|---|
| CILRS | 7.47 | 13.40 | 3.68 | 7.19 |
| LBC | 30.97 | 55.01 | 7.05 | 32.09 |
| Transfuser* | 54.52 | 78.41 | 33.15 | 56.36 |
| ST-P3 | 55.14 | 86.74 | 11.45 | 83.15 |
| VAD-Base | 64.29 | 87.26 | 30.31 | 75.20 |

*: LiDAR-based method.
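
For reference, the L2 numbers above are Euclidean distances between planned and ground-truth ego waypoints at fixed horizons. Below is a minimal illustrative sketch (not the repo's evaluation code; note that published protocols differ on whether the error is read off at the horizon timestep or averaged over all steps up to it, and this sketch uses the former, assuming 2 Hz waypoints):

```python
# Illustrative sketch of the open-loop L2 metric; not the repo's
# evaluation code. Assumes 2 Hz planned waypoints over a 3 s horizon.
import numpy as np

def l2_at_horizons(pred, gt, hz=2, horizons=(1, 2, 3)):
    """pred, gt: (T, 2) planned vs. ground-truth ego waypoints."""
    err = np.linalg.norm(pred - gt, axis=-1)  # per-step L2 error, shape (T,)
    out = {f"L2@{h}s": float(err[h * hz - 1]) for h in horizons}
    out["avg. L2"] = float(np.mean(list(out.values())))
    return out

# Toy usage with dummy 6-step plans.
pred = np.cumsum(np.full((6, 2), 0.5), axis=0)
gt = np.cumsum(np.full((6, 2), 0.6), axis=0)
print(l2_at_horizons(pred, gt))
```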

Getting Started

Catalog

  • Code & Checkpoints Release
  • Initialization

Contact

If you have any questions or suggestions about this repo, please feel free to contact us (bjiang@hust.edu.cn, outsidercsy@gmail.com).

Citation

If you find VAD useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entries.

@inproceedings{jiang2023vad,
  title={VAD: Vectorized Scene Representation for Efficient Autonomous Driving},
  author={Jiang, Bo and Chen, Shaoyu and Xu, Qing and Liao, Bencheng and Chen, Jiajie and Zhou, Helong and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

@article{chen2024vadv2,
  title={VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic Planning},
  author={Chen, Shaoyu and Jiang, Bo and Gao, Hao and Liao, Bencheng and Xu, Qing and Zhang, Qian and Huang, Chang and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2402.13243},
  year={2024}
}

License

All code in this repository is under the Apache License 2.0.

Acknowledgement

VAD is based on the following projects: mmdet3d, detr3d, BEVFormer and MapTR. Many thanks for their excellent contributions to the community.