
September 27, 2025

🚀 CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient

Zigeng Chen, Xinyin Ma, Gongfan Fang, Xinchao Wang
xML Lab, National University of Singapore
🥯 [Paper] 🎄 [Project Page]


1.7x speedup and ~0.5x memory consumption on ImageNet 256x256 generation. Top: original VAR-d30; bottom: CoDe (N=8). Speed measurements do not include the VAE decoder.

💡 Introduction

We propose Collaborative Decoding (CoDe), a novel decoding strategy tailored to the VAR framework. CoDe capitalizes on two key observations: the substantially reduced parameter demands at larger scales and the exclusive generation patterns across different scales. Based on these insights, we partition the multi-scale inference process into a seamless collaboration between a large model and a small one. This collaboration yields remarkable efficiency with minimal impact on quality: CoDe achieves a 1.7x speedup, cuts memory usage by around 50%, and preserves image quality with only a negligible FID increase from 1.95 to 1.98. When the drafting steps are reduced further, CoDe reaches an impressive 2.9x acceleration, generating over 41 images/s at 256x256 resolution on a single NVIDIA 4090 GPU while maintaining a commendable FID of 2.27.
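The intuition behind the split can be sketched with a toy schedule (a hypothetical illustration, not the repo's API; `collaborative_schedule` is an invented helper). Assuming VAR's 256x256 setting with ten token maps of increasing side length, the large drafter handles the first N scales and the small refiner the rest:

```python
# Sketch of CoDe's scale partition (hypothetical helper, not the repo's API).
# VAR at 256x256 decodes ten token maps with the side lengths below; CoDe
# assigns the first N scales to the large "drafter" and the remaining,
# largest scales to the small "refiner".
SCALES = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]

def collaborative_schedule(draft_steps, scales=SCALES):
    """Return (side, model, token_count) per scale: the first `draft_steps`
    scales go to the drafter, the remaining ones to the refiner."""
    return [(s, "drafter" if i < draft_steps else "refiner", s * s)
            for i, s in enumerate(scales)]

schedule = collaborative_schedule(draft_steps=8)
drafter_tokens = sum(t for _, m, t in schedule if m == "drafter")  # 255
refiner_tokens = sum(t for _, m, t in schedule if m == "refiner")  # 425
```

With N=8, the drafter runs eight of the ten steps yet emits only 255 of the 680 tokens; the cheap refiner generates the 425 tokens at the two largest scales, which is where most of the speedup comes from.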

🔥 Updates

  • 🎉 February 27, 2025: CoDe is accepted by CVPR 2025!
  • 🔥 November 28, 2024: Our paper is now available!
  • 🔥 November 27, 2024: Our model weights are available on 🤗 Hugging Face here.
  • 🔥 November 27, 2024: Code repo is released! The arXiv paper is coming soon!

🔧 Installation

  1. Install `torch>=2.0.0`.
  2. Install the remaining packages via `pip3 install -r requirements.txt`.

💻 Model Zoo

We provide drafter VAR models and refiner VAR models, which can be downloaded from the links below:

| Draft steps | Refine steps | Reso. | FID | IS | Drafter VAR 🤗 | Refiner VAR 🤗 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 9 | 1 | 256 | 1.94 | 296 | drafter_9.pth | refiner_9.pth |
| 8 | 2 | 256 | 1.98 | 302 | drafter_8.pth | refiner_8.pth |
| 7 | 3 | 256 | 2.11 | 303 | drafter_7.pth | refiner_7.pth |
| 6 | 4 | 256 | 2.27 | 297 | drafter_6.pth | refiner_6.pth |

Note: the VQVAE checkpoint vae_ch160v4096z32.pth is also required.

⚡ Inference

Original VAR Inference:

CUDA_VISIBLE_DEVICES=0 python infer_original.py --model_depth 30

🚀 Training-free CoDe:

CUDA_VISIBLE_DEVICES=0 python infer_CoDe.py --drafter_depth 30 --refiner_depth 16 --draft_steps 8 --training_free 

🚀 Specialized Fine-tuned CoDe:

CUDA_VISIBLE_DEVICES=0 python infer_CoDe.py --drafter_depth 30 --refiner_depth 16 --draft_steps 8
  • --drafter_depth: depth of the large drafter transformer model.
  • --refiner_depth: depth of the small refiner transformer model.
  • --draft_steps: number of steps in the drafting stage.
  • --training_free: run training-free CoDe; omit this flag to run inference with the specialized fine-tuned CoDe.

⚡ Sampling & Evaluation

Sampling 50,000 images (50 per class) with CoDe:

CUDA_VISIBLE_DEVICES=0 python sample_CoDe.py --drafter_depth 30 --refiner_depth 16 --draft_steps 8 --output_path <img_save_path>

The generated images are saved as both .PNG files and a .npz file. Then use OpenAI's FID evaluation toolkit together with the 256x256 reference-batch .npz file to evaluate FID, IS, precision, and recall.
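For reference, the .npz layout that OpenAI's evaluator loads is a single uint8 array of shape [N, H, W, 3] stored under the default key `arr_0`. A minimal sketch of packing images into that layout (`pack_samples` is a hypothetical helper; sample_CoDe.py already writes the .npz for you):

```python
import numpy as np

def pack_samples(images, out_path):
    """Stack uint8 HWC RGB images into one [N, H, W, 3] array and save it
    with np.savez, which stores a positional array under the key 'arr_0'."""
    arr = np.stack([np.asarray(im, dtype=np.uint8) for im in images])
    if arr.ndim != 4 or arr.shape[-1] != 3:
        raise ValueError("expected a batch of HWC RGB images")
    np.savez(out_path, arr)  # stored as 'arr_0'
    return arr.shape
```

The same shape and key conventions apply to the reference ground-truth batch, so a mismatch here is a common cause of evaluator errors.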

🚀 Visualization Results

Qualitative Results

figure

Zero-shot Inpainting & Editing (N=8)

figure

Acknowledgement

Thanks to VAR for their wonderful work and codebase!

Citation

If our research assists your work, please give us a star ⭐ or cite us using:

@inproceedings{chen2025collaborative,
  title={Collaborative decoding makes visual auto-regressive modeling efficient},
  author={Chen, Zigeng and Ma, Xinyin and Fang, Gongfan and Wang, Xinchao},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={23334--23344},
  year={2025}
}