Fast-dLLM

May 6, 2026


ICLR 2026

Fast-dLLM is a family of acceleration techniques for diffusion-based Large Language Models (dLLMs) and Vision-Language Models (dVLMs). This repository contains:

|                | Fast-dLLM v1                               | Fast-dLLM v2                            | Fast-dVLM                                    |
|----------------|--------------------------------------------|-----------------------------------------|----------------------------------------------|
| Paper          | Training-free Acceleration of Diffusion LLM | Efficient Block-Diffusion LLM          | Block-Diffusion VLM via Direct Conversion    |
| Modality       | Text                                       | Text                                    | Vision + Text                                |
| Approach       | Training-free inference acceleration       | Block diffusion with fine-tuning        | Direct AR-to-diffusion VLM conversion        |
| Backbone       | Dream, LLaDA                               | Qwen2.5                                 | Qwen2.5-VL                                   |
| Key Techniques | KV Cache + Parallel Decoding               | Block Diffusion + Hierarchical Caching  | Block-Size Annealing + Speculative Decoding  |
| Code           | v1/                                        | v2/                                     | fast_dvlm/                                   |
| Model          | —                                          | Fast_dLLM_v2_7B                         | Fast_dVLM_3B                                 |
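
The v1 techniques are simple to picture. Below is a minimal PyTorch sketch of confidence-aware parallel decoding: at each diffusion step, every masked position whose top-1 probability clears a threshold is decoded at once (the threshold=0.9 passed via alg=confidence_threshold in the Quick Start below). The function name and tensor shapes are our own illustration, not the repository's API.

import torch

def parallel_decode_step(logits, is_masked, threshold=0.9):
    # One confidence-thresholded decoding step (illustrative sketch).
    # logits: (seq_len, vocab_size); is_masked: (seq_len,) bool.
    probs = torch.softmax(logits, dim=-1)
    conf, tokens = probs.max(dim=-1)            # top-1 confidence per position
    conf = conf.masked_fill(~is_masked, -1.0)   # ignore already-decoded slots
    accept = is_masked & (conf >= threshold)
    if not accept.any():                        # always decode at least one token
        accept[conf.argmax()] = True
    return tokens, accept                       # write tokens where accept is True

Accepting several tokens per step is what cuts the number of denoising iterations, while the single-token fallback keeps decoding from stalling when no position is confident enough.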

News

  • (🔥 New) [2026/04/10] Fast-dVLM is released! It delivers up to a 6.18x speedup over the AR baseline while matching quality across 11 benchmarks. Check out our webpage, model, and paper!
  • (🔥 New) [2026/01/26] Fast-dLLM v1/v2 has been accepted to ICLR 2026. 🎉🎉🎉
  • [2025/10/08] We have open-sourced Fast-dLLM v2. Have a look at our webpage, model, and paper!
  • [2025/08/01] Our online demo of Fast-dLLM is live at https://fast-dllm.hanlab.ai/. Give it a try!
  • [2025/07/06] Added a factor-based parallel strategy and LLaDA-1.5 evaluation in v1/llada/eval_gsm8k.sh.
  • [2025/07/04] We updated our paper with the latest improvements and evaluation results.
  • [2025/06/30] Fast-dLLM has been integrated into LLaDA-V, cutting inference latency from 60 s to 6 s. Try it here!

TODOs

  • [✅] Inference and evaluation code
  • [✅] Training code of Fast-dLLM v2
  • [✅] Fast-dVLM: Block-diffusion VLM
  • [🚀] vLLM support

Project Structure

Fast-dLLM/
├── v1/                     # Fast-dLLM v1: Training-free acceleration (LLM)
│   ├── dream/              #   Dream model support
│   ├── llada/              #   LLaDA model support
│   ├── requirements.txt
│   └── README.md
├── v2/                     # Fast-dLLM v2: Block diffusion (LLM)
│   ├── src/                #   LMFlow training framework
│   ├── train_scripts/      #   Fine-tuning scripts
│   ├── configs/            #   DeepSpeed configs
│   ├── generation_functions.py
│   ├── eval.py / eval_script.sh
│   ├── app.py / run_chatbot.py
│   ├── requirements.txt
│   └── README.md
├── fast_dvlm/              # Fast-dVLM: Block-diffusion VLM (chatbot, optional finetune sample, VLMEval; see fast_dvlm/README.md)
├── CONTRIBUTING.md
├── LICENSE
└── README.md               # This file

Quick Start

Fast-dLLM v1 (Training-free Acceleration)

cd v1
pip install -r requirements.txt

# LLaDA interactive chat
python llada/chat.py --gen_length 128 --steps 128 --block_size 32

# Dream evaluation
accelerate launch dream/eval.py --model dream \
    --model_args pretrained=Dream-org/Dream-v0-Base-7B,max_new_tokens=256,diffusion_steps=8,add_bos_token=true,alg=confidence_threshold,threshold=0.9,use_cache=true \
    --tasks gsm8k --num_fewshot 5 --batch_size 1

For full details, see v1/README.md.
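
To see what --block_size and use_cache=true buy, here is a rough sketch of v1's block-wise KV cache. The names (model.prefill, model.forward_block, refine, MASK_ID) are hypothetical stand-ins, not the repository's real interface: the KV states of everything before the current block are computed once and reused across all diffusion steps inside that block.

def generate_blockwise(model, prompt_ids, gen_length=128, block_size=32,
                       steps_per_block=32):
    # Sketch only: model.prefill, model.forward_block, refine, and MASK_ID
    # are invented names standing in for the real implementation.
    seq = list(prompt_ids) + [MASK_ID] * gen_length
    for start in range(len(prompt_ids), len(seq), block_size):
        kv_cache = model.prefill(seq[:start])        # computed once per block
        for _ in range(steps_per_block):
            logits = model.forward_block(seq[start:start + block_size], kv_cache)
            # decode confident positions, e.g. with parallel_decode_step above
            seq[start:start + block_size] = refine(logits, seq[start:start + block_size])
    return seq

Without the cache, every denoising step would re-encode the full prefix; with it, each step only attends over the current block plus the cached states.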

Fast-dLLM v2 (Block Diffusion)

cd v2
pip install -e .

# Gradio web demo
python app.py

# Evaluation
bash eval_script.sh

For full details, see v2/README.md.
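
v2's hierarchical caching keeps a frozen block-level cache for finished blocks and a provisional sub-block cache for the block still being denoised. A toy data-structure sketch of that idea, with names of our own choosing rather than the repository's:

from dataclasses import dataclass, field

@dataclass
class HierarchicalCache:
    # Toy two-level cache; field names are ours, not the repo's.
    block_kv: list = field(default_factory=list)  # frozen KV of completed blocks
    sub_kv: object = None                         # provisional KV of the active block

    def refresh_sub(self, kv):
        # Recomputed whenever tokens in the active block change.
        self.sub_kv = kv

    def commit_block(self):
        # Once a block is fully decoded, its KV becomes immutable and reusable.
        self.block_kv.append(self.sub_kv)
        self.sub_kv = None

The frozen block-level entries never need recomputation, so only the active block pays the cost of iterative refinement.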

Fast-dVLM (Block-Diffusion VLM)

cd fast_dvlm
pip install -r requirements.txt

# Quick inference
python run_chatbot.py \
    --model-name Efficient-Large-Model/Fast_dVLM_3B \
    --image path/to/image.jpg \
    --prompt "Describe this image in detail."

# Interactive mode
python run_chatbot.py

Fine-tuning (optional example): multimodal MDM training uses DeepSpeed plus the LMFlow fork under third_party/ (the launcher sets PYTHONPATH for you). Download ALLaVA-4V with fast_dvlm/data/download_example_dataset.sh, then run bash fast_dvlm/train_scripts/finetune_multimodal_example.sh from the repo root; see "Fine-tuning (example launcher)" in fast_dvlm/README.md.

For full details, see fast_dvlm/README.md.
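
Fast-dVLM pairs block-size annealing (used during the AR-to-diffusion conversion) with speculative decoding at inference. The verification half of that idea can be pictured as below; this is our own greedy-acceptance sketch under assumed shapes, not the released implementation:

import torch

def accept_draft_prefix(draft_tokens, logits, threshold=0.9):
    # Keep the longest prefix of the draft whose tokens both match the
    # model's argmax and clear the confidence threshold; the rest is
    # re-drafted on the next pass.
    probs = torch.softmax(logits, dim=-1)         # (draft_len, vocab_size)
    conf, argmax = probs.max(dim=-1)
    ok = (argmax == draft_tokens) & (conf >= threshold)
    n = int(ok.long().cumprod(dim=0).sum())       # accepted prefix length
    return draft_tokens[:n]

When most draft tokens verify, a single model pass commits several tokens at once, which is where the reported speedup over the AR baseline comes from.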

Contributing

Issues and Pull Requests are welcome! Please see CONTRIBUTING.md for details.

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

Citation

If you find this work useful, please cite our papers:

@misc{wu2026fastdvlmefficientblockdiffusionvlm,
      title={Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM},
      author={Chengyue Wu and Shiyi Lan and Yonggan Fu and Sensen Gao and Jin Wang and Jincheng Yu and Jose M. Alvarez and Pavlo Molchanov and Ping Luo and Song Han and Ligeng Zhu and Enze Xie},
      year={2026},
      eprint={2604.06832},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.06832},
}
@misc{wu2025fastdllmv2efficientblockdiffusion,
      title={Fast-dLLM v2: Efficient Block-Diffusion LLM}, 
      author={Chengyue Wu and Hao Zhang and Shuchen Xue and Shizhe Diao and Yonggan Fu and Zhijian Liu and Pavlo Molchanov and Ping Luo and Song Han and Enze Xie},
      year={2025},
      eprint={2509.26328},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.26328}, 
}
@misc{wu2025fastdllmtrainingfreeaccelerationdiffusion,
      title={Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding}, 
      author={Chengyue Wu and Hao Zhang and Shuchen Xue and Zhijian Liu and Shizhe Diao and Ligeng Zhu and Ping Luo and Song Han and Enze Xie},
      year={2025},
      eprint={2505.22618},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.22618}, 
}

Acknowledgements

We thank the authors of LLaDA and Dream for their excellent work and open-source contributions, the Qwen2.5 and Qwen2.5-VL teams for the base model architectures, and LMFlow for the training framework.