⚡️🎉A PyTorch-native Inference Engine with Cache, Parallelism, and Quantization for Diffusion Transformers
**🤗Why Cache-DiT❓** Cache-DiT is built on top of the 🤗Diffusers library and now supports nearly ALL DiTs from Diffusers. It provides hybrid cache acceleration (DBCache, TaylorSeer, SCM, etc.) and comprehensive parallelism optimizations, including Context Parallelism, Tensor Parallelism, hybrid 2D/3D parallelism, and dedicated extra parallelism support for the Text Encoder, VAE, and ControlNet.
Cache-DiT is compatible with compilation, CPU offloading, and quantization; it integrates fully with SGLang Diffusion, vLLM-Omni, TensorRT-LLM, and ComfyUI, and runs natively on NVIDIA GPUs, Ascend NPUs, and AMD GPUs. Cache-DiT is fast, easy to use, and flexible across a wide range of DiTs (online docs at 📘readthedocs.io).
⚡️9x speedup with Cache-DiT via Cache, Context Parallelism, and Compilation
🚀Quick Start: Cache, Parallelism and Quantization
First, install cache-dit from PyPI, or install it from source:
uv pip install -U cache-dit # or, uv pip install git+https://github.com/vipshop/cache-dit.git
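To verify the install, a quick import check (assuming the package exposes a `__version__` attribute, as most PyPI packages do):
python -c "import cache_dit; print(cache_dit.__version__)"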
Then, accelerate your DiTs with just ♥️one line♥️ of code ~
>>> import cache_dit
>>> from diffusers import DiffusionPipeline
>>> pipe = DiffusionPipeline.from_pretrained(...).to("cuda")
>>> cache_dit.enable_cache(pipe)  # Cache acceleration with one line of code.
>>> from cache_dit import DBCacheConfig, ParallelismConfig
>>> cache_dit.enable_cache(  # Or: hybrid cache acceleration + parallelism.
...     pipe, cache_config=DBCacheConfig(),  # with default DBCache settings
...     parallelism_config=ParallelismConfig(ulysses_size=2))
>>> from cache_dit import DBCacheConfig, ParallelismConfig, QuantizeConfig
>>> cache_dit.enable_cache(  # Or: hybrid cache + parallelism + quantization.
...     pipe, cache_config=DBCacheConfig(),  # with default DBCache settings
...     parallelism_config=ParallelismConfig(ulysses_size=2),
...     quantize_config=QuantizeConfig(quant_type=...))
>>> output = pipe(...)  # Then, just call the pipe as normal.
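Note that `ParallelismConfig(ulysses_size=2)` expects one process per GPU. A minimal launch sketch using PyTorch's standard `torchrun` launcher (`run_pipe.py` is a hypothetical placeholder for your own script):
torchrun --nproc_per_node=2 run_pipe.py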
🚀Quick Start: SVDQuant (W4A4) PTQ/DQ Workflow
First, build Cache-DiT from source with SVDQuant support (Experimental):
git clone https://github.com/vipshop/cache-dit.git && cd cache-dit
CACHE_DIT_BUILD_SVDQUANT=1 uv pip install -e ".[quantization]" --no-build-isolation
Then, quantize your model with just ♥️a few lines♥️ of code ~
>>> import cache_dit
>>> from cache_dit import QuantizeConfig
>>> from diffusers import DiffusionPipeline
>>> pipe = DiffusionPipeline.from_pretrained(...).to("cuda")
>>> # Apply quantization with the `cache_dit.quantize(...)` API.
>>> pipe.transformer = cache_dit.quantize(
...     pipe.transformer, quant_config=QuantizeConfig(
...         quant_type="svdq_int4_r128_dq",  # _r{rank}, e.g., r16, r32, r64, r128, etc.
...         svdq_kwargs={"smooth_strategy": "few_shot"}))
>>> output = pipe(...)  # Then, just call the pipe as normal.
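The quantized pipeline should also compose with the cache API from the first section. A minimal sketch of stacking the two (an assumption based on the combined `enable_cache(..., quantize_config=...)` usage above, not a separately documented recipe):
>>> import cache_dit
>>> from cache_dit import DBCacheConfig, QuantizeConfig
>>> from diffusers import DiffusionPipeline
>>> pipe = DiffusionPipeline.from_pretrained(...).to("cuda")
>>> # First quantize the transformer, then enable cache acceleration on top.
>>> pipe.transformer = cache_dit.quantize(
...     pipe.transformer,
...     quant_config=QuantizeConfig(quant_type="svdq_int4_r128_dq"))
>>> cache_dit.enable_cache(pipe, cache_config=DBCacheConfig())
>>> output = pipe(...)  # Call the pipe as normal.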
For more advanced features, please refer to our 📘online documentation.
🌐Community Integration
- 🎉ComfyUI x Cache-DiT
- 🎉(Intel) llm-scaler x Cache-DiT
- 🎉Diffusers x Cache-DiT
- 🎉TensorRT-LLM x Cache-DiT
- 🎉SGLang Diffusion x Cache-DiT
- 🎉vLLM-Omni x Cache-DiT
- 🎉Nunchaku x Cache-DiT
- 🎉SD.Next x Cache-DiT
- 🎉stable-diffusion.cpp x Cache-DiT
- 🎉jetson-containers x Cache-DiT
©️Acknowledgements
Special thanks to vipshop's Computer Vision AI Team for supporting the testing and deployment of this project. We learned from and reused code from Diffusers, SGLang, vLLM-Omni, Nunchaku, xDiT, and TaylorSeer.
©️Citations
@misc{cache-dit2025,
  title={Cache-DiT: A PyTorch-native Inference Engine with Cache, Parallelism and Quantization for Diffusion Transformers},
  url={https://github.com/vipshop/cache-dit.git},
  note={Open-source software available at https://github.com/vipshop/cache-dit.git},
  author={DefTruth, vipshop.com, etc.},
  year={2025}
}