NVIDIA Model Optimizer

April 22, 2026



Documentation | Roadmap


NVIDIA Model Optimizer (referred to as Model Optimizer, or ModelOpt) is a library comprising state-of-the-art model optimization techniques, including quantization, distillation, pruning, speculative decoding, and sparsity, to accelerate models.

[Input] Model Optimizer currently accepts Hugging Face, PyTorch, or ONNX models as input.

[Optimize] Model Optimizer provides Python APIs for users to easily compose the above model optimization techniques and export an optimized, quantized checkpoint. Model Optimizer is also integrated with NVIDIA Megatron-Bridge, Megatron-LM, and Hugging Face Accelerate for inference optimization techniques that require training.

[Export for deployment] Seamlessly integrated within the NVIDIA AI software ecosystem, the quantized checkpoint generated from Model Optimizer is ready for deployment in downstream inference frameworks like SGLang, TensorRT-LLM, TensorRT, or vLLM. The unified Hugging Face export API now supports both transformers and diffusers models.
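As a rough sketch of this input-optimize-export flow (not an official recipe), a post-training quantization pass followed by a unified Hugging Face export might look like the following. The model, calibration dataloader, and export directory are placeholders, and the config name should be checked against the current ModelOpt documentation:

```python
import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint

# Placeholder: load any supported Hugging Face / PyTorch model here.
model = ...
calibration_dataloader = ...  # placeholder: a few batches of representative data

def forward_loop(model):
    # Run calibration data through the model so ModelOpt can
    # collect the activation statistics needed for quantization.
    for batch in calibration_dataloader:
        model(batch)

# Quantize in place using a built-in PTQ config (FP8 shown as an example).
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=forward_loop)

# Export a unified Hugging Face checkpoint consumable by downstream
# inference frameworks such as SGLang, TensorRT-LLM, or vLLM.
export_hf_checkpoint(model, export_dir="quantized-checkpoint")
```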


Install

To install stable release packages for Model Optimizer with pip from PyPI:

pip install -U "nvidia-modelopt[all]"

Model Optimizer will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.

To install from source in editable mode with all development dependencies or to use the latest features, run:

# Clone the Model Optimizer repository
git clone git@github.com:NVIDIA/Model-Optimizer.git
cd Model-Optimizer

pip install -e .[dev]

You can also directly use NVIDIA container images, which have Model Optimizer pre-installed:

  • nvcr.io/nvidia/pytorch:<version>-py3
  • nvcr.io/nvidia/nemo:<version>
  • nvcr.io/nvidia/tensorrt-llm/release:<version>
  • nvcr.io/nvidia/tensorrt:<version>-py3

Before pulling and using the container images, please review their respective license terms. Make sure to upgrade Model Optimizer to the latest version as described above. Visit our installation guide for finer-grained control over installed dependencies, alternative Docker images, and environment variable setup.

Techniques

| Technique | Description | Examples | Docs |
|---|---|---|---|
| Post Training Quantization | Compress model size by 2x-4x, speeding up inference while preserving model quality! | [LLMs] [diffusers] [VLMs] [onnx] [windows] | [docs] |
| Quantization Aware Training | Refine accuracy even further with a few training steps! | [Hugging Face] | [docs] |
| Pruning | Reduce your model size and accelerate inference by removing unnecessary weights! | [General] [Megatron-Bridge] | |
| Distillation | Reduce deployment model size by teaching small models to behave like larger models! | [Megatron-Bridge] [Megatron-LM] [Hugging Face] | [docs] |
| Speculative Decoding | Train draft modules to predict extra tokens during inference! | [Megatron] [Hugging Face] | [docs] |
| Sparsity | Efficiently compress your model by storing only its non-zero parameter values and their locations | [PyTorch] | [docs] |
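To give intuition for the post-training quantization technique above, here is a minimal, library-free sketch of symmetric INT8 fake quantization (scale from the maximum absolute value, round, clamp, dequantize). This illustrates the general idea only, not ModelOpt's actual algorithms or calibration schemes:

```python
def fake_quant_int8(values):
    """Symmetric INT8 fake quantization: quantize, then dequantize."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    out = []
    for v in values:
        q = max(-128, min(127, round(v / scale)))  # quantize and clamp to int8 range
        out.append(q * scale)                      # dequantize back to float
    return out

weights = [0.02, -1.27, 0.5, 1.0]
dequantized = fake_quant_int8(weights)
# Each dequantized value is within half a quantization step of the original.
half_step = 0.5 * (1.27 / 127)
assert all(abs(a - b) <= half_step + 1e-9 for a, b in zip(weights, dequantized))
```

The error bound in the final assertion is what "preserving model quality" rests on: with a well-chosen scale, the per-weight rounding error stays below half a quantization step.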

Pre-Quantized Checkpoints

Resources

Model Support Matrix

| Model Type | Support Matrix |
|---|---|
| LLM Quantization | View Support Matrix |
| Diffusers Quantization | View Support Matrix |
| VLM Quantization | View Support Matrix |
| ONNX Quantization | View Support Matrix |
| Windows Quantization | View Support Matrix |
| Quantization Aware Training | View Support Matrix |
| Pruning | View Support Matrix |
| Distillation | View Support Matrix |
| Speculative Decoding | View Support Matrix |

Contributing

Model Optimizer is now open source! We welcome any feedback, feature requests, and PRs. Please read our Contributing guidelines for details on how to contribute to this project.

Top Contributors


Happy optimizing!