Model Optimizer Benchmark Reference

December 7, 2025

This document summarizes performance and accuracy measurements of Model Optimizer for a few popular models. The benchmarks in the following tables are provided as reference points and should not be considered the peak performance that Model Optimizer can deliver. All performance numbers are measured with TensorRT-LLM or TensorRT.

1. Post-training quantization (PTQ) for LLMs

1.1 Performance

Config: H200, nvidia-modelopt v0.21.1, TensorRT-LLM v0.15; latency measured with trtllm-bench. Inference speedups are relative to the BF16 baseline and normalized to the GPU count.

Benchmark scenario: 2,048 input tokens, 128 output tokens. Real performance may vary depending on the target use cases and the flags used to build the TensorRT-LLM engine.

Memory savings are not reported here because TensorRT-LLM occupies all remaining GPU memory for KV caching.

If GPU memory is the bottleneck, lower-bit quantization may yield a better GPU-count-normalized throughput gain by running at a smaller tensor-parallel (TP) size.

| Model | Batch Size | BF16 Tokens/sec (8B: TP1, 70B: TP2) | FP8 (TP1) Tokens/sec | FP8 Speedup | INT4 AWQ (TP1) Tokens/sec | INT4 AWQ Speedup | W4A8 AWQ (TP1) Tokens/sec | W4A8 AWQ Speedup |
|---|---|---|---|---|---|---|---|---|
| Llama3.1-8B | 1 | 173.80 | 245.03 | 1.41x | 231.75 | 1.33x | 239.70 | 1.38x |
| Llama3.1-8B | 8 | 803.11 | 1,051.17 | 1.31x | 599.72 | 0.75x | 801.72 | 1.00x |
| Llama3.1-8B | 64 | 1,679.74 | 2,190.93 | 1.30x | 1,392.78 | 0.83x | 1,930.86 | 1.15x |
| Llama3.1-70B | 1 | 45.81 | 43.46 | 1.90x | 44.10 | 1.93x | 46.31 | 2.02x |
| Llama3.1-70B | 8 | 182.61 | 182.07 | 1.99x | 93.98 | 1.03x | 140.02 | 1.53x |
| Llama3.1-70B | 64 | 401.50 | 420.64 | 2.10x | 176.68 | 0.88x | 345.43 | 1.72x |
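
For reference, checkpoints like the FP8, INT4 AWQ, and W4A8 AWQ variants above are produced with Model Optimizer's PyTorch quantization API before the engine build. Below is a minimal sketch, not the exact benchmark setup: the model ID, calibration texts, and sample count are illustrative placeholders.

```python
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Pick one of the recipes benchmarked above.
quant_cfg = mtq.FP8_DEFAULT_CFG  # or mtq.INT4_AWQ_CFG / mtq.W4A8_AWQ_BETA_CFG

def forward_loop(model):
    # Run calibration samples through the model so that modelopt can
    # collect activation statistics for the quantization scales.
    calib_texts = ["Calibration sample text."] * 16  # tiny placeholder set
    for text in calib_texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model(**inputs)

# Calibrates and inserts quantizers in place; the result can then be
# exported to a TensorRT-LLM checkpoint and built into an engine.
model = mtq.quantize(model, quant_cfg, forward_loop)
```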

1.2 Accuracy

The table below shows the MMLU loss in percentage compared to the BF16 baseline. Config: H100, nvidia-modelopt v0.21.1, TensorRT-LLM v0.15. Note that FP8 is typically the go-to choice for H100; the 4-bit AWQ methods are recommended when GPU memory is a constraint. More benchmarks with earlier versions of Model Optimizer can be found in the TensorRT-LLM README.

| Model | MMLU loss: FP8 | MMLU loss: INT4 AWQ | MMLU loss: W4A8 AWQ |
|---|---|---|---|
| Llama3.1-8B (instruct) | 1.50% | 5.66% | 6.00% |
| Llama3.1-70B (instruct) | 0.38% | 1.07% | 1.20% |

2. PTQ for Stable Diffusion

The following table shows the inference speedup of INT8 and FP8 on a Stable Diffusion XL 1.0 base model compared to the FP16 baseline. Config: image resolution 1024×1024, 30 denoising steps, TensorRT v9.3, num-warmup-runs=1, batch size 1.

| GPU | INT8 Latency (ms) | FP8 Latency (ms) | Speedup (INT8 vs. FP16) | Speedup (FP8 vs. FP16) |
|---|---|---|---|---|
| RTX 6000 Ada | 2,479.19 | 2,441.16 | 1.43x | 1.45x |
| RTX 4090 | 2,058.11 | 2,161.38 | 1.20x | 1.14x |
| L40S | 2,338.88 | 2,167.82 | 1.25x | 1.35x |
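
The diffusion path uses the same `mtq.quantize` entry point, with calibration driven by running the pipeline itself. Below is a minimal sketch assuming the Hugging Face `diffusers` SDXL pipeline; `mtq.INT8_DEFAULT_CFG` and the calibration prompts are illustrative stand-ins, as the published diffusion example may use a more specialized INT8 recipe.

```python
import torch
import modelopt.torch.quantization as mtq
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def forward_loop(unet):
    # Calibrate by running the full pipeline; the UNet is invoked
    # internally at every denoising step.
    for prompt in ["a photo of a cat", "a watercolor landscape"]:  # placeholders
        pipe(prompt, num_inference_steps=30)

# Quantize only the UNet, which dominates inference latency.
pipe.unet = mtq.quantize(pipe.unet, mtq.INT8_DEFAULT_CFG, forward_loop)
```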

3. Quantization-aware training

The table below compares the validation loss of quantization-aware training (QAT) against PTQ for a Llama 2 7B model, using nvidia-modelopt v0.11.0. The baseline is fine-tuned on the target dataset. INT4 is used here to showcase that QAT better preserves model accuracy at low precision. QAT can therefore be applied at a low training cost, allowing generative AI applications that are sensitive to accuracy drops to preserve accuracy even at ultra-low precisions, such as 4-bit weights and activations on the NVIDIA Blackwell platform.

| Method | Dataset | Val loss: BF16 baseline | Val loss: PTQ | Val loss: QAT (lower is better) |
|---|---|---|---|---|
| INT4 weight, FP16 activation | samsum | 1.036 | 1.059 | 1.044 |
| INT4 weight, INT8 activation | samsum | 1.036 | 3.321 | 1.294 |
| INT4 weight, FP16 activation | databricks-dolly-15k | 1.151 | 1.305 | 1.172 |
| INT4 weight, INT8 activation | databricks-dolly-15k | 1.151 | 2.313 | 1.640 |
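
With Model Optimizer, QAT amounts to quantizing first and then fine-tuning with an ordinary training loop, since the quantized model remains trainable. A minimal sketch follows; `model`, `forward_loop`, and `train_dataloader` are placeholders, and `mtq.INT4_AWQ_CFG` stands in for whichever INT4 recipe is used.

```python
import torch
import modelopt.torch.quantization as mtq

# Step 1: PTQ as the starting point (calibrates scales, inserts fake-quant ops).
model = mtq.quantize(model, mtq.INT4_AWQ_CFG, forward_loop)

# Step 2: fine-tune as usual so the weights adapt to the quantized forward pass.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for batch in train_dataloader:  # e.g. samsum or databricks-dolly-15k
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```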

4. Sparsity

4.1 Performance

The table shows the inference speedup of a sparsified Llama 2 70B model over the dense baseline at different batch sizes. The benchmark with batch_size=896 is part of MLPerf Inference v4.0. Config: NVIDIA H100 80GB GPU; FP8, TP=1, PP=1 for all sparsified models. The dense model needs TP=2 due to its larger weight size.

| Batch Size | Inference speedup (vs. the FP8 dense model) |
|---|---|
| 32 | 1.62x |
| 64 | 1.52x |
| 128 | 1.35x |
| 896 | 1.30x |

4.2 Accuracy

We recommend combining sparsity with fine-tuning to avoid accuracy degradation. The following table compares the validation loss of a Llama 2 70B model sparsified with and without fine-tuning. Fine-tuning and validation are done on the Open-Orca dataset.

| Method | Validation loss (lower is better) |
|---|---|
| FP8 (baseline) | 0.721 |
| FP8 + SparseGPT, no fine-tuning | 2.724 |
| FP8 + sparsity, with fine-tuning | 1.01 |
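
For reference, a minimal sketch of this workflow with Model Optimizer's sparsity module. The `model` and `calib_dataloader` names are placeholders, and the exact `sparsify` config keys may differ across modelopt versions.

```python
import modelopt.torch.sparsity as mts

# One-shot 2:4 sparsification with SparseGPT, which is data-driven
# and therefore needs a calibration dataloader.
model = mts.sparsify(
    model,
    "sparsegpt",
    config={"data_loader": calib_dataloader, "collect_func": lambda batch: batch},
)

# Then recover accuracy with ordinary fine-tuning on the target dataset
# (e.g. Open-Orca), exactly as in the QAT sketch above.
```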