Puzzletron Distillation Results

April 30, 2026 · View on GitHub

The following MMLU results demonstrate knowledge distillation on student models that were first compressed using Puzzletron. The original (uncompressed) model serves as the teacher, and distillation recovers accuracy lost during compression.
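Concretely, one step of this kind of logit-matching distillation can be sketched in PyTorch as below. This is a minimal illustration, not Puzzletron's actual recipe: the temperature, optimizer handling, and HF-style `.logits` outputs are all assumptions.

```python
# Minimal knowledge-distillation step (a sketch, not the exact Puzzletron
# recipe): the student mimics the teacher's softened output distribution
# via a temperature-scaled KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions; temperature is an illustrative assumption.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

def distill_step(student, teacher, batch, optimizer, temperature=2.0):
    # Assumes Hugging Face-style models that return outputs with `.logits`.
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits  # teacher stays frozen
    student_logits = student(**batch).logits
    loss = distillation_loss(student_logits, teacher_logits, temperature)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```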

Qwen3-8B compressed to 80% of original

The student was created by compressing Qwen3-8B to 80% of its original size using Puzzletron.

| Model | MMLU | Humanities | Other | Social Sciences | STEM |
|---|---|---|---|---|---|
| Student (before distillation) | 0.5910 | 0.5046 | 0.6363 | 0.6831 | 0.5855 |
| Student (after distillation) | 0.6921 | 0.5906 | 0.7316 | 0.7975 | 0.7016 |
| Teacher (original Qwen3-8B) | 0.7493 | 0.6648 | 0.7856 | 0.8385 | 0.7526 |

MMLU accuracy improved from 59.10% to 69.21% (+10.11 pp) after distillation with just 100 iterations on WikiText-103, recovering 64% of the gap to the teacher model.
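The recovered-gap figure follows directly from the table above; as a quick check:

```python
# Fraction of the teacher-student MMLU gap closed by distillation,
# using the numbers from the table above.
before, after, teacher = 0.5910, 0.6921, 0.7493
recovered = (after - before) / (teacher - before)
print(f"{recovered:.0%}")  # -> 64%
```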

Llama-3.1-8B-Instruct compressed to 50% of original

The student was created by compressing Llama-3.1-8B-Instruct to 50% of its original size using Puzzletron.

| Model | MMLU | Humanities | Other | Social Sciences | STEM |
|---|---|---|---|---|---|
| Student (before distillation) | 0.2316 | 0.2462 | 0.2292 | 0.2250 | 0.2274 |
| Student (after distillation) | 0.2960 | 0.3146 | 0.3085 | 0.2925 | 0.2768 |
| Teacher (original Llama-3.1-8B-Instruct) | 0.6839 | 0.7231 | 0.7038 | 0.7667 | 0.5911 |

MMLU accuracy improved from 23.16% to 29.60% (+6.44 pp) after distillation, though at this aggressive 50% compression the student remains far below the teacher.

Llama-3.1-8B-Instruct compressed to 69% of original (regression)

The student was created by compressing Llama-3.1-8B-Instruct to ~69% of its original size using Puzzletron. This example shows regression due to overfitting on the small WikiText-103 dataset (100 iterations). MMLU was evaluated on a subset of 100 samples per task:

| Model | MMLU | Humanities | Other | Social Sciences | STEM |
|---|---|---|---|---|---|
| Student (before distillation) | 0.6626 | 0.7069 | 0.6892 | 0.7525 | 0.5574 |
| Student (after distillation) | 0.6496 | 0.6862 | 0.6677 | 0.7433 | 0.5532 |
| Teacher (original Llama-3.1-8B-Instruct) | 0.6839 | 0.7231 | 0.7038 | 0.7667 | 0.5911 |

MMLU accuracy decreased from 66.26% to 64.96% (-1.30 pp): the student overfit to WikiText-103 during distillation. This highlights the importance of larger, more diverse distillation datasets.
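The post does not name its evaluation tool; assuming EleutherAI's lm-evaluation-harness, a 100-sample-per-task MMLU run might look like the sketch below (the model path is hypothetical):

```python
# Sketch of a subsampled MMLU evaluation with lm-evaluation-harness
# (an assumption; the post does not name its eval tool).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/distilled-student",  # hypothetical path
    tasks=["mmlu"],
    limit=100,      # caps evaluation at 100 examples per subtask
    batch_size=8,
)
print(results["results"]["mmlu"])
```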

Recommendations

  • Use larger datasets for production distillation (e.g., Nemotron-Pretraining-SFT-v1) to avoid the overfitting seen in the regression case above; see the loading sketch after this list.
  • Train for more iterations to ensure proper convergence.
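As a sketch of the first recommendation, swapping WikiText-103 for a larger corpus via Hugging Face `datasets` might look like this; the Nemotron dataset ID below is an assumption, so verify the exact name on the Hub:

```python
# Sketch: stream a larger distillation corpus instead of WikiText-103.
from datasets import load_dataset

# Small corpus used in these experiments (prone to overfitting):
wikitext = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

# Larger, more diverse corpus for production distillation:
nemotron = load_dataset(
    "nvidia/Nemotron-Pretraining-SFT-v1",  # assumed ID; confirm on the HF Hub
    split="train",
    streaming=True,  # avoid downloading the full corpus up front
)
```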