Squant

July 3, 2025

Official repo for the paper: Squat: Quant Small Language Models on the Edge

Implementation

Follow the BabyLLaMA instructions to set up the training environment, and the BabyLM Challenge instructions to set up the evaluation environment.

Usage

  1. Download the dataset from the BabyLM Challenge
  2. Clean the dataset following the BabyLLaMA preprocessing steps
  3. Pretrain the teacher model
  4. Download the FP16 LLaMA-58M model from BabyLLaMA
  5. Run QAT with the scripts in distill_train/scripts/
  6. Run evaluation with the scripts in evaluation_pipeline/
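The steps above can be sketched as a single shell session. The data path and the script names below are placeholders, not the repo's actual file names; substitute the concrete scripts from distill_train/scripts/ and evaluation_pipeline/.

```shell
# Hypothetical end-to-end sketch; all paths and script names are assumptions.

# Steps 1-2: place the downloaded, cleaned BabyLM data here (placeholder path)
DATA_DIR=data/babylm_cleaned
mkdir -p "$DATA_DIR"

# Step 3: pretrain the teacher model (command is illustrative, not verbatim)
# python pretrain_teacher.py --data "$DATA_DIR"

# Step 4: put the FP16 LLaMA-58M checkpoint from BabyLLaMA under checkpoints/
mkdir -p checkpoints

# Step 5: quantization-aware training via the provided scripts
# bash distill_train/scripts/<your_qat_script>.sh

# Step 6: evaluation via the BabyLM pipeline scripts
# bash evaluation_pipeline/<your_eval_script>.sh
```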

Citation

@article{shen2025squant,
  title={Squat: Quant Small Language Models on the Edge},
  author={Shen, Xuan and Dong, Peiyan and Kong, Zhenglun and Gong, Yifan and Yang, Changdi and Han, Zhaoyang and Xie, Yanyue and Lu, Lei and others},
  journal={arXiv preprint arXiv:2402.10787},
  year={2025}
}

Acknowledgment

Code is mainly based on BabyLLaMA and LSQ.