MobileLLM


This repository contains the training code for MobileLLM, introduced in our work "MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases", published at ICML 2024.

In this work, we comprehensively consider multiple design factors to obtain high-quality LLMs with fewer than a billion parameters. We integrate (1) the SwiGLU activation function, (2) deep and thin architectures, (3) embedding sharing, and (4) grouped-query attention to build MobileLLM. MobileLLM-125M/350M attains a remarkable 2.7%/4.3% accuracy boost over the preceding 125M/350M SoTA models on zero-shot commonsense reasoning tasks. In our updated version, we further demonstrate that our design philosophy scales effectively to larger models, with SoTA results for MobileLLM-600M/1B/1.5B.
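As a rough illustration of how these four design choices map onto a standard decoder-only configuration, here is a minimal sketch using a LLaMA-style config from HuggingFace Transformers. The dimensions below are placeholders chosen for illustration, not the released MobileLLM hyperparameters.

```python
# Illustrative only: expressing SwiGLU, a deep-and-thin shape, embedding sharing,
# and grouped-query attention in a LLaMA-style config. The sizes are placeholders,
# not the official MobileLLM settings.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=32000,
    hidden_size=576,            # "thin": small hidden dimension
    num_hidden_layers=30,       # "deep": many layers relative to the width
    intermediate_size=1536,     # FFN width of the SiLU-gated (SwiGLU) MLP
    hidden_act="silu",          # (1) SwiGLU activation
    num_attention_heads=9,      # query heads (head_dim = 576 / 9 = 64)
    num_key_value_heads=3,      # (4) grouped-query attention: 3 KV heads shared by 9 query heads
    tie_word_embeddings=True,   # (3) embedding sharing between input and output projections
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```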

News

  • Jan 2026: 🔥 MobileLLM-R1 has been accepted to ICLR 2026.
  • Nov 2025: 🌟 MobileLLM-R1.5 is released. MobileLLM-R1.5-950M outperforms DeepSeek-R1-Distill-Qwen-1.5B on all evaluated math and coding benchmarks, despite having significantly fewer parameters (0.95B vs. 1.5B).
  • Sept 2025: 🔥 Our follow-up work, MobileLLM-R1, is released. With only ~2T pretraining tokens (fewer than 5T total), it matches or surpasses Qwen3-0.6B (36T tokens) on MATH, GSM8K, MMLU, and LiveCodeBench. All code, models, data, and training recipes are released on HuggingFace.
  • Oct 30, 2024: 🚀 MobileLLM models are publicly available on HuggingFace.
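For the released checkpoints, the sketch below loads a model with the standard transformers API. The model ID facebook/MobileLLM-125M is an assumption here; check the HuggingFace collection for the exact repository names.

```python
# Minimal sketch: loading a released MobileLLM checkpoint from HuggingFace.
# The model ID below is an assumption; see the HuggingFace collection for the
# exact repository names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/MobileLLM-125M"  # assumed ID, adjust to the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```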

Citation

If you find our code useful for your research, please consider citing:

@article{liu2024mobilellm,
    title={MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases},
    author={Liu, Zechun and Zhao, Changsheng and Iandola, Forrest and Lai, Chen and Tian, Yuandong and Fedorov, Igor and Xiong, Yunyang and Chang, Ernie and Shi, Yangyang and Krishnamoorthi, Raghuraman and others},
    journal={arXiv preprint arXiv:2402.14905},
    year={2024}
}

Run

Step 1. Requirements:

  • python 3.9, pytorch >= 2.0
  • pip install -r requirement.txt

Step 2. Data preprocessing

Divide an existing tokenized dataset, or tokenize your own dataset, and evenly distribute it across the total number of training nodes, where each node comprises 1x8 GPUs. Then organize the data into the following structure:

  • basepath
    • 1
      • xxx.jsonl
    • 2
      • xxx.jsonl
    • ...
    • #nodes
      • xxx.jsonl

Each line of a jsonl file is a key-value pair of tokenized data {"token_ids": [1,2,3,4,...]}.
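A minimal sketch of producing this layout is shown below, assuming a placeholder tokenizer and sequence length; the amber-data-prep pipeline referenced in the next paragraph is the reference method.

```python
# Sketch of producing the layout above: tokenize a raw corpus, pack it into
# fixed-length chunks of token ids, and round-robin the chunks across node
# folders 1..num_nodes. The tokenizer name and sequence length are placeholders;
# use whatever matches your training config (or the amber-data-prep pipeline).
import json
import os

from transformers import AutoTokenizer

basepath = "./data"      # passed later as --train_data_local_path
num_nodes = 4            # total number of 1x8-GPU training nodes
seq_len = 2048           # placeholder context length

tokenizer = AutoTokenizer.from_pretrained("gpt2")                  # placeholder tokenizer
texts = ["example document one ...", "example document two ..."]   # your raw corpus

# Concatenate token ids and split them into fixed-length chunks.
all_ids = []
for t in texts:
    all_ids.extend(tokenizer(t)["input_ids"])
chunks = [all_ids[i:i + seq_len] for i in range(0, len(all_ids), seq_len)]

# Write one jsonl shard per node directory: basepath/1, basepath/2, ...
for node in range(1, num_nodes + 1):
    os.makedirs(os.path.join(basepath, str(node)), exist_ok=True)
    with open(os.path.join(basepath, str(node), "shard_000.jsonl"), "w") as f:
        for chunk in chunks[node - 1::num_nodes]:
            f.write(json.dumps({"token_ids": chunk}) + "\n")
```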

Our training code is compatible with the data pre-processing method in https://github.com/LLM360/amber-data-prep.

Step 3. Training script

The script pretrain.sh is provided to initiate training on a 1x8 node setup using torchrun. It can be modified to adjust the --nnodes parameter and other settings to suit different multi-node configurations, such as those launched with Slurm or torchx. The learning rate in the script is tuned for a single 1x8 node with a batch size of 32; if you increase the number of nodes or the batch size, scale the learning rate up linearly, as in the sketch below.
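For example, here is the linear scaling rule spelled out; the base learning rate is a placeholder, and the actual value is set in pretrain.sh.

```python
# Linear scaling of the learning rate when moving beyond 1x8 GPUs / batch size 32.
# base_lr is a placeholder; the real value lives in pretrain.sh.
base_lr = 2e-3                      # placeholder LR for 1 node (1x8 GPUs), batch size 32
base_nodes, base_batch_size = 1, 32

nodes, batch_size = 4, 32           # e.g. scaling out to 4 nodes, same per-step batch size

scale = (nodes * batch_size) / (base_nodes * base_batch_size)
print(f"Use learning rate {base_lr * scale:.2e}")   # 4x the effective batch -> 4x the LR
```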

Steps to run:

  • In the pretrain.sh file, set --train_data_local_path to the pre-processed data from Step 2 and --input_model_filename to ./configs/{model_size}/.
  • Run bash pretrain.sh

Evaluation on Wiki

Download the models and update the checkpoint path in eval.sh.

  • Run bash eval.sh
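If you prefer to sanity-check a checkpoint outside the provided script, the sketch below computes WikiText-2 perplexity with generic transformers/datasets APIs. It is not the repo's eval.sh pipeline, and the model ID and context length are placeholders.

```python
# Generic WikiText-2 perplexity sketch using transformers/datasets. This is an
# illustration, not the repo's eval.sh pipeline; the model ID and context
# length are placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/MobileLLM-125M"   # placeholder; point to your downloaded checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

max_len, nlls, n_tokens = 2048, [], 0
for start in range(0, ids.size(1), max_len):
    chunk = ids[:, start:start + max_len]
    if chunk.size(1) < 2:
        continue
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss   # mean NLL over chunk.size(1) - 1 targets
    nlls.append(loss * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1
print("Perplexity:", torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```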

Training cost

It takes the following number of days to train MobileLLM on 1T tokens using 32 NVIDIA A100 80G GPUs.

| 125M | 350M | 600M | 1B | 1.5B |
|------|------|------|----|------|
| ~3 days | ~6 days | ~8 days | ~12 days | ~18 days |
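As a back-of-the-envelope check on the throughput implied by the table:

```python
# Implied throughput for the 125M row above: 1T tokens on 32 GPUs in ~3 days.
total_tokens = 1e12
gpus, days = 32, 3
print(f"{total_tokens / (gpus * days) / 1e9:.1f}B tokens per GPU-day")  # roughly 10.4B
```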

Results on Zero-shot Common Sense Reasoning tasks

MobileLLM-125M

| model | arc_easy | arc_challenge | boolq | piqa | siqa | hellaswag | obqa | winogrande | avg. |
|---|---|---|---|---|---|---|---|---|---|
| OPT-125M | 41.3 | 25.2 | 57.5 | 62.0 | 41.9 | 31.1 | 31.2 | 50.8 | 42.6 |
| GPT-neo-125M | 40.7 | 24.8 | 61.3 | 62.5 | 41.9 | 29.7 | 31.6 | 50.7 | 42.9 |
| Pythia-160M | 40.0 | 25.3 | 59.5 | 62.0 | 41.5 | 29.9 | 31.2 | 50.9 | 42.5 |
| MobileLLM-125M | 43.9 | 27.1 | 60.2 | 65.3 | 42.4 | 38.9 | 39.5 | 53.1 | 46.3 |
| MobileLLM-LS-125M | 45.8 | 28.7 | 60.4 | 65.7 | 42.9 | 39.5 | 41.1 | 52.1 | 47.0 |

MobileLLM-350M

| model | arc_easy | arc_challenge | boolq | piqa | siqa | hellaswag | obqa | winogrande | avg. |
|---|---|---|---|---|---|---|---|---|---|
| OPT-350M | 41.9 | 25.7 | 54.0 | 64.8 | 42.6 | 36.2 | 33.3 | 52.4 | 43.9 |
| Pythia-410M | 47.1 | 30.3 | 55.3 | 67.2 | 43.1 | 40.1 | 36.2 | 53.4 | 46.6 |
| MobileLLM-350M | 53.8 | 33.5 | 62.4 | 68.6 | 44.7 | 49.6 | 40.0 | 57.6 | 51.3 |
| MobileLLM-LS-350M | 54.4 | 32.5 | 62.8 | 69.8 | 44.1 | 50.6 | 45.8 | 57.2 | 52.1 |

MobileLLM-600M

| model | arc_easy | arc_challenge | boolq | piqa | siqa | hellaswag | obqa | winogrande | avg. |
|---|---|---|---|---|---|---|---|---|---|
| Qwen1.5-500M | 54.7 | 32.1 | 46.9 | 68.9 | 46.0 | 48.8 | 37.7 | 55.0 | 48.8 |
| BLOOM-560M | 43.7 | 27.5 | 53.7 | 65.1 | 42.5 | 36.5 | 32.6 | 52.2 | 44.2 |
| MobiLlama-800M | 52.0 | 31.7 | 54.6 | 73.0 | 43.3 | 52.3 | 42.5 | 56.3 | 50.7 |
| MobileLLM-600M | 58.1 | 35.8 | 61.0 | 72.3 | 44.9 | 55.9 | 47.9 | 58.6 | 54.3 |

MobileLLM-1B

| model | arc_easy | arc_challenge | boolq | piqa | siqa | hellaswag | obqa | winogrande | avg. |
|---|---|---|---|---|---|---|---|---|---|
| Pythia-1B | 49.9 | 30.4 | 58.7 | 69.2 | 43.3 | 47.4 | 38.6 | 52.2 | 48.7 |
| MobiLlama-1B | 59.7 | 38.4 | 59.2 | 74.5 | 44.9 | 62.0 | 43.7 | 59.0 | 55.2 |
| Falcon-1B | 59.5 | 38.4 | 63.9 | 74.6 | 44.6 | 62.9 | 45.6 | 60.9 | 56.3 |
| BLOOM-1.1B | 47.6 | 27.3 | 58.6 | 67.0 | 42.4 | 42.2 | 36.6 | 53.8 | 46.9 |
| TinyLlama-1.1B | 59.2 | 37.1 | 58.1 | 72.9 | 43.9 | 59.1 | 44.7 | 58.8 | 54.2 |
| MobileLLM-1B | 63.0 | 39.0 | 66.7 | 74.4 | 45.0 | 61.4 | 46.8 | 62.3 | 57.3 |

MobileLLM-1.5B

| model | arc_easy | arc_challenge | boolq | piqa | siqa | hellaswag | obqa | winogrande | avg. |
|---|---|---|---|---|---|---|---|---|---|
| GPT-neo-1.3B | 51.3 | 33.0 | 61.8 | 70.9 | 43.7 | 48.6 | 41.2 | 54.5 | 50.6 |
| OPT-1.3B | 54.4 | 31.7 | 58.4 | 71.5 | 44.7 | 53.7 | 44.6 | 59.1 | 52.3 |
| BLOOM-1.7B | 50.9 | 31.2 | 61.7 | 70.0 | 43.2 | 47.2 | 36.2 | 56.1 | 49.6 |
| Qwen1.5-1.8B | 61.1 | 36.5 | 68.3 | 74.1 | 47.2 | 60.4 | 42.9 | 61.2 | 56.5 |
| GPT-neo-2.7B | 55.8 | 34.3 | 62.4 | 72.9 | 43.6 | 55.6 | 40.0 | 57.9 | 52.8 |
| OPT-2.7B | 56.6 | 34.6 | 61.8 | 74.5 | 45.6 | 60.2 | 48.2 | 59.6 | 55.1 |
| Pythia-2.8B | 59.4 | 38.9 | 66.1 | 73.8 | 44.5 | 59.6 | 45.0 | 59.4 | 55.8 |
| BLOOM-3B | 55.1 | 33.6 | 62.1 | 70.5 | 43.2 | 53.9 | 41.6 | 58.2 | 52.3 |
| MobileLLM-1.5B | 67.5 | 40.9 | 65.7 | 74.8 | 46.4 | 64.5 | 50.5 | 64.7 | 59.4 |

Acknowledgement

This code is partially based on the HuggingFace Transformers repository (Apache License).

Contact

Zechun Liu, Meta Inc (zechunliu at meta dot com)

Changsheng Zhao, Meta Inc (cszhao at meta dot com)

Relevant Projects

SpinQuant: LLM Quantization with Learned Rotations (ICLR 2025) [Paper] [Code]

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models [Paper] [Code]

What's Next?

MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes (ICLR 2026) [Paper] [Code] [Models]

MobileLLM-R1.5 [Models]

License

MobileLLM is FAIR NC licensed as of now.