LiteRT-LM

May 5, 2026 · View on GitHub

LiteRT-LM is Google's production-ready, high-performance, open-source inference framework for deploying Large Language Models on edge devices.

🔗 Product Website

🔥 What's New: v0.11.0

  • Gemma 4 Multi-Token Prediction (MTP) Support: Supercharge Gemma 4 on-device inference with Single Position MTP, delivering >2x faster decode speeds on mobile GPUs with zero quality degradation (blog, documentation).

  • Windows Native Support: The LiteRT-LM CLI now runs natively on Windows with both CPU and GPU backend support.

👉 Try Gemma4-E4B with MTP on Linux, macOS, Windows, or Raspberry Pi with the LiteRT-LM CLI:

litert-lm run \
   --from-huggingface-repo=litert-community/gemma-4-E4B-it-litert-lm \
   gemma-4-E4B-it.litertlm \
   --backend=gpu \
   --enable-speculative-decoding=true \
   --prompt="What is the capital of France?"

🌟 Key Features

  • 📱 Cross-Platform Support: Android, iOS, Web, Desktop, and IoT (e.g. Raspberry Pi).
  • 🚀 Hardware Acceleration: Peak performance via GPU and NPU accelerators.
  • 👁️ Multi-Modality: Support for vision and audio inputs.
  • 🔧 Tool Use: Function calling support for agentic workflows.
  • 📚 Broad Model Support: Gemma, Llama, Phi-4, Qwen, and more.


🚀 Production-Ready for Google's Products

LiteRT-LM powers on-device GenAI experiences in Chrome, Chromebook Plus, Pixel Watch, and more.

You can also try the Google AI Edge Gallery app to run models immediately on your device.

Install the app today from Google Play or the App Store.

📰 Blogs & Announcements

  • Bring state-of-the-art agentic skills to the edge with Gemma 4: Deploy Gemma 4 in-app and across a broader range of devices with stellar performance and broad reach using LiteRT-LM.
  • On-device GenAI in Chrome, Chromebook Plus and Pixel Watch: Deploy language models on wearables and browser-based platforms using LiteRT-LM at scale.
  • On-device Function Calling in Google AI Edge Gallery: Explore how to fine-tune FunctionGemma and enable function calling capabilities powered by LiteRT-LM Tool Use APIs.
  • Google AI Edge small language models, multimodality, and function calling: Latest insights on RAG, multimodality, and function calling for edge language models.

๐Ÿƒ Quick Start

⚡ Quick Try (No Code)

Try LiteRT-LM immediately from your terminal without writing a single line of code using uv:

uv tool install litert-lm

litert-lm run \
  --from-huggingface-repo=google/gemma-3n-E2B-it-litert-lm \
  gemma-3n-E2B-it-int4 \
  --prompt="What is the capital of France?"

📚 Supported Language APIs

Ready to get started? Explore our language-specific guides and setup instructions.

  • Kotlin (✅ Stable): Android apps & JVM. See the Android (Kotlin) Guide.
  • Python (✅ Stable): Prototyping & scripting. See the Python Guide.
  • C++ (✅ Stable): High-performance native code. See the C++ Guide.
  • Swift (🚀 In Dev): Native iOS & macOS. (Guide coming soon.)
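If you just want to experiment before picking a language API, the CLI shown in the Quick Start can also be pointed at a model file that is already on disk. A minimal sketch, assuming the CLI accepts a local .litertlm path as its positional model argument (the path and --backend value below are illustrative, not part of any official example):

```shell
# Run a single prompt against a locally downloaded .litertlm model.
# The model path is a placeholder; substitute a file you have downloaded.
litert-lm run \
  ~/models/gemma-3n-E2B-it-int4.litertlm \
  --backend=cpu \
  --prompt="What is the capital of France?"
```

Swapping --backend=cpu for --backend=gpu selects the accelerated path on supported devices, mirroring the Gemma4-E4B example above.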

๐Ÿ—๏ธ Build From Source

This guide shows how to compile LiteRT-LM from source. If you are building from source, check out the latest stable release tag rather than the tip of the main branch.
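The checkout step above can be sketched as follows. This is an assumption-laden outline, not the official build recipe: the repository URL and the Bazel target label are illustrative, so consult the repository's BUILD files for the exact label.

```shell
# Clone the repository (URL assumed; verify against the GitHub project page).
git clone https://github.com/google-ai-edge/LiteRT-LM.git
cd LiteRT-LM

# Check out the most recent release tag reachable from HEAD
# instead of building from the tip of the main branch.
git checkout "$(git describe --tags --abbrev=0)"

# Build the CLI in optimized mode (target label is illustrative).
bazel build -c opt //runtime/engine:litert_lm_main
```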


📦 Releases

  • v0.11.0: Support Single Position Multi-token Prediction (MTP) for Gemma 4. Expand LiteRT-LM CLI to run natively on Windows with CPU and GPU backends.
  • v0.10.1: Deploy Gemma 4 with stellar performance (blog) and introduce the LiteRT-LM CLI.
  • v0.9.0: Improvements to function calling capabilities, better app performance stability.
  • v0.8.0: Desktop GPU support and Multi-Modality.
  • v0.7.0: NPU acceleration for Gemma models.

For a full list of releases, see GitHub Releases.