Docker Model CLI
October 24, 2025
A powerful command-line interface for managing, running, packaging, and deploying AI/ML models using Docker. This CLI lets you install and control the Docker Model Runner, interact with models, manage model artifacts, and integrate with OpenAI and other backends—all from your terminal.
Features
- Install Model Runner: Easily set up the Docker Model Runner for local or cloud environments with GPU support.
- Run Models: Execute models with prompts or in interactive chat mode, supporting multiline input and OpenAI-style backends.
- List Models: View all models available locally or via OpenAI, with options for JSON and quiet output.
- Package Models: Convert GGUF files into Docker model OCI artifacts and push them to registries, including license and context size options.
- Configure Models: Set runtime flags and context sizes for models.
- Logs & Status: Stream logs and check the status of the Model Runner and individual models.
- Tag, Pull, Push, Remove, Unload: Full lifecycle management for model artifacts.
- Compose & Desktop Integration: Advanced orchestration and desktop support for model backends.
Building
- Clone the repo:
  git clone https://github.com/docker/model-cli.git
  cd model-cli
- Build the CLI:
  make build
- Install Model Runner:
  Use ./model-cli install-runner --gpu cuda for GPU support, or --gpu auto for automatic detection.
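Putting the steps above together, a first-time setup might look like the following sketch. It uses only commands named in this README (install-runner, start-runner, status); the ordering is an assumption about a typical workflow, not a documented requirement.

```shell
# Build the CLI, then install and start the Model Runner.
# --gpu auto probes the host and falls back to CPU if no GPU is detected.
make build
./model-cli install-runner --gpu auto
./model-cli start-runner

# Confirm the runner is up before pulling or running models.
./model-cli status
```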
Usage
Run ./model-cli --help to see all commands and options.
Common Commands
- model-cli install-runner — Install the Docker Model Runner
- model-cli start-runner — Start the Docker Model Runner
- model-cli stop-runner — Stop the Docker Model Runner
- model-cli restart-runner — Restart the Docker Model Runner
- model-cli run MODEL [PROMPT] — Run a model with a prompt or enter chat mode
- model-cli list — List available models
- model-cli package --gguf <path> --push <target> — Package and push a model
- model-cli logs — View logs
- model-cli status — Check runner status
- model-cli configure MODEL [flags] — Configure model runtime
- model-cli unload MODEL — Unload a model
- model-cli tag SOURCE TARGET — Tag a model
- model-cli pull MODEL — Pull a model
- model-cli push MODEL — Push a model
- model-cli rm MODEL — Remove a model
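Chained together, the lifecycle commands above cover a model from pull to removal. The sketch below uses placeholder model names (ai/example and myorg/example:v1 are not real artifacts):

```shell
# Pull a model from a registry and run it with a one-shot prompt.
./model-cli pull ai/example
./model-cli run ai/example "Summarize this README in one sentence."

# Retag it under your own registry namespace and push.
./model-cli tag ai/example myorg/example:v1
./model-cli push myorg/example:v1

# Unload it from memory and remove the local copy when done.
./model-cli unload ai/example
./model-cli rm ai/example
```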
Example: Interactive Chat
./model-cli run llama.cpp "What is the capital of France?"
Or enter chat mode:
./model-cli run llama.cpp
> """
Tell me a joke.
"""
Advanced
- Packaging: Add licenses and set context size when packaging models for distribution.
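A packaging invocation might combine the documented --gguf and --push flags with the license and context-size options mentioned above. The --license and --context-size flag names below are illustrative guesses, not confirmed spellings; check ./model-cli package --help for the real ones.

```shell
# Package a local GGUF file as a Docker model OCI artifact and push it.
# --license and --context-size are assumed flag names (verify with --help).
./model-cli package \
  --gguf ./models/example.gguf \
  --license ./LICENSE.txt \
  --context-size 4096 \
  --push registry.example.com/myorg/example:v1
```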
Development
- Run unit tests:
  make unit-tests
- Generate docs:
  make docs