JAX Toolbox

May 8, 2026

License: Apache-2.0

JAX Toolbox provides a public CI, Docker images for popular JAX libraries, and optimized JAX examples to simplify and enhance your JAX development experience on NVIDIA GPUs. It supports JAX libraries such as MaxText and Pallas.

Frameworks and Supported Models

We support and test the following JAX frameworks and model architectures. More details about each model and available containers can be found in their respective READMEs.

| Framework | Models | Use cases | Container |
| --- | --- | --- | --- |
| maxtext | GPT, LLaMA, Gemma, Mistral, Mixtral | pre-training | `ghcr.io/nvidia/jax:maxtext` |
| axlearn | Fuji | pre-training | `ghcr.io/nvidia/jax:axlearn` |
| alphafold3 | EvoFormer | inference | `ghcr.io/nvidia/jax:alphafold` |

Build Pipeline Status

| Container | Test status |
| --- | --- |
| `ghcr.io/nvidia/jax:base` | [no tests] |
| `ghcr.io/nvidia/jax:jax` | |
| `ghcr.io/nvidia/jax:equinox` | [tests disabled] |
| `ghcr.io/nvidia/jax:maxtext` | |
| `ghcr.io/nvidia/jax:axlearn` | |
| `ghcr.io/nvidia/jax:alphafold` | [no tests] |

In all cases, `ghcr.io/nvidia/jax:XXX` points to the latest nightly build of the container for `XXX`. For a stable reference, use `ghcr.io/nvidia/jax:XXX-YYYY-MM-DD`.
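For example, a date-pinned reference can be constructed like this (the date and framework shown are illustrative; any published nightly works):

```shell
TAG_DATE="2025-04-11"   # illustrative nightly date; pick any published nightly
IMAGE="ghcr.io/nvidia/jax:maxtext-${TAG_DATE}"
echo "${IMAGE}"
# docker pull "${IMAGE}"   # pull the pinned image for reproducible runs
```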

In addition to the public CI, we also run nightly internal CI on GB300, B300, GB200, B200, DGX Spark, RTX PRO 6000 Blackwell, Jetson AGX Thor, H100 SXM 80GB, and A100 SXM 80GB systems.

Environment Variables

The JAX images ship with the following flags and environment variables preset for performance tuning of XLA and NCCL:

| XLA flag | Value | Explanation |
| --- | --- | --- |
| `--xla_gpu_enable_latency_hiding_scheduler` | `true` | Allows XLA to move communication collectives to increase overlap with compute kernels |

There are various other XLA flags users can set to improve performance, and flags can be tuned per workload based on specific performance needs.
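For example, additional flags can be supplied at runtime through the `XLA_FLAGS` environment variable as a whitespace-separated list (the combination below is illustrative, not a tuning recommendation):

```shell
# Set XLA flags for the current session; values here are illustrative
export XLA_FLAGS="--xla_gpu_enable_latency_hiding_scheduler=true --xla_gpu_autotune_level=2"
echo "${XLA_FLAGS}"
```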

Versions

| First nightly with new base container | Base container |
| --- | --- |
| 2026-05-01 | `nvcr.io/nvidia/cuda-dl-base:26.04-cuda13.2-devel-ubuntu24.04` |
| 2026-03-12 | `nvcr.io/nvidia/cuda-dl-base:26.02-cuda13.1-devel-ubuntu24.04` |
| 2025-12-17 | `nvcr.io/nvidia/cuda-dl-base:25.11-cuda13.0-devel-ubuntu24.04` |
| 2025-10-02 | `nvcr.io/nvidia/cuda-dl-base:25.09-cuda13.0-devel-ubuntu24.04` |
| 2025-08-22 | `nvcr.io/nvidia/cuda-dl-base:25.08-cuda13.0-devel-ubuntu24.04` |
| 2025-07-03 | `nvcr.io/nvidia/cuda-dl-base:25.06-cuda12.9-devel-ubuntu24.04` |
| 2025-04-11 | `nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.8-devel-ubuntu24.04` |
| 2025-03-04 | `nvcr.io/nvidia/cuda-dl-base:25.02-cuda12.8-devel-ubuntu24.04` |
| 2025-01-31 | `nvcr.io/nvidia/cuda-dl-base:25.01-cuda12.8-devel-ubuntu24.04` |
| 2025-01-28 | `nvcr.io/nvidia/cuda-dl-base:24.11-cuda12.6-devel-ubuntu24.04` |
| 2024-12-07 | `nvidia/cuda:12.6.3-devel-ubuntu22.04` |
| 2024-11-06 | `nvidia/cuda:12.6.2-devel-ubuntu22.04` |
| 2024-09-25 | `nvidia/cuda:12.6.1-devel-ubuntu22.04` |
| 2024-07-24 | `nvidia/cuda:12.5.0-devel-ubuntu22.04` |

Profiling

See this page for more information about how to profile JAX programs on GPU.

NVIDIA Staging Containers

The JAX-Toolbox staging containers host pending NVIDIA-authored XLA and JAX enhancements for NVIDIA GPUs: PRs that are awaiting upstream review and merge in the OSS OpenXLA and JAX repositories. For this initiative, we publish containers tagged `jax-scale-training`.

| Pipeline | Container | Schedule |
| --- | --- | --- |
| `scale-training.yaml` | `ghcr.io/nvidia/jax:jax-scale-training` | Every other Saturday, 00:00 UTC |

The dedicated scale-training workflow runs every Saturday at 00:00 UTC and publishes containers every other Saturday, starting on April 18, 2026. Refer to STAGING.md for more information on the underlying XLA staging branch and the pending PRs it includes; that page also lists the corresponding JAX commit used.

Staging releases:

| Release date | Container | XLA branch (includes pending PRs) |
| --- | --- | --- |
| 2026-04-24 | `ghcr.io/nvidia/jax:jax-scale-training-2026-04-24` | `5dfe2147` |
| 2026-04-18 | `ghcr.io/nvidia/jax:jax-scale-training-2026-04-18` | `8147118` |

The underlying CUDA base container used to build the `-scale-training` containers can be found under the corresponding date in the Versions table above.

To check the versions of libraries in the container, you can run:

```shell
docker run --rm --gpus all ghcr.io/nvidia/jax:jax-scale-training-2026-04-24 bash -c '
echo "=== CUDA Toolkit ===" && echo ${CUDA_VERSION}
echo "=== cuDNN ===" && echo ${CUDNN_VERSION}
echo "=== NCCL ===" && echo ${NCCL_VERSION}
echo "=== Python Packages ===" && pip list | grep -iE "jax|flax|equinox|optax|chex|orbax|numpy|scipy|nvidia-"
'
```

Frequently asked questions (FAQ)

`bus error` when running JAX in a docker container

Solution:

```shell
docker run -it --shm-size=1g ...
```

Explanation: The bus error might occur due to the size limitation of /dev/shm. You can address this by increasing the shared memory size using the --shm-size option when launching your container.

enroot/pyxis reports error code 404 when importing multi-arch images

Problem description:

```
slurmstepd: error: pyxis:     [INFO] Authentication succeeded
slurmstepd: error: pyxis:     [INFO] Fetching image manifest list
slurmstepd: error: pyxis:     [INFO] Fetching image manifest
slurmstepd: error: pyxis:     [ERROR] URL https://ghcr.io/v2/nvidia/jax/manifests/<TAG> returned error code: 404 Not Found
```

Solution: Upgrade enroot, or apply the single-file patch mentioned in the enroot v3.4.0 release notes.

Explanation: Docker traditionally used Docker Schema V2.2 for multi-arch manifest lists but switched to the Open Container Initiative (OCI) format in version 20.10. Enroot added support for the OCI format in version 3.4.0.

JAX on Public Clouds

Resources

Videos