
April 30, 2026



Embedding Sentinel-2 and Sentinel-1 with a Little Help of AlphaEarth

📄 Official paper coming soon. The write-up (architecture, evaluation on 6,250 test tiles, modality attribution, multi-temporal aggregation) will be published on EarthArXiv shortly. A working draft is available as a local PDF: docs/beta_earth_preprint.pdf.


What is BetaEarth?


BetaEarth produces dense 10 m geospatial embedding fields from Sentinel-2 and Sentinel-1 imagery. It is trained to approximate the outputs of AlphaEarth Foundations (AEF), the embedding product released by Google and Google DeepMind, using only AEF's public precomputed embeddings as supervision.

BetaEarth has no access to AEF's weights or architecture. It is an independent model, not a variant or extension of AEF. Emulation quality is below AEF's, but BetaEarth runs locally on any Sentinel scene and its full pipeline is open.


When to use BetaEarth

  • Offline generation. AEF is distributed as annual global rasters generated inside Google Earth Engine. BetaEarth runs on any S2/S1 scene locally, which is useful for custom temporal windows or deployments without Earth Engine access.
  • Open pipeline. Training data, weights, and inference code are all open, so BetaEarth can serve as an approximate reference for studying how multimodal Earth-observation embeddings behave under missing modalities, temporal averaging, or compression.

Quickstart

pip install betaearth
from betaearth import BetaEarth

model = BetaEarth.from_pretrained()  # default: curriculum flagship (HF repo: betaearth-segformer-film-robust)

# Any modality can be omitted; the curriculum model handles missing inputs.
# predict() tiles internally (224 px tile, 112 px overlap, trapezoidal blend),
# so any (H, W) works, including full 1068x1068 Major TOM tiles or larger.
embedding = model.predict(
    s2_l2a=s2_l2a,   # (9, H, W) uint16 DN; bands [B02,B03,B04,B08,B05,B06,B07,B11,B12]
    s2_l1c=s2_l1c,   # (9, H, W) uint16 DN; same band order as L2A
    s1=s1,           # (2, H, W) float32 linear power (NOT dB); bands [VV, VH]
    dem=dem,         # (1, H, W) float32 elevation in metres (raw COP-DEM)
    doy=182,         # day-of-year of the S2 acquisition (1-366)
)
# embedding: (H, W, 64) float32, L2-normalised per pixel (unit vectors on S^63)

Input formats

All spatial arrays share the same (H, W) and are pixel-aligned. BetaEarth normalises internally: pass the raw source values described below, with no custom scaling.

| Input | Shape | Dtype | Units / range | Band order | Typical source |
|---|---|---|---|---|---|
| s2_l1c | (9, H, W) | uint16 | Digital numbers, 0–10 000+ (top-of-atmosphere reflectance × 10 000). Divided by 10 000 internally. | [B02, B03, B04, B08, B05, B06, B07, B11, B12] | Copernicus Data Space Ecosystem, Sentinel Hub, AWS Open Data |
| s2_l2a | (9, H, W) | uint16 | Digital numbers, 0–10 000+ (atmospherically-corrected surface reflectance × 10 000). Divided by 10 000 internally. | same as L1C | Planetary Computer, Sentinel Hub, AWS Earth Search |
| s1 | (2, H, W) | float32 | Linear power (typical range ~0–200, not 0–1). Converted to dB and rescaled internally. | [VV, VH] | Planetary Computer sentinel-1-rtc, ASF Radiometric Terrain Corrected |
| dem | (1, H, W) | float32 | Raw elevation in metres (COP-DEM GLO-30 range ~−500 to 9000). Min-max rescaled internally. | – | Copernicus DEM GLO-30 (Planetary Computer cop-dem-glo-30) |
| doy | scalar | int | 1–366; day-of-year of the S2 acquisition (not epoch, not ISO) | – | – |

Output is (H, W, 64) float32, L2-normalised per pixel. H and W can be anything ≥ 224; predict() tiles the input with a 224×224 window internally and stitches.
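Because the output field is unit-normalised, per-pixel similarity computations reduce to dot products. A runnable numpy sketch using a random stand-in for a predict() output (hypothetical data, not the model):

```python
import numpy as np

# Stand-in for a predict() output: random vectors normalised the same way
# the model normalises its output (L2 per pixel). Purely illustrative data.
rng = np.random.default_rng(0)
emb = rng.normal(size=(224, 224, 64)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=-1, keepdims=True)

# Every pixel is a unit vector, so cosine similarity against a reference
# pixel is a plain dot product -- no extra normalisation needed.
ref = emb[100, 100]        # (64,)
sim = emb @ ref            # (224, 224) cosine-similarity map
```

The same trick gives cheap change maps between two annual embeddings: multiply element-wise and sum over the last axis.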

Input gotchas

  • S2 band order matters. The 10 m bands come first, then 20 m: [B02, B03, B04, B08, B05, B06, B07, B11, B12]. Any other order silently produces garbage embeddings. If you fetch from a STAC source that returns bands in their native order (B01, B02, …), you must reorder before passing in.
  • L1C and L2A are NOT interchangeable. They are handled by separate encoders and represent distinct processing levels (top-of-atmosphere vs surface reflectance). The default curriculum (flagship) model handles any subset (single L1C, single L2A, both, or neither) gracefully. The peak-quality variants (betaearth-segformer-film = reinit, betaearth-segformer-film-hilr, betaearth-segformer-film-scratch) were trained with L1C + L2A jointly and drop ~32 % cos sim if only one processing level is provided.
  • Raw DN values, not reflectance. S2 normalisation happens inside the model: pass the uint16 DN as-is.
  • S1 must be linear power, not dB. Planetary Computer's sentinel-1-rtc collection returns linear power by default. If you have GRD-dB data (e.g. from SNAP), convert first: linear = 10 ** (db / 10). Typical linear-power magnitudes are ~0.01–200; predict() handles the dB conversion and clipping internally.
  • DEM in metres, not pre-normalised. Pass the raw elevation array (COP-DEM GLO-30 output). predict() applies per-input min-max rescaling internally. If you already have a DEM rescaled to [0, 1], pass normalise=False to predict().
  • Shape convention. All spatial inputs are channel-first (C, H, W), consistent with torch conventions but opposite of common remote-sensing (H, W, C) rasters.
  • Tiling is automatic. predict() uses a 224 px tile with 112 px overlap (trapezoidal blending) by default; this matches the paper's eval pipeline and gives seam-free PCA-RGB previews on low-variance scenes (arid, water, snow). Override with tile_size=... / overlap=... if you want a different stitch; overlap=32 is ~3× faster but can show visible seams on uniform surfaces. Anything below 224 px total will fail.
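The band-order and dB gotchas above take only a few lines to handle. A hedged numpy sketch (the `native_order` list is an assumption about your STAC source; check what your client actually returns):

```python
import numpy as np
from datetime import date

# Reorder bands fetched in native Sentinel-2 order into BetaEarth's order.
native_order = ["B02", "B03", "B04", "B05", "B06", "B07", "B08", "B11", "B12"]  # example source order
beta_order   = ["B02", "B03", "B04", "B08", "B05", "B06", "B07", "B11", "B12"]  # required by BetaEarth
perm = [native_order.index(b) for b in beta_order]

s2_native = np.zeros((9, 224, 224), dtype=np.uint16)  # placeholder (C, H, W) stack
s2_l2a = s2_native[perm]                              # channel-first, so index axis 0

# Convert GRD-dB backscatter to the linear power predict() expects.
s1_db = np.full((2, 224, 224), -12.0, dtype=np.float32)  # placeholder VV/VH in dB
s1 = 10.0 ** (s1_db / 10.0)                              # linear power, ~0.01-200 typical

# Day-of-year from the S2 acquisition date.
doy = date(2023, 7, 1).timetuple().tm_yday
```

The permutation is computed once from band names rather than hard-coded, so it survives a source that changes its native ordering.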

Try in 30 seconds on Colab. Pick the notebook that matches your use case:

| Notebook | When to use | Inputs | Runtime |
|---|---|---|---|
| ⚡ demo.ipynb (Open In Colab) | Fast mono-temporal quickstart. Understand the model in one minute. | 1 Major TOM tile (single parquet row, no STAC) | ~30 s on T4 |
| 🌍 generate_demo.ipynb (Open In Colab) | Flexible multi-temporal: any bounding box, annual aggregated embedding. Same pipeline as the hosted app. | S2 + S1 + DEM from Planetary Computer (multi-scene) | few minutes on T4 |

Or skip the notebooks: examples/predict.py is the minimal local script.


Generate embeddings for any area

Four entry points, from zero-install to fully scripted.

1. Hosted app (no install)

Pick a bounding box on a map, click run. Free tier is CPU-only and caps total output at 3 GB.

Open in HF Spaces


2. Colab notebooks

Two notebooks depending on how much acquisition plumbing you want:

  • ⚡ examples/demo.ipynb (Open In Colab): fast mono-temporal. One Major TOM tile, one predict(), one PCA-RGB. No STAC, no credentials. Good for understanding the model.
  • 🌍 examples/generate_demo.ipynb (Open In Colab): flexible multi-temporal. Pick any bbox on an interactive map; the notebook downloads Sentinel-2 L2A + Sentinel-1 RTC + COP-DEM from Planetary Computer, runs per-timestamp inference, and averages into an annual 64-band GeoTIFF. Uses Colab's free T4 GPU. This is the same pipeline as the hosted Streamlit app.

3. Command-line generation (the main path for real work)

betaearth-generate ships with the package and drives the same pipeline: download Sentinel-2 L2A + Sentinel-1 RTC + COP-DEM from Planetary Computer, run tiled inference, write an annual 64-band COG plus a full provenance manifest per year.

pip install 'betaearth[generate]'

# By bounding box (W S E N), one or more years
betaearth-generate --bbox 13.1 48.7 13.8 49.2 --years 2020 2021 2022 2023 2024 2025 \
    --output_dir outputs/bavarian_forest

# By OSM relation id (resolved to its bbox)
betaearth-generate --osm_relation 1864214 --years 2024 --output_dir outputs/bav

No API keys needed: Planetary Computer is publicly accessible. A CUDA GPU is used automatically if available; CPU works but is slower. Each run produces, per year:

| File | Description |
|---|---|
| {year}.tif | 64-band annual average embedding (L2-normalised per pixel), COG |
| {year}_preview_pca.png | 3-band PCA-RGB quick-look of the annual mosaic |
| {year}_manifest.json | Provenance: model repo + version, CRS/bounds/shape, acquisition params, full STAC id list of every scene used (cloud cover, coverage, S1 orbit/polarisation, ...) |
| {year}_files/{date}_{sensor}/ | Optional per-scene outputs, only with --save_per_timestamp_embedding / --save_scenes |

The manifest is deliberately verbose so any downstream user of the embedding can verify exactly which Sentinel products fed into it. Import betaearth.generate for the Python API that backs the CLI; a minimal scripted example is in examples/predict.py.
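A quick-look like {year}_preview_pca.png can be reproduced from any 64-band embedding with a plain PCA projection. A minimal sketch of one way to do it, assuming an (H, W, 64) float32 array (random stand-in here); the pipeline's actual stretch and component signs may differ:

```python
import numpy as np

emb = np.random.default_rng(0).normal(size=(128, 128, 64)).astype(np.float32)  # stand-in embedding
h, w, c = emb.shape

flat = emb.reshape(-1, c)
flat = flat - flat.mean(axis=0)          # centre each band
# Top-3 principal components via SVD of the (N, 64) matrix.
_, _, vt = np.linalg.svd(flat, full_matrices=False)
rgb = flat @ vt[:3].T                    # (N, 3) PCA scores
# Min-max stretch each component to [0, 1] for display.
rgb = (rgb - rgb.min(axis=0)) / (rgb.max(axis=0) - rgb.min(axis=0) + 1e-12)
preview = rgb.reshape(h, w, 3)
```

For large mosaics, fitting the PCA on a random pixel subsample and projecting the full raster keeps memory bounded.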

4. Streamlit app (local)

The same app as the hosted Space, run on your own compute:

git clone https://github.com/asterisk-labs/beta-earth
cd beta-earth
pip install 'betaearth[demo]'
streamlit run demo/app.py

Then open http://localhost:8501 in your browser. Raise the 3 GB cap via env var:

BETAEARTH_MAX_OUTPUT_MB=50000 streamlit run demo/app.py   # 50 GB ceiling

Models

We release 8 model variants spanning different trade-offs between quality, parameter efficiency, and input requirements.

Main results (full 6,250-tile test set)

Preliminary results from the first preprint version. Numbers match the working draft (docs/beta_earth_preprint.pdf, Table II): full test set; own-probe LULC. Subject to revision once the paper goes live on EarthArXiv and in subsequent versions as evaluation is expanded.

| Model | Test Cos Sim | Std | LULC Acc | Model Size | Inputs |
|---|---|---|---|---|---|
| SF curriculum (flagship) | 0.873 | 0.109 | 0.833 | 104.8M | Any subset of S2/S1/DEM + DOY |
| SF frozen+FiLM (reinit) | 0.883 | 0.106 | 0.836 | 104.8M | S2 L1C+L2A, S1, DEM, DOY |
| SF frozen+FiLM (hilr) | 0.883 | 0.107 | 0.838 | 104.8M | S2 L1C+L2A, S1, DEM, DOY |
| SF from-scratch+FiLM | 0.883 | 0.105 | 0.835 | 104.8M | S2 L1C+L2A, S1, DEM, DOY |
| SF no FiLM (baseline) | 0.875 | 0.110 | 0.838 | 104.8M | S2 L1C+L2A, S1, DEM |
| DINOv3 ViT-L/16 | 0.873 | 0.109 | 0.840 | 304M | 6 primitives + DOY |
| DINOv3 ViT-S/16 | 0.862 | 0.112 | 0.836 | 24M | 6 primitives + DOY |
| SF RGB-only+FiLM | 0.834 | 0.128 | 0.823 | 26.3M | S2 RGB, DOY |
| Real AlphaEarth (reference) | – | – | 0.856 | – | – |

Single-modality performance (curriculum flagship, test set)

Preliminary, from the working draft. Values match the preprint Table III (curriculum on the full 6,250-tile test set). See docs/beta_earth_preprint.pdf.

The curriculum model is the only variant that remains functional under severely reduced inputs:

| Input subset | Cosine sim |
|---|---|
| All modalities | 0.872 |
| No DEM (S2+S1 only) | 0.854 |
| No S1 (S2+DEM only) | 0.848 |
| S2 only | 0.817 |
| No time (DOY=0) | 0.773 |
| S1 only | 0.710 |
| DEM only | 0.541 |

For users with access to only one S2 processing level, separate validation-set measurements give L1C-only 0.806 and L2A-only 0.755 (the paper's test-set ablation groups both L1C and L2A together under "S2 only").

Which model should I use?

| Use case | Recommended model | Why |
|---|---|---|
| General use (default) | SF curriculum (flagship) | Works with any input subset; only variant that stays usable on single-modality inputs (S1-only 0.710, DEM-only 0.541) |
| Maximum quality | SF frozen+FiLM (reinit) | Highest test cos sim (0.883); requires all 4 modalities |
| No timestamp needed | SF no FiLM (baseline) | Does not consume day-of-year input; reaches 0.875 |
| Lightweight / edge | DINOv3 ViT-S/16 | 24M params, 0.862 test cos sim |
| Minimal data requirements | SF RGB-only+FiLM | Only needs 3-band S2 RGB + DOY |
| Best downstream LULC | DINOv3 ViT-L/16 | 0.840 own-probe LULC (closest to AEF's 0.856 ceiling) |
| Research / ablation | SF frozen+FiLM (hilr), SF from-scratch+FiLM | Alternative training strategies for comparison against the reinit variant |

Architecture overview

DINOv3 models use a single shared frozen DINOv3 backbone applied to 3-band spectral primitives:

| Primitive | Bands | Captures |
|---|---|---|
| True-colour RGB | B04/B03/B02 | Visual texture, built environment |
| False-colour IR | B08/B04/B03 | Vegetation health (NIR) |
| SWIR composite | B12/B11/B04 | Moisture, bare soil, burn scars |
| Red-edge | B07/B06/B05 | Canopy structure, chlorophyll |
| SAR | VV/VH/ratio | Structure, moisture (from S1) |
| Topography | Elevation/Slope/Aspect | Terrain (from COP-DEM) |

Primitives are fused via permutation-invariant cross-attention (SetFusion).
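The six primitives can be assembled from the raw modality stacks with plain indexing. A sketch, assuming the BetaEarth S2 band order and a hypothetical 30 m grid spacing for the slope/aspect derivatives (the training pipeline's exact formulas are not documented here):

```python
import numpy as np

H, W = 64, 64
s2  = np.random.rand(9, H, W).astype(np.float32)           # [B02,B03,B04,B08,B05,B06,B07,B11,B12]
s1  = np.random.rand(2, H, W).astype(np.float32) + 0.01    # [VV, VH], linear power
dem = np.random.rand(1, H, W).astype(np.float32) * 1000.0  # elevation in metres

B02, B03, B04, B08, B05, B06, B07, B11, B12 = s2

# Slope/aspect from elevation gradients; 30 m spacing is an assumption.
dzdy, dzdx = np.gradient(dem[0], 30.0)
slope  = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0

primitives = {
    "rgb":      np.stack([B04, B03, B02]),          # true colour
    "false_ir": np.stack([B08, B04, B03]),          # NIR vegetation
    "swir":     np.stack([B12, B11, B04]),          # moisture / bare soil
    "red_edge": np.stack([B07, B06, B05]),          # canopy / chlorophyll
    "sar":      np.stack([s1[0], s1[1], s1[0] / (s1[1] + 1e-6)]),  # VV, VH, ratio
    "topo":     np.stack([dem[0], slope, aspect]),
}
```

Each primitive is a 3-channel image, so a single frozen 3-channel DINOv3 backbone can process all six without modification.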

SegFormer models use 4 separate MiT-B2 encoders processing each modality's raw bands natively (9ch S2-L1C, 9ch S2-L2A, 2ch S1, 1ch DEM), with channel concatenation fusion.

All models use FiLM temporal conditioning (day-of-year modulation) except the no-FiLM baseline.

Key findings

  • Temporal conditioning as spectral compensation: FiLM importance scales inversely with spectral access (RGB-only 22pp > DINOv3 18pp > SegFormer scratch 14pp > frozen SegFormer 5pp).
  • Multi-temporal averaging of 4+ observations improves emulation by up to +13pp over single timestamps, with the benefit being biome-dependent (gap-fill wins in boreal regions; S2-only wins in arid/temperate).
  • Predicted embeddings retain 97% of downstream LULC classification accuracy (own-probe linear probe on IO-LULC) across all full-spectrum variants.

Model Properties

| Property | Value |
|---|---|
| Output | Dense embedding field: (H, W, 64) per tile at 10 m resolution |
| Output normalisation | L2-normalised per pixel (unit vectors on S^63) |
| Quantisation | Original AEF: int8 on S^63; BetaEarth outputs float32 |
| Tile size | 10.68 x 10.68 km (1068 x 1068 px), Major TOM grid |
| Training data | 62,489 Major TOM grid cells (49,991 train / 6,248 val / 6,250 test) |
| Loss | Cosine similarity + 0.1 * MSE, masked to valid pixels |
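One plausible reading of the loss row, sketched in numpy: a per-pixel (1 − cosine) term plus 0.1-weighted MSE, averaged over valid pixels only. The exact reduction order, and whether the MSE is taken on normalised vectors, are assumptions:

```python
import numpy as np

def betaearth_style_loss(pred, target, valid):
    """pred, target: (H, W, 64) float arrays; valid: (H, W) bool mask."""
    p = pred[valid].astype(np.float64)   # (N, 64): valid pixels only
    t = target[valid].astype(np.float64)
    p /= np.linalg.norm(p, axis=-1, keepdims=True)
    t /= np.linalg.norm(t, axis=-1, keepdims=True)
    cos = (p * t).sum(axis=-1)           # per-pixel cosine similarity
    mse = ((p - t) ** 2).mean(axis=-1)   # per-pixel MSE (on unit vectors here)
    return float(((1.0 - cos) + 0.1 * mse).mean())

# Identical fields give cos = 1 and mse = 0, i.e. zero loss.
x = np.random.default_rng(1).normal(size=(8, 8, 64))
mask = np.ones((8, 8), dtype=bool)
zero_loss = betaearth_style_loss(x, x, mask)
```

Masking before the reduction means cloudy or nodata pixels contribute nothing to the gradient rather than being averaged in at zero.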

Multi-temporal averaging

Build an annual mosaic by predicting each scene separately and averaging the L2-normalised outputs; the benefit saturates at ~4 observations per pixel:

import numpy as np

preds = []
for s2, s1, doy in zip(s2_timeseries, s1_timeseries, doys):
    pred = model.predict(s2_l2a=s2, s1=s1, dem=dem, doy=doy)  # (H, W, 64), unit-norm
    preds.append(pred)

# A mean of unit vectors is not itself unit-length, so re-normalise per pixel.
annual = np.mean(preds, axis=0)
annual /= np.linalg.norm(annual, axis=-1, keepdims=True)

(betaearth-generate and the Streamlit demo wrap this pattern with cloud masking, seasonal balancing, and a provenance manifest.)


Data Access

All training data is from the Major TOM community project and is freely available on HuggingFace:

| Dataset | Description |
|---|---|
| Major-TOM/Core-S2L2A | Sentinel-2 L2A imagery |
| Major-TOM/Core-S2L1C | Sentinel-2 L1C imagery |
| Major-TOM/Core-S1RTC | Sentinel-1 RTC imagery |
| Major-TOM/Core-AlphaEarth-Embeddings | AEF target embeddings |

Data normalisation

All input data should be stored as raw values. Normalisation happens inside the model:

  • S2 L1C/L2A: uint16 DN (0-10000+), divided by 10000 internally
  • S1 RTC: linear power (float32, ~0-200), log-transformed internally
  • COP-DEM: raw elevation in metres (float32, COP-DEM GLO-30 range ~-500 to 9000), min-max rescaled internally (pass normalise=False to predict() if your DEM is already in [0, 1])

Important: S2 bands must be ordered [B02, B03, B04, B08, B05, B06, B07, B11, B12] (10 m bands first, then 20 m), the order BetaEarth was trained with.
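An illustrative sketch of the normalisations described above (not the library's actual code; the S1 dB clip range in particular is a hypothetical stand-in, since the internal constants are not documented here):

```python
import numpy as np

def norm_s2(dn):
    """uint16 DN -> reflectance, as documented (divide by 10 000)."""
    return dn.astype(np.float32) / 10000.0

def norm_s1(linear, db_lo=-30.0, db_hi=5.0):
    """Linear power -> dB -> [0, 1]. The clip range is HYPOTHETICAL."""
    db = 10.0 * np.log10(np.clip(linear, 1e-6, None))
    return (np.clip(db, db_lo, db_hi) - db_lo) / (db_hi - db_lo)

def norm_dem(elev):
    """Per-input min-max rescale, as documented."""
    lo, hi = float(elev.min()), float(elev.max())
    return (elev - lo) / (hi - lo + 1e-12)

s2n  = norm_s2(np.array([0, 5000, 10000], dtype=np.uint16))
s1n  = norm_s1(np.array([0.1], dtype=np.float32))
demn = norm_dem(np.array([-100.0, 450.0, 1000.0], dtype=np.float32))
```

The point of the sketch is the contract, not the constants: store raw values and let the model (or these stand-ins) map them into its training ranges.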

BetaEarth on Satellite-Image-Deep-Learning

You can watch the episode about BetaEarth here: Watch the video


Citation

@inproceedings{czerkawski2026betaearth,
  title     = {BetaEarth: Emulating Closed-Source Earth Observation Models Through Their Public Embeddings},
  author    = {Czerkawski, Mikolaj},
  year      = {2026}
}

If using BetaEarth embeddings in research, also cite AlphaEarth Foundations (arXiv:2507.22291).


License and Attribution

BetaEarth model weights are released under CC-BY 4.0, matching the license of the AlphaEarth Foundations embedding archive used for training supervision.

Attribution for AEF training data:

"The AlphaEarth Foundations Satellite Embedding dataset is produced by Google and Google DeepMind."

Training imagery is sourced from Major TOM (Apache 2.0) and Copernicus Sentinel (free and open access).