RGBAvatar
April 24, 2026
Official implementation of CVPR 2025 Highlight paper "RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars".
Installation
1. Clone this repository.

```shell
git clone https://github.com/gapszju/RGBAvatar.git
cd RGBAvatar
```

2. Create the conda environment.

```shell
conda create -n rgbavatar python=3.10
conda activate rgbavatar
```

3. Install PyTorch and nvdiffrast. Please make sure that the PyTorch CUDA version matches your system's CUDA version. We use CUDA 11.8 here.

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install git+https://github.com/NVlabs/nvdiffrast
```

4. Install the other packages.

```shell
pip install -r requirements.txt
```

5. Compile the PyTorch CUDA extension.

```shell
pip install submodules/diff-gaussian-rasterization
```
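After installation, a quick sanity check can confirm that the key packages are importable. This is a hedged sketch, not part of the repo; it only checks that the packages installed above can be found.

```python
# Sanity-check sketch (not part of the repo): report which of the
# packages installed above are missing from the current environment.
import importlib.util


def missing_packages(required=("torch", "torchvision", "nvdiffrast")):
    """Return the subset of `required` that cannot be imported."""
    return [name for name in required if importlib.util.find_spec(name) is None]


if __name__ == "__main__":
    missing = missing_packages()
    print("all packages found" if not missing else f"missing: {missing}")
```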
Data Preprocessing
For offline reconstruction, we use the FLAME template model and follow INSTA to preprocess the video sequence.
1. Create an account on the FLAME website and download the FLAME 2020 model. Please unzip FLAME2020.zip and put `generic_model.pkl` under `./data/FLAME2020`.

2. Follow the instructions in INSTA. You may first use the Metrical Photometric Tracker for tracking, then run the `generate.sh` script provided by INSTA to mask the head.

3. Organize INSTA's output in the following form, and set `data_dir` in the config file to the dataset path.

```
<DATA_DIR>
├── <SUBJECT_NAME>
│   ├── checkpoint   # FLAME parameters for each frame, generated by the tracker
│   └── images       # generated by the script of INSTA
```
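Before pointing the config at a dataset path, it can help to verify the folder layout. The helper below is hypothetical (not part of the repo) and only checks for the `checkpoint` and `images` subfolders described above.

```python
# Hypothetical helper (not in the repo): check that a subject folder under
# data_dir matches the expected INSTA output layout.
from pathlib import Path


def has_expected_layout(data_dir: str, subject: str) -> bool:
    """True if <data_dir>/<subject> contains 'checkpoint' and 'images' dirs."""
    root = Path(data_dir) / subject
    return (root / "checkpoint").is_dir() and (root / "images").is_dir()
```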
For online reconstruction, we use the FaceWareHouse template model and a real-time face tracker, DDE, to compute the expression coefficients in real time. We will release the code of this version in the future.
Running
Offline Training
```shell
python train_offline.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --preload
```
Command Line Arguments for train_offline.py
--subject
Subject name for training (bala by default).
--work_name
A nickname for the experiment; training results will be saved under output/WORK_NAME.
--config
Config file path (config/offline.yaml by default).
--split
Which split of the image sequence to use: train, test, or all (train by default).
--preload
Whether to preload image data into CPU memory, which accelerates training.
--log
Whether to output log information during training.
We provide 12 pretrained avatar models here.
Online Training
```shell
python train_online.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --video_fps 25
```
Command Line Arguments for train_online.py
--subject
Subject name for training (bala by default).
--work_name
A nickname for the experiment; training results will be saved under output/WORK_NAME.
--config
Config file path (config/online.yaml by default).
--video_fps
FPS of the input video stream (25 by default).
--log
Whether to output log information during training.
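The `--video_fps` value sets the pacing of the input stream. As a rough sketch of the arithmetic (an assumption about how the flag is used; see train_online.py for the actual logic), each frame's arrival time follows directly from the FPS:

```python
# Sketch (assumption, not the repo's code): map --video_fps to per-frame
# arrival deadlines for an online video stream.
def frame_deadline(frame_idx: int, video_fps: float = 25.0) -> float:
    """Seconds from stream start at which frame `frame_idx` arrives."""
    return frame_idx / video_fps
```

At 25 FPS, frame 25 arrives one second into the stream, so the online trainer has at most 40 ms per frame to keep up.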
Evaluation
```shell
python calculate_metrics.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH
```
Command Line Arguments for calculate_metrics.py
--subject
Subject name for training (bala by default).
--output_dir
Path of the experiment output folder (output by default).
--work_name
Name of the experiment to be evaluated.
--split
Frame index at which to split the training and test sets (-350 by default).
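The negative default suggests a Python-style index: with -350, the last 350 frames form the test set. A minimal sketch under that assumption (check calculate_metrics.py for the actual behavior):

```python
# Sketch (assumption): a negative split index reserves the last |split|
# frames for testing, Python-slice style.
def split_frames(frames, split=-350):
    """Return (train_frames, test_frames)."""
    return frames[:split], frames[split:]
```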
Rendering
```shell
python render.py --subject SUBJECT_NAME --work_name WORK_NAME
```
Command Line Arguments for render.py
--subject
Subject name for training (bala by default).
--output_dir
Path of the experiment output folder (output by default).
--work_name
Name of the experiment to be rendered.
--white_bg
Whether to use a white background (black by default).
--alpha
Whether to render the alpha channel.
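The `--white_bg` and `--alpha` flags imply standard alpha compositing of the rendered avatar over a solid background. A minimal sketch of that blend (the formula is standard; the function itself is ours, not the repo's):

```python
# Standard alpha-over-background compositing, as implied by --white_bg:
# out = fg * alpha + bg * (1 - alpha), with bg = 1.0 (white) or 0.0 (black).
def composite(rgb, alpha, white_bg=False):
    """rgb: per-channel floats in [0, 1]; alpha: coverage in [0, 1]."""
    bg = 1.0 if white_bg else 0.0
    return [c * alpha + bg * (1.0 - alpha) for c in rgb]
```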
Real-time Demo
TBD
Training on Multi-View Dataset (NeRSemble)
[update on 2025.08.14] We provide training and rendering scripts for the NeRSemble dataset, using the preprocessed data provided by GaussianAvatars. Please set the dataset root path in config/nersemble.yaml and put the flame2023.pkl file under the data/FLAME2023 folder. The FLAME 2023 model can be downloaded from the FLAME website.
```shell
python train_offline_nersemble.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH
```
Command Line Arguments for train_offline_nersemble.py
--subject
Subject name for training (074 by default).
--work_name
A nickname for the experiment; training results will be saved under output/WORK_NAME.
--config
Config file path (config/nersemble.yaml by default).
--log
Whether to output log information during training.
```shell
python render_nersemble.py --subject SUBJECT_NAME --work_name WORK_NAME
```
Command Line Arguments for render_nersemble.py
--subject
Subject name for training (074 by default).
--output_dir
Path of the experiment output folder (output by default).
--work_name
Name of the experiment to be rendered.
--white_bg
Whether to use a white background (black by default).
--alpha
Whether to render the alpha channel.
Citation
```bibtex
@InProceedings{Li_2025_CVPR,
    author    = {Li, Linzhou and Li, Yumeng and Weng, Yanlin and Zheng, Youyi and Zhou, Kun},
    title     = {RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {10747-10757}
}
```