
# Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling

ByteDance
:star_struck: Accepted to ICCV 2023

AvatarJLM uses tracking signals of the head and hands to estimate accurate, smooth, and plausible full-body motions.

:open_book: For more visual results, please check out our project page.


[Project Page] | [arXiv]

## :mega: Updates

- [09/2023] Testing samples are available.
- [09/2023] Training and testing code is released.
- [07/2023] AvatarJLM is accepted to ICCV 2023 :partying_face:!

## :file_folder: Data Preparation

### AMASS

  1. Please download the datasets from AMASS.
  2. Download the required body models and place them in the ./support_data/body_models directory of this repository. For the SMPL+H body model, download the AMASS version with DMPL blendshapes from http://mano.is.tue.mpg.de/. Dynamic shape blendshapes (DMPLs) can be obtained from http://smpl.is.tue.mpg.de.
  3. Run ./data/prepare_data.py to preprocess the input data for faster training. The train/test split for Protocol 1 in our paper is stored in ./data/data_split (from AvatarPoser).
```shell
python ./data/prepare_data.py --protocol [1, 2, 3] --root [path to AMASS]
```

### Real-Captured Data

  1. Please download our real-captured testing data from Google Drive. The data is preprocessed to the same format as our preprocessed AMASS data.
  2. Unzip the data and place it in the ./data directory of this repository.

## :desktop_computer: Requirements

## :bicyclist: Training

```shell
python train.py --protocol [1, 2, 3] --task [name of the experiment]
```

## :running_woman: Evaluation

```shell
python test.py --protocol [1, 2, 3, real] --task [name of the experiment] --checkpoint [path to trained checkpoint] [--vis]
```

## :lollipop: Trained Models

| Protocol | MPJRE | MPJPE | MPJVE | Trained Model |
| --- | --- | --- | --- | --- |
| 1 | 3.01 | 3.35 | 21.01 | Google Drive |
| 2-CMU-Test | 5.36 | 7.28 | 26.46 | Google Drive |
| 2-BML-Test | 4.65 | 6.22 | 34.45 | Google Drive |
| 2-MPI-Test | 5.85 | 6.47 | 24.13 | Google Drive |
| 3 | 4.25 | 4.92 | 27.04 | Google Drive |
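
MPJRE, MPJPE, and MPJVE are the mean per-joint rotation, position, and velocity errors. As a reference for how the two position-based metrics are typically computed, here is a minimal sketch (hypothetical helpers, not code from this repository; the frame rate and units are assumptions):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error.

    pred, gt: arrays of shape (frames, joints, 3) of joint positions,
    both in the same unit.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

def mpjve(pred, gt, fps=60.0):
    """Mean per-joint velocity error, using finite differences over time.

    fps is an assumption here; use the frame rate of your sequences.
    """
    pred_vel = np.diff(pred, axis=0) * fps
    gt_vel = np.diff(gt, axis=0) * fps
    return np.linalg.norm(pred_vel - gt_vel, axis=-1).mean()
```

A constant offset between prediction and ground truth raises MPJPE but leaves MPJVE at zero, which is why both are reported: one measures accuracy, the other smoothness.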

## :love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

```bibtex
@inproceedings{zheng2023realistic,
  title={Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling},
  author={Zheng, Xiaozheng and Su, Zhuo and Wen, Chao and Xue, Zhou and Jin, Xiaojie},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
```

## :newspaper_roll: License

Distributed under the MIT License. See LICENSE for more information.

## :raised_hands: Acknowledgements

This project is built on source code shared by AvatarPoser. We thank the authors for their excellent work!