Scripts for BEAT Dataset
March 30, 2022
Contents
- train and inference scripts
  - CaMN (ours)
  - End2End (ours)
  - Motion AutoEncoder (for evaluation)
- data preprocessing
  - load a specific number of joints at a predefined FPS from BVH files
  - build the word2vec model
  - cache generation (`.lmdb`)
- dataset examples in `beat.zip`
  - original files used to generate the cache for train/val/test
  - cache for `language_model` and `pretrained_vae`
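As a rough illustration of the preprocessing step that loads a specific number of joints at a predefined FPS, here is a minimal self-contained sketch. The function name, the frame-list representation, and the rates are assumptions for illustration only; the repo's actual logic lives in its dataloader scripts.

```python
# Sketch of joint selection plus FPS downsampling, assuming motion
# data is a list of frames, each frame a list of per-joint values.
# All names and rates here are illustrative, not the repo's API.

def downsample_and_select(frames, src_fps, dst_fps, joint_indices):
    """Keep every (src_fps // dst_fps)-th frame, then keep only the
    requested joints in each kept frame."""
    if src_fps % dst_fps != 0:
        raise ValueError("sketch assumes src_fps is a multiple of dst_fps")
    stride = src_fps // dst_fps
    kept = frames[::stride]
    return [[frame[j] for j in joint_indices] for frame in kept]

# Toy example: 8 frames of 120 FPS motion with 4 "joints",
# downsampled to 30 FPS while keeping joints 0 and 2.
motion = [[f, f + 0.1, f + 0.2, f + 0.3] for f in range(8)]
out = downsample_and_select(motion, src_fps=120, dst_fps=30,
                            joint_indices=[0, 2])
print(len(out), out[0])
```

The real scripts additionally parse the BVH hierarchy and channel layout; this sketch only shows the frame-rate and joint-subset logic.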
Train
- Python == 3.7
- Build folders like:
  - `codes`
  - `datasets`
  - `outputs`
- Download the scripts to `codes/beat/`.
- Extract `beat.zip` to `datasets/beat`.
- Run `pip install -r requirements.txt` in the path `./codes/beat/`.
- Run `python train.py -c ./configs/camn.yaml` for training and inference.
- Load `./outputs/exp_name/119/res_000_008.bvh` into Blender to visualize the test results.
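The folder layout above can also be created programmatically; a trivial sketch using only the paths listed (downloading the scripts and extracting `beat.zip` remain manual steps):

```python
from pathlib import Path

# Create the workspace layout described above: scripts go under
# codes/beat/, the extracted beat.zip under datasets/beat, and
# training results appear under outputs/.
for d in ("codes/beat", "datasets/beat", "outputs"):
    Path(d).mkdir(parents=True, exist_ok=True)

print([d for d in ("codes/beat", "datasets/beat", "outputs")
       if Path(d).is_dir()])
```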
Modification
- To train the End2End model, add `g_name: PoseGenerator` in `camn.yaml`.
- To generate the data cache from scratch:
  - run `cd ./dataloaders && python bvh2anyjoints.py` for motion data
  - run `cd ./dataloaders && python build_vocab.py` for the language model
- To remove modalities, e.g., facial expressions:
  - set `facial_rep: None` and `facial_f: 0` in `camn.yaml`
  - run `python train.py -c ./configs/camn.yaml`
- For the semantic-weighted loss, set `sem_weighted = False` in `camn_trainer.py`.
- Refer to `./utils/config.py` for other parameters.
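Putting the `camn.yaml` changes above together, the touched keys would look roughly like this fragment. Only the keys and values named above come from this document; the file's surrounding contents and comments are assumptions.

```yaml
# camn.yaml (fragment) -- only the keys discussed above are shown.
g_name: PoseGenerator   # switch the generator to the End2End model
facial_rep: None        # drop the facial-expression modality ...
facial_f: 0             # ... and zero out its feature size
```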