Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation
This repository contains the code and pretrained models of
Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation [PDF]
Xuecheng Nie, Jiashi Feng, and Shuicheng Yan
European Conference on Computer Vision (ECCV), 2018
Prerequisites
- Python 3.5
- PyTorch 0.2.0
- OpenCV 3.0 or higher
Installation
- Install PyTorch: please follow the official instructions to install PyTorch.
- Clone the repository
git clone --recursive https://github.com/NieXC/pytorch-mula.git
- Download the Look into Person (LIP) dataset and create symbolic links to the following directories
ln -s PATH_TO_LIP_TRAIN_IMAGES_DIR dataset/lip/train_images
ln -s PATH_TO_LIP_VAL_IMAGES_DIR dataset/lip/val_images
ln -s PATH_TO_LIP_TEST_IMAGES_DIR dataset/lip/testing_images
ln -s PATH_TO_LIP_TRAIN_SEGMENTATION_ANNO_DIR dataset/lip/train_segmentations
ln -s PATH_TO_LIP_VAL_SEGMENTATION_ANNO_DIR dataset/lip/val_segmentations
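Optionally, you can sanity-check the environment and the dataset links before training (a minimal sketch; it only checks the install and the paths created above):

# Confirm PyTorch is importable and CUDA is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Confirm the dataset symlinks resolve to real directories
ls -l dataset/lip/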
Usage
Training
Run the following command to train the model from scratch (default: 5-stage Hourglass-based network):
sh run_train.sh
or
CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015
A simple way to record the training log is to append the following to the command:
2>&1 | tee exps/logs/mula_lip.log
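For example, combined with the training command above:

CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015 2>&1 | tee exps/logs/mula_lip.log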
Some configurable parameters in the training phase (an example command combining several of them follows the list):
- --arch: network architecture (HG (Hourglass) or VGG)
- -b: mini-batch size
- --lr: initial learning rate (0.0015 for the HG-based model and 0.0001 for the VGG-based model)
- --epochs: total number of epochs for training
- --snapshot-fname-prefix: prefix of the file name for snapshots, e.g. if set to '--snapshot-fname-prefix exps/snapshots/mula_lip', then 'mula_lip.pth.tar' (latest model), 'mula_lip_pose_best.pth.tar' (model with the best validation PCK for human pose estimation) and 'mula_lip_parsing_best.pth.tar' (model with the best validation mIoU for human parsing) will be generated in the folder 'exps/snapshots'
- --resume: path to the model for recovering training
- -j: number of workers for loading data
- --print-freq: print frequency
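For illustration, a VGG-based model could be trained with a command along these lines (the batch size, epoch count, worker count, print frequency and snapshot prefix below are placeholder values, not recommended settings; only the 0.0001 learning rate for the VGG variant comes from the list above):

CUDA_VISIBLE_DEVICES=0,1 python main.py --arch VGG -b 24 --lr 0.0001 --epochs 150 --snapshot-fname-prefix exps/snapshots/mula_lip_vgg -j 4 --print-freq 20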
Testing
Run the following command to evaluate the model on LIP validation set:
sh run_test.sh
or
CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --calc-pck True --calc-miou True --resume exps/snapshots/mula_lip.pth.tar
Run the following command to evaluate the model on LIP testing set:
CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --resume exps/snapshots/mula_lip.pth.tar --eval-data dataset/lip/testing_images --eval-anno dataset/lip/jsons/LIP_SP_TEST_annotations.json
In particular, human pose estimation results will be saved as a .csv file following the official evaluation format of the LIP dataset for single-person human pose estimation. An example is provided in exps/preds/pose_results/pred_keypoints_lip.csv. Human parsing results will be saved as a set of .png images in the folder exps/preds/parsing_results, representing body part label maps for the testing images.
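To take a quick look at the outputs after evaluation (assuming the default prediction paths above; the exact file names depend on your testing images):

# Peek at the first rows of the pose prediction csv
head -n 3 exps/preds/pose_results/pred_keypoints_lip.csv
# Count the predicted parsing label maps
ls exps/preds/parsing_results/*.png | wc -l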
Some configurable parameters in the testing phase (an example command follows the list):
- --evaluate: True for testing and false for training
- --resume: path to the model for evaluation
- --calc-pck: calculate PCK or not for validation of human pose estimation
- --calc-miou: calculate mIoU or not for validation of human parsing
- --pose-pred-path: path to the csv file for saving the human pose estimation results
- --parsing-pred-dir: directory to the png images for saving the human parsing results
- --visualization: visualize evaluation or not
- --vis-dir: directory for saving the visualization results
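For example, to evaluate on the validation set while also saving visualizations and redirecting the prediction outputs (the visualization directory below is a placeholder; the prediction paths are the example locations mentioned above):

CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --calc-pck True --calc-miou True --resume exps/snapshots/mula_lip.pth.tar --pose-pred-path exps/preds/pose_results/pred_keypoints_lip.csv --parsing-pred-dir exps/preds/parsing_results --visualization True --vis-dir exps/preds/vis_results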
The models generated with this code can be downloaded here: GoogleDrive. Training logs can also be found in the same folder for reference.
Citation
If you use our code in your work or find it helpful, please cite the paper:
@inproceedings{nie2018mula,
title={Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation},
author={Nie, Xuecheng and Feng, Jiashi and Yan, Shuicheng},
booktitle={ECCV},
year={2018}
}