News: our new Task-agnostic Unified Face Alignment (TUFA) has been released Here.
June 30, 2025 · View on GitHub
# Dynamic Sparse Local Patch Transformer
PyTorch training code and pretrained models for DSLPT (Dynamic Sparse Local Patch Transformer).
## Installation
Note: this released version was tested on Python 3.8 and PyTorch 1.10.2.
Install system requirements:

```
sudo apt-get install python3-dev python3-pip python3-tk libglib2.0-0
```

Install Python dependencies:

```
pip3 install -r requirements.txt
```
## Run training code on WFLW dataset
1. Download and process the WFLW dataset
   - Download the WFLW dataset and annotations from Here.
   - Unzip the WFLW dataset and annotations and move the files into the `./Data` directory. Your directory should look like this:

     ```
     DSLPT
     └───Data
         └───WFLW
             └───WFLW_annotations
             │   └───list_98pt_rect_attr_train_test
             │   └───list_98pt_test
             └───WFLW_images
                 └───0--Parade
                 └───...
     ```
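The layout above can be prepared from the command line. A minimal sketch, assuming the two WFLW archives have already been downloaded into the repository root; the archive file names are assumptions, not fixed by this README:

```shell
# Create the data root expected by the training code.
mkdir -p Data/WFLW

# Extract the downloaded archives into it (uncomment once the files exist;
# the archive names below are assumptions):
# unzip -q WFLW_images.zip -d Data/WFLW            # -> Data/WFLW/WFLW_images
# unzip -q WFLW_annotations.zip -d Data/WFLW       # -> Data/WFLW/WFLW_annotations

# Check the result against the tree shown above:
ls Data/WFLW
```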
2. Download the pretrained weight of HRNetW18C
   - Download the pretrained weight of HRNetW18C from Here.
   - Move the file into the `./Config` directory. Your directory should look like this:

     ```
     DSLPT
     └───Config
         └───hrnetv2_w18_imagenet_pretrained.pth
     ```
3. Run the training script:

   ```
   python ./train.py
   ```
## Run evaluation on WFLW dataset
1. Download and process the WFLW dataset
   - Download the WFLW dataset and annotations from Here.
   - Unzip the WFLW dataset and annotations and move the files into the `./Dataset` directory. Your directory should look like this:

     ```
     DSLPT
     └───Dataset
         └───WFLW
             └───WFLW_annotations
             │   └───list_98pt_rect_attr_train_test
             │   └───list_98pt_test
             └───WFLW_images
                 └───0--Parade
                 └───...
     ```
2. Download a pretrained model from Google Drive.
   - WFLW

     |   | Model Name      | NME  | FR0.1 | AUC0.1 | download link |
     |---|-----------------|------|-------|--------|---------------|
     | 1 | DSLPT-6-layers  | 4.01 | 2.52  | 0.607  | download      |
     | 2 | DSLPT-12-layers | 3.98 | 2.44  | 0.609  | download      |

   - Put the model in the `./weights` directory.
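Placing a checkpoint can be done as follows; a minimal sketch, assuming the checkpoint file was downloaded to the repository root (the `mv` is commented out because the download link target is not reproduced here):

```shell
# Create the weights directory expected by validate.py.
mkdir -p weights

# Move a downloaded checkpoint into place (uncomment after downloading):
# mv DSLPT_WFLW_6_layers.pth weights/

ls weights
```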
3. Test:

   ```
   python validate.py --checkpoint=<model_name>
   ```

   For example:

   ```
   python validate.py --checkpoint=DSLPT_WFLW_6_layers.pth
   ```

   Note: if you want to use the model with 12 layers, you need to change `_C.TRANSFORMER.NUM_DECODER` from 6 to 12 in `./Config/default.py`.
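The note above amounts to a one-line edit in `./Config/default.py`. A sketch of the relevant line (the surrounding `_C` config object belongs to the repository; only the value changes):

```python
# In ./Config/default.py: the decoder depth must match the checkpoint.
_C.TRANSFORMER.NUM_DECODER = 12  # 6 for DSLPT-6-layers, 12 for DSLPT-12-layers
```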
## Citation

If you find this work or code helpful in your research, please cite:
```
@article{DSLPT,
  title={Robust Face Alignment via Inherent Relation Learning and Uncertainty Estimation},
  author={Jiahao Xia and Min Xu and Haimin Zhang and Jianguo Zhang and Wenjian Huang and Hu Cao and Shiping Wen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023}
}

@inproceedings{SLPT,
  title={Sparse Local Patch Transformer for Robust Face Alignment and Landmarks Inherent Relation Learning},
  author={Jiahao Xia and Weiwei Qu and Wenjian Huang and Jianguo Zhang and Xi Wang and Min Xu},
  booktitle={CVPR},
  year={2022}
}
```
## License
DSLPT is released under the GPL-2.0 license. Please see the LICENSE file for more information.