
📢 News

  • 20/08/2025: We released AU-Canvas, a visualization tool that offers an intuitive UI for facial action unit (FAU) detection and enhanced visualization.

Example running on an RTX 3090 GPU (Avg. FPS > 50):

  • 12/11/2022: We released an OpenGraphAU version of our code and models, trained on a large-scale hybrid dataset of over 2,000k images covering 41 action unit categories.

Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition

This is an official release of the paper

"Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition", IJCAI-ECAI 2022

[Paper] [Project]

The main novelty of the proposed approach, compared with pre-defined AU graphs and deep-learned facial display-specific graphs, is illustrated below.

https://user-images.githubusercontent.com/35754447/169745317-40f76ec9-4bfd-4206-8f1e-4ab4a9bf464d.mp4

🔧 Requirements

  • Python 3

  • PyTorch

  • Check the required Python packages in requirements.txt.

```
pip install -r requirements.txt
```
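
After installing, a quick sanity check (a minimal sketch, nothing repo-specific) confirms that PyTorch can see a CUDA GPU such as the RTX 3090 used for the timings above:

```python
# Minimal environment check: verify the PyTorch install and CUDA visibility.
import torch

print(torch.__version__)
print(torch.cuda.is_available())          # True if a CUDA GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3090"
```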

Data and Data Preparing Tools

The datasets we used:

  • BP4D

  • DISFA

We provide tools for preparing data in tool/. After downloading the raw data files, you can use these tools to process them in line with our protocols. More details are described in tool/README.md.

Training with ImageNet pre-trained models

Make sure you download the ImageNet pre-trained models to checkpoints/ (or modify the checkpoint path setting in models/resnet.py or models/swin_transformer.py).

The download links for the pre-trained models are in checkpoints/checkpoints.txt.

Thanks to the official PyTorch and Swin Transformer.
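
For orientation, loading an ImageNet checkpoint into a backbone boils down to a torch.load plus load_state_dict; below is a minimal sketch, assuming a placeholder file name (the repo's actual loading code lives in models/resnet.py and models/swin_transformer.py):

```python
# Illustrative sketch only: load ImageNet weights into a ResNet-50 backbone.
# "checkpoints/resnet50_imagenet.pth" is a placeholder path, not the repo's
# exact file name (see checkpoints/checkpoints.txt for the real links).
import torch
from torchvision.models import resnet50

backbone = resnet50()  # randomly initialized backbone
state_dict = torch.load("checkpoints/resnet50_imagenet.pth", map_location="cpu")
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print(len(missing), len(unexpected))  # strict=False tolerates classifier-head mismatches
```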

Training and Testing

  • To train the first stage of our approach (ResNet-50) on the BP4D dataset, run:

```
python train_stage1.py --dataset BP4D --arc resnet50 --exp-name resnet50_first_stage -b 64 -lr 0.0001 --fold 1
```

  • To train the second stage of our approach (ResNet-50) on the BP4D dataset, run:

```
python train_stage2.py --dataset BP4D --arc resnet50 --exp-name resnet50_second_stage --resume results/resnet50_first_stage/bs_64_seed_0_lr_0.0001/xxxx_fold1.pth --fold 1 --lam 0.05
```

  • To train the first stage of our approach (Swin-B) on the DISFA dataset, run:

```
python train_stage1.py --dataset DISFA --arc swin_transformer_base --exp-name swin_transformer_base_first_stage -b 64 -lr 0.0001 --fold 2
```

  • To train the second stage of our approach (Swin-B) on the DISFA dataset, run:

```
python train_stage2.py --dataset DISFA --arc swin_transformer_base --exp-name swin_transformer_base_second_stage --resume results/swin_transformer_base_first_stage/bs_64_seed_0_lr_0.0001/xxxx_fold2.pth -b 64 -lr 0.000001 --fold 2 --lam 0.01
```

  • To test the performance on the DISFA dataset, run:

```
python test.py --dataset DISFA --arc swin_transformer_base --exp-name test_fold2 --resume results/swin_transformer_base_second_stage/bs_64_seed_0_lr_0.000001/xxxx_fold2.pth --fold 2
```
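
Since each released model covers three folds (see below), it can be convenient to script the per-fold runs. A hypothetical wrapper around the stage-1 BP4D command above (the per-fold exp-name is an assumption, not the repo's convention):

```python
# Hypothetical convenience script: launch stage-1 training for each fold,
# reusing the flags from the BP4D / ResNet-50 command above.
import subprocess

for fold in (1, 2, 3):
    subprocess.run(
        [
            "python", "train_stage1.py",
            "--dataset", "BP4D",
            "--arc", "resnet50",
            "--exp-name", f"resnet50_first_stage_fold{fold}",  # assumed naming
            "-b", "64",
            "-lr", "0.0001",
            "--fold", str(fold),
        ],
        check=True,  # abort if a fold fails
    )
```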

Pretrained models

BP4D

| arch_type | GoogleDrive link | Average F1-score |
| --- | --- | --- |
| Ours (ResNet-18) | - | - |
| Ours (ResNet-50) | link | 64.7 |
| Ours (ResNet-101) | link | 64.8 |
| Ours (Swin-Tiny) | link | 65.6 |
| Ours (Swin-Small) | link | 65.1 |
| Ours (Swin-Base) | link | 65.5 |

DISFA

| arch_type | GoogleDrive link | Average F1-score |
| --- | --- | --- |
| Ours (ResNet-18) | - | - |
| Ours (ResNet-50) | link | 63.1 |
| Ours (ResNet-101) | - | - |
| Ours (Swin-Tiny) | - | - |
| Ours (Swin-Small) | - | - |
| Ours (Swin-Base) | link | 62.4 |

Download these files (e.g. ME-GraphAU_swin_base_BP4D.zip) and unzip them; each archive contains the checkpoints for three folds.
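
A minimal sketch for unpacking one of the archives (the zip name comes from the example above; the target directory is an assumption):

```python
# Unpack a downloaded checkpoint archive and list its contents.
import zipfile

archive = "ME-GraphAU_swin_base_BP4D.zip"
with zipfile.ZipFile(archive) as zf:
    zf.extractall("checkpoints/")
    print(zf.namelist())  # expect one checkpoint per fold (three folds)
```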

๐Ÿ“ Main Results

BP4D

| Method | AU1 | AU2 | AU4 | AU6 | AU7 | AU10 | AU12 | AU14 | AU15 | AU17 | AU23 | AU24 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EAC-Net | 39.0 | 35.2 | 48.6 | 76.1 | 72.9 | 81.9 | 86.2 | 58.8 | 37.5 | 59.1 | 35.9 | 35.8 | 55.9 |
| JAA-Net | 47.2 | 44.0 | 54.9 | 77.5 | 74.6 | 84.0 | 86.9 | 61.9 | 43.6 | 60.3 | 42.7 | 41.9 | 60.0 |
| LP-Net | 43.4 | 38.0 | 54.2 | 77.1 | 76.7 | 83.8 | 87.2 | 63.3 | 45.3 | 60.5 | 48.1 | 54.2 | 61.0 |
| ARL | 45.8 | 39.8 | 55.1 | 75.7 | 77.2 | 82.3 | 86.6 | 58.8 | 47.6 | 62.1 | 47.4 | 55.4 | 61.1 |
| SEV-Net | 58.2 | 50.4 | 58.3 | 81.9 | 73.9 | 87.8 | 87.5 | 61.6 | 52.6 | 62.2 | 44.6 | 47.6 | 63.9 |
| FAUDT | 51.7 | 49.3 | 61.0 | 77.8 | 79.5 | 82.9 | 86.3 | 67.6 | 51.9 | 63.0 | 43.7 | 56.3 | 64.2 |
| SRERL | 46.9 | 45.3 | 55.6 | 77.1 | 78.4 | 83.5 | 87.6 | 63.9 | 52.2 | 63.9 | 47.1 | 53.3 | 62.9 |
| UGN-B | 54.2 | 46.4 | 56.8 | 76.2 | 76.7 | 82.4 | 86.1 | 64.7 | 51.2 | 63.1 | 48.5 | 53.6 | 63.3 |
| HMP-PS | 53.1 | 46.1 | 56.0 | 76.5 | 76.9 | 82.1 | 86.4 | 64.8 | 51.5 | 63.0 | 49.9 | 54.5 | 63.4 |
| Ours (ResNet-50) | 53.7 | 46.9 | 59.0 | 78.5 | 80.0 | 84.4 | 87.8 | 67.3 | 52.5 | 63.2 | 50.6 | 52.4 | 64.7 |
| Ours (Swin-B) | 52.7 | 44.3 | 60.9 | 79.9 | 80.1 | 85.3 | 89.2 | 69.4 | 55.4 | 64.4 | 49.8 | 55.1 | 65.5 |

DISFA

| Method | AU1 | AU2 | AU4 | AU6 | AU9 | AU12 | AU25 | AU26 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EAC-Net | 41.5 | 26.4 | 66.4 | 50.7 | 80.5 | 89.3 | 88.9 | 15.6 | 48.5 |
| JAA-Net | 43.7 | 46.2 | 56.0 | 41.4 | 44.7 | 69.6 | 88.3 | 58.4 | 56.0 |
| LP-Net | 29.9 | 24.7 | 72.7 | 46.8 | 49.6 | 72.9 | 93.8 | 65.0 | 56.9 |
| ARL | 43.9 | 42.1 | 63.6 | 41.8 | 40.0 | 76.2 | 95.2 | 66.8 | 58.7 |
| SEV-Net | 55.3 | 53.1 | 61.5 | 53.6 | 38.2 | 71.6 | 95.7 | 41.5 | 58.8 |
| FAUDT | 46.1 | 48.6 | 72.8 | 56.7 | 50.0 | 72.1 | 90.8 | 55.4 | 61.5 |
| SRERL | 45.7 | 47.8 | 59.6 | 47.1 | 45.6 | 73.5 | 84.3 | 43.6 | 55.9 |
| UGN-B | 43.3 | 48.1 | 63.4 | 49.5 | 48.2 | 72.9 | 90.8 | 59.0 | 60.0 |
| HMP-PS | 38.0 | 45.9 | 65.2 | 50.9 | 50.8 | 76.0 | 93.3 | 67.6 | 61.0 |
| Ours (ResNet-50) | 54.6 | 47.1 | 72.9 | 54.0 | 55.7 | 76.7 | 91.1 | 53.0 | 63.1 |
| Ours (Swin-B) | 52.5 | 45.7 | 76.1 | 51.8 | 46.5 | 76.1 | 92.9 | 57.6 | 62.4 |
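
In both tables, Avg. is the unweighted (macro) mean of the per-AU F1 scores; averaging the Ours (ResNet-50) DISFA row reproduces the reported 63.1:

```python
# Reproduce the Avg. column: macro average of the per-AU F1 scores.
au_f1 = [54.6, 47.1, 72.9, 54.0, 55.7, 76.7, 91.1, 53.0]  # Ours (ResNet-50), DISFA
print(round(sum(au_f1) / len(au_f1), 1))  # 63.1
```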

🎓 Citation

If the code or method helps you in your research, please cite the following papers:


```
@inproceedings{luo2022learning,
  title     = {Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition},
  author    = {Luo, Cheng and Song, Siyang and Xie, Weicheng and Shen, Linlin and Gunes, Hatice},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  pages     = {1239--1246},
  year      = {2022}
}
```


```
@article{song2022gratis,
  title     = {GRATIS: Deep Learning Graph Representation with Task-specific Topology and Multi-dimensional Edge Features},
  author    = {Song, Siyang and Song, Yuxin and Luo, Cheng and Song, Zhiyuan and Kuzucu, Selim and Jia, Xi and Guo, Zhijiang and Xie, Weicheng and Shen, Linlin and Gunes, Hatice},
  journal   = {arXiv preprint arXiv:2211.12482},
  year      = {2022}
}
```