OpenGraphAU

August 21, 2025 · View on GitHub

We released AU-Canvas, a tool that offers an intuitive UI for facial action unit (FAU) detection and enhanced visualization.

Example running on an RTX 3090 GPU (avg. FPS > 50):

This repo provides the OpenGraphAU tool.

Demo:

Models were trained on a hybrid dataset of 2,000k (2 million) images.

This hybrid dataset includes:

The tool predicts 41 categories of action units:

| AU1 | AU2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU11 | AU12 | AU13 | AU14 | AU15 | AU16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inner brow raiser | Outer brow raiser | Brow lowerer | Upper lid raiser | Cheek raiser | Lid tightener | Nose wrinkler | Upper lip raiser | Nasolabial deepener | Lip corner puller | Sharp lip puller | Dimpler | Lip corner depressor | Lower lip depressor |

| AU17 | AU18 | AU19 | AU20 | AU22 | AU23 | AU24 | AU25 | AU26 | AU27 | AU32 | AU38 | AU39 | - |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Chin raiser | Lip pucker | Tongue show | Lip stretcher | Lip funneler | Lip tightener | Lip pressor | Lips part | Jaw drop | Mouth stretch | Lip bite | Nostril dilator | Nostril compressor | - |

| AUL1 | AUR1 | AUL2 | AUR2 | AUL4 | AUR4 | AUL6 | AUR6 | AUL10 | AUR10 | AUL12 | AUR12 | AUL14 | AUR14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Left inner brow raiser | Right inner brow raiser | Left outer brow raiser | Right outer brow raiser | Left brow lowerer | Right brow lowerer | Left cheek raiser | Right cheek raiser | Left upper lip raiser | Right upper lip raiser | Left nasolabial deepener | Right nasolabial deepener | Left dimpler | Right dimpler |
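For programmatic use, the 41 AU categories above can be collected into a simple lookup table. The sketch below merely transcribes the tables; the `AU_NAMES` name is illustrative and not part of the repo's API:

```python
# Illustrative lookup from AU code to action unit name,
# transcribed from the 41-category tables above.
AU_NAMES = {
    "AU1": "Inner brow raiser", "AU2": "Outer brow raiser", "AU4": "Brow lowerer",
    "AU5": "Upper lid raiser", "AU6": "Cheek raiser", "AU7": "Lid tightener",
    "AU9": "Nose wrinkler", "AU10": "Upper lip raiser", "AU11": "Nasolabial deepener",
    "AU12": "Lip corner puller", "AU13": "Sharp lip puller", "AU14": "Dimpler",
    "AU15": "Lip corner depressor", "AU16": "Lower lip depressor",
    "AU17": "Chin raiser", "AU18": "Lip pucker", "AU19": "Tongue show",
    "AU20": "Lip stretcher", "AU22": "Lip funneler", "AU23": "Lip tightener",
    "AU24": "Lip pressor", "AU25": "Lips part", "AU26": "Jaw drop",
    "AU27": "Mouth stretch", "AU32": "Lip bite", "AU38": "Nostril dilator",
    "AU39": "Nostril compressor",
    "AUL1": "Left inner brow raiser", "AUR1": "Right inner brow raiser",
    "AUL2": "Left outer brow raiser", "AUR2": "Right outer brow raiser",
    "AUL4": "Left brow lowerer", "AUR4": "Right brow lowerer",
    "AUL6": "Left cheek raiser", "AUR6": "Right cheek raiser",
    "AUL10": "Left upper lip raiser", "AUR10": "Right upper lip raiser",
    "AUL12": "Left nasolabial deepener", "AUR12": "Right nasolabial deepener",
    "AUL14": "Left dimpler", "AUR14": "Right dimpler",
}
assert len(AU_NAMES) == 41  # matches the 41 categories listed above
```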

We provide tools for preparing data in tool/. After downloading the raw data files, you can use these tools to process them in line with our protocols. We divide the dataset into three independent parts (i.e., train, val, test).
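As a rough illustration of such a three-way division, here is a minimal sketch; the 80/10/10 ratio, the seed, and the `split_dataset` helper are assumptions for illustration, not the repo's actual protocol (use the files in tool/ to reproduce our splits):

```python
import random

def split_dataset(samples, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle a sample list and divide it into train/val/test parts.

    Illustrative only: the real splits come from the protocol files in tool/.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    samples = list(samples)
    rng.shuffle(samples)
    n = len(samples)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = samples[:n_test]
    val = samples[n_test:n_test + n_val]
    train = samples[n_test + n_val:]
    return train, val, test
```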

Pretrained models

Hybrid Dataset

Stage 1:

| arch_type | GoogleDrive link | Average F1-score | Average Acc. |
| --- | --- | --- | --- |
| Ours (MobileNetV3) | - | - | - |
| Ours (ResNet-18) | link | 22.33 | 92.97 |
| Ours (ResNet-50) | link | 22.52 | 92.63 |
| Ours (Swin-Tiny) | link | 22.66 | 92.97 |
| Ours (Swin-Small) | link | 24.49 | 92.84 |
| Ours (Swin-Base) | link | 23.53 | 92.91 |

Stage 2:

| arch_type | GoogleDrive link | Average F1-score | Average Acc. |
| --- | --- | --- | --- |
| Ours (MobileNetV3) | - | - | - |
| Ours (ResNet-18) | link | 22.51 | 93.23 |
| Ours (ResNet-50) | link | 23.24 | 93.31 |
| Ours (Swin-Tiny) | link | 22.74 | 93.37 |
| Ours (Swin-Small) | - | - | - |
| Ours (Swin-Base) | - | - | - |

Demo

  • To detect facial action units in a facial image using our stage 1 model, run:
python demo.py --arc resnet50 --stage 1 --exp-name demo --resume checkpoints/OpenGprahAU-ResNet50_first_stage.pth --input demo_imgs/1014.jpg  --draw_text
  • To detect facial action units in a facial image using our stage 2 model, run:
python demo.py --arc resnet50 --stage 2 --exp-name demo --resume checkpoints/OpenGprahAU-ResNet50_second_stage.pth --input demo_imgs/1014.jpg  --draw_text
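The demo produces one score per AU category. A common post-processing step is to threshold those scores into a list of active AUs; the sketch below assumes a flat probability vector and a 0.5 cutoff, neither of which is fixed by the repo:

```python
def active_aus(probs, labels, threshold=0.5):
    """Return the AU labels whose predicted probability reaches the threshold.

    probs: per-AU scores in [0, 1] (assumed format), labels: matching AU codes.
    """
    return [lab for lab, p in zip(labels, probs) if p >= threshold]
```

For example, `active_aus([0.9, 0.2, 0.7], ["AU1", "AU2", "AU4"])` keeps only the AUs scoring at least 0.5.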

Training and Testing

  • To train the first stage of our approach (ResNet-50) on the hybrid dataset, run:
python train_stage1.py --arc resnet50 --exp-name OpenGprahAU-ResNet50_first_stage -b 512 -lr 0.00002  
  • To test the first stage of our approach (SwinT) on the hybrid dataset, run:
python test_stage1.py --arc swin_transformer_tiny --exp-name test_OpenGprahAU-SwinT_first_stage  --resume ./results/OpenGprahAU-SwinT_first_stage/bs_64_seed_0_lr_2e-05/best_model.pth
  • To train the second stage of our approach (ResNet-50) on the hybrid dataset, run:
python train_stage2.py --arc resnet50 --exp-name OpenGprahAU-ResNet50_second_stage -b 512 -lr 0.00001  --resume checkpoints/OpenGprahAU-ResNet50_first_stage.pth
  • To test the second stage of our approach (SwinT) on the hybrid dataset, run:
python test_stage2.py --arc swin_transformer_tiny --exp-name test_OpenGprahAU-SwinT_second_stage  --resume ./results/OpenGprahAU-SwinT_second_stage/bs_64_seed_0_lr_1e-05/best_model.pth
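The "Average F1-score" and "Average Acc." columns above are per-AU metrics averaged over all categories. A minimal sketch of that macro-averaging, treating each AU as an independent binary label (the helper names are illustrative, not the repo's evaluation code):

```python
def binary_f1(y_true, y_pred):
    """F1 score for a single AU, given 0/1 ground-truth and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if 2 * tp + fp + fn == 0:  # no positives anywhere: define F1 as 0
        return 0.0
    return 2 * tp / (2 * tp + fp + fn)

def macro_f1(true_by_au, pred_by_au):
    """Average the per-AU F1 scores, as in the 'Average F1-score' column."""
    scores = [binary_f1(t, p) for t, p in zip(true_by_au, pred_by_au)]
    return sum(scores) / len(scores)
```

With perfect predictions `binary_f1` returns 1.0; the macro average weights every AU equally regardless of how rare it is, which is why it can be low even when per-frame accuracy is high.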

๐Ÿ–Š๏ธ Citation

If you find this work useful in your research, please cite:

@inproceedings{luo2022learning,
  title     = {Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition},
  author    = {Luo, Cheng and Song, Siyang and Xie, Weicheng and Shen, Linlin and Gunes, Hatice},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  pages     = {1239--1246},
  year      = {2022}
}