March 3, 2026

:fire: FLAME Universe :fire:

This repository presents a list of publicly available resources such as code, datasets, and scientific papers for the :fire: FLAME :fire: 3D head model. We aim to keep the list up to date. You are invited to add missing FLAME-based resources (publications, code repositories, datasets) either in the discussions or in a pull request.

:fire: FLAME :fire:

Never heard of FLAME?

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication.
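The blendshape part of this formulation is a plain linear model: identity and expression offsets are added to a template mesh before the pose-dependent correctives and linear blend skinning are applied. A minimal NumPy sketch of that linear part, using random stand-ins for the bases shipped in the actual FLAME model file (skinning and pose correctives omitted):

```python
import numpy as np

# Stand-ins for the FLAME model data (the released model has 5023
# vertices, 300 identity and 100 expression components); the real
# template and bases come from the downloaded model file.
n_verts, n_shape, n_exp = 5023, 300, 100
rng = np.random.default_rng(0)
template = rng.normal(size=(n_verts, 3))             # mean head mesh
shape_dirs = rng.normal(size=(n_verts, 3, n_shape))  # identity basis
exp_dirs = rng.normal(size=(n_verts, 3, n_exp))      # expression basis

def blendshape_vertices(betas, psi):
    """Template plus linear identity and expression offsets.

    The full FLAME model additionally applies pose-corrective
    blendshapes and linear blend skinning for the articulated
    neck, jaw, and eyeballs.
    """
    offsets = shape_dirs @ betas + exp_dirs @ psi
    return template + offsets

betas = np.zeros(n_shape)  # neutral identity
psi = np.zeros(n_exp)      # neutral expression
verts = blendshape_vertices(betas, psi)
assert np.allclose(verts, template)  # zero coefficients give the mean mesh
print(verts.shape)  # (5023, 3)
```

With zero coefficients the model reproduces the mean mesh; varying individual entries of `betas` or `psi` moves the vertices along the corresponding basis direction, which is what the fitting and tracking repositories below optimize.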

To download the FLAME model, sign up under MPI-IS/FLAME and agree to the model license. Then you can download FLAME and other FLAME-related resources such as landmark embeddings, segmentation masks, quad template mesh, etc., from MPI-IS/FLAME/download. You can also download the model with a bash script such as fetch_FLAME.

Code

List of public repositories that use FLAME (alphabetical order).
  • BFM_to_FLAME: Conversion from Basel Face Model (BFM) to FLAME.
  • CVTHead: Controllable head avatar generation from a single image.
  • DECA: Reconstruction of 3D faces with animatable facial expression detail from a single image.
  • DiffPoseTalk: Speech-driven stylistic 3D facial animation.
  • diffusion-rig: Personalized model to edit facial expressions, head pose, and lighting in portrait images.
  • EMOCA: Reconstruction of emotional 3D faces from a single image.
  • EMOTE: Emotional speech-driven 3D face animation.
  • expgan: Face image generation with expression control.
  • FaceFormer: Speech-driven facial animation of meshes in FLAME mesh topology.
  • FateAvatar: Full-head Gaussian avatar with textural editing from monocular video.
  • FLAME-Blender-Add-on: FLAME Blender add-on.
  • flame-fitting: Fitting of FLAME to scans.
  • flame-head-tracker: FLAME-based monocular video tracking.
  • FLAME_PyTorch: FLAME PyTorch layer.
  • GAGAvatar: Reconstruction of controllable 3D head avatars from a single image.
  • GANHead: Animatable neural head avatar.
  • GaussianAvatars: Photorealistic head avatars with FLAME-rigged 3D Gaussians.
  • GIF: Generating face images with FLAME parameter control.
  • GPAvatar: Prediction of controllable 3D head avatars from one or several images.
  • INSTA: Volumetric head avatars from videos in less than 10 minutes.
  • INSTA-pytorch: Volumetric head avatars from videos in less than 10 minutes (PyTorch).
  • learning2listen: Modeling interactional communication in dyadic conversations.
  • LightAvatar-TensorFlow: Use of neural light fields (NeLF) to build photorealistic 3D head avatars.
  • MeGA: Reconstruction of an editable hybrid mesh-Gaussian head avatar.
  • metrical-tracker: Metrical face tracker for monocular videos.
  • MICA: Reconstruction of metrically accurate 3D faces from a single image.
  • MultiTalk: Speech-driven facial animation of meshes in FLAME mesh topology.
  • NED: Facial expression of emotion manipulation in videos.
  • NeRSemble: Building a neural head avatar from multi-view video data.
  • neural-head-avatars: Building a neural head avatar from video sequences.
  • NeuralHaircut: Creation of strand-based hairstyles from single-view or multi-view videos.
  • Next3D: 3D generative model with FLAME parameter control.
  • photometric_optimization: Fitting of FLAME to images using differentiable rendering.
  • RGBAvatar: Reduced Gaussian blendshapes for online modeling of head avatars.
  • RingNet: Reconstruction of 3D faces from a single image.
  • ROME: Creation of personalized avatars from a single image.
  • SAFA: Animation of face images.
  • Semantify: Semantic control over 3DMM parameters.
  • SMIRK: Reconstruction of emotional 3D faces from a single image.
  • SPARK: Personalized real-time monocular face capture.
  • SPECTRE: Speech-aware 3D face reconstruction from images.
  • SplattingAvatar: Real-time human avatars with mesh-embedded Gaussian splatting.
  • TalkingStyle: Personalized speech-driven 3D facial animation of meshes in FLAME mesh topology.
  • TF_FLAME: Fitting of FLAME to 2D/3D landmarks, FLAME meshes, or sampled textured meshes.
  • TRUST: Racially unbiased skin tone estimation from images.
  • VHAP: 3D face tracker for single-view and multi-view videos.
  • video-head-tracker: Tracking of 3D heads in video sequences.
  • VOCA: Speech-driven facial animation of meshes in FLAME mesh topology.

Datasets

List of datasets with meshes in FLAME topology.
  • BP4D+: 127 subjects, one neutral expression mesh each.
  • CoMA dataset: 12 subjects, 12 extreme dynamic expressions each.
  • D3DFACS: 10 subjects, 519 dynamic expressions in total.
  • Decaf dataset: Deformation capture for face and hand interactions.
  • FaceWarehouse: 150 subjects, one neutral expression mesh each.
  • FaMoS: 95 subjects, 28 dynamic expressions and head poses each, about 600K frames in total.
  • Florence 2D/3D: 53 subjects, one neutral expression mesh each.
  • FRGC: 531 subjects, one neutral expression mesh each.
  • LYHM: 1216 subjects, one neutral expression mesh each.
  • MEAD reconstructions: 3D face reconstructions for MEAD (emotional talking-face dataset).
  • MEAD-3D: FLAME meshes and model parameters for MEAD.
  • NeRSemble dataset: 10 sequences of multi-view images and 3D faces in FLAME mesh topology.
  • RenderMe-360: Digital asset library for high-fidelity head avatars with labeled FLAME parameters.
  • SingingHead: 27 hours of synchronized singing video, audio, 3D facial motion, and background music from 76 subjects.
  • Stirling: 133 subjects, one neutral expression mesh each.
  • VOCASET: 12 subjects, 40 speech sequences each with synchronized audio.

Publications

List of FLAME-based scientific publications.

2026

2025

2024

2023

2022

2021

2020

2019