OneFormer: One Transformer to Rule Universal Image Segmentation

May 24, 2023


Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi

Equal Contribution

[Project Page] [arXiv] [pdf] [BibTeX]

This repo contains the code for our paper OneFormer: One Transformer to Rule Universal Image Segmentation.

Features

  • OneFormer is the first multi-task universal image segmentation framework based on transformers.
  • OneFormer needs to be trained only once with a single universal architecture, a single model, and on a single dataset, to outperform existing frameworks across semantic, instance, and panoptic segmentation tasks.
  • OneFormer uses a task-conditioned joint training strategy, uniformly sampling different ground truth domains (semantic, instance, or panoptic) by deriving all labels from panoptic annotations to train its multi-task model.
  • OneFormer uses a task token to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model (see the inference sketch below).
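
The task-dynamic behavior above is easiest to see through the 🤗 HuggingFace transformers port mentioned in the News section. Below is a minimal sketch, assuming the shi-labs/oneformer_ade20k_swin_tiny checkpoint from the Hub and a sample COCO image URL; both are illustrative, and any OneFormer checkpoint works the same way.

```python
# Minimal sketch of task-dynamic inference via the 🤗 transformers port.
# The checkpoint name and image URL are illustrative placeholders.
import requests
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

ckpt = "shi-labs/oneformer_ade20k_swin_tiny"
processor = OneFormerProcessor.from_pretrained(ckpt)
model = OneFormerForUniversalSegmentation.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One set of weights serves all three tasks; only the task token changes.
for task in ("semantic", "instance", "panoptic"):
    inputs = processor(images=image, task_inputs=[task], return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    if task == "semantic":
        # Per-pixel class-id map at the original resolution.
        result = processor.post_process_semantic_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]
    elif task == "instance":
        # Dict with a "segmentation" map and per-instance "segments_info".
        result = processor.post_process_instance_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]
    else:
        # Dict with a "segmentation" map and per-segment "segments_info".
        result = processor.post_process_panoptic_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]
    print(task, type(result))
```

For the semantic task the post-processed result is a per-pixel label map; for the instance and panoptic tasks it is a dict containing a segmentation map plus segments_info.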

[Figure: OneFormer overview]

Contents

  1. News
  2. Installation Instructions
  3. Dataset Preparation
  4. Execution Instructions
  5. Results
  6. Citation

News

  • [February 27, 2023]: OneFormer is accepted to CVPR 2023!
  • [January 26, 2023]: OneFormer sets new SOTA performance on the Mapillary Vistas val (both panoptic & semantic segmentation) and Cityscapes test (panoptic segmentation) sets. We’ve released the checkpoints too!
  • [January 19, 2023]: OneFormer is now available as a part of the 🤗 HuggingFace transformers library and model hub! 🚀
  • [December 26, 2022]: Checkpoints for Swin-L OneFormer and DiNAT-L OneFormer trained on ADE20K with 1280×1280 resolution released!
  • [November 23, 2022]: Roboflow covers OneFormer on YouTube! Thanks to @SkalskiP for making the video!
  • [November 18, 2022]: Our demo is available on 🤗 Huggingface Space!
  • [November 10, 2022]: Project Page, ArXiv Preprint and GitHub Repo are public!
    • OneFormer sets new SOTA on Cityscapes val with single-scale inference on Panoptic Segmentation with 68.5 PQ score and Instance Segmentation with 46.7 AP score!
    • OneFormer sets new SOTA on ADE20K val on Panoptic Segmentation with 51.5 PQ score and on Instance Segmentation with 37.8 AP!
    • OneFormer sets new SOTA on COCO val on Panoptic Segmentation with 58.0 PQ score!

Installation Instructions

  • We use Python 3.8 and PyTorch 1.10.1 (CUDA 11.3 build).
  • We use Detectron2 v0.6.
  • For complete installation instructions, please see INSTALL.md; a quick environment-check sketch follows below.
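
As an unofficial sanity check (not part of INSTALL.md), the short sketch below verifies that the interpreter sees the versions listed above; adjust the expected strings if you install a different build.

```python
# Unofficial environment check against the versions listed above.
import sys
import torch
import detectron2

print("Python    :", sys.version.split()[0])   # expected 3.8.x
print("PyTorch   :", torch.__version__)        # expected 1.10.1 (+cu113)
print("CUDA build:", torch.version.cuda)       # expected 11.3
print("Detectron2:", detectron2.__version__)   # expected 0.6
assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"
```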

Dataset Preparation

  • We experiment on three major benchmark datasets: ADE20K, Cityscapes, and COCO 2017.
  • Please see Preparing Datasets for OneFormer for complete instructions on preparing the datasets; a small layout-check sketch follows below.
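
OneFormer builds on Detectron2, which resolves dataset paths through the DETECTRON2_DATASETS environment variable. The sketch below only illustrates that convention; the sub-folder names are assumptions based on the usual Detectron2 layout, and the linked preparation guide is the authoritative reference.

```python
# Sketch of the Detectron2 dataset-root convention; the folder names below
# are assumed, see the dataset preparation guide for the exact layout.
import os
from pathlib import Path

os.environ.setdefault("DETECTRON2_DATASETS", "/path/to/datasets")
root = Path(os.environ["DETECTRON2_DATASETS"])

for name in ("ADEChallengeData2016", "cityscapes", "coco"):
    print(name, "->", "found" if (root / name).is_dir() else "missing")
```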

Execution Instructions

Training

  • We train most of our models using 8 A6000 GPUs (48 GB each).
  • We use 8 A100 GPUs (80 GB each) to train Swin-L OneFormer and DiNAT-L OneFormer on COCO, all models with the ConvNeXt-XL backbone, and the 896×896 models on ADE20K (an illustrative launch sketch follows this list).
  • Please see Getting Started with OneFormer for training commands.
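
For orientation only, a Detectron2-style launch typically looks like the sketch below; train_net.py and its flags follow standard Detectron2 conventions, the config path is a placeholder, and Getting Started with OneFormer remains the authoritative reference.

```python
# Illustrative Detectron2-style training launch; the config path is a
# placeholder, see GETTING_STARTED.md for the exact commands and configs.
import subprocess

subprocess.run(
    [
        "python", "train_net.py",
        "--num-gpus", "8",
        "--config-file", "configs/ade20k/swin/oneformer_swin_large_bs16_160k.yaml",  # placeholder
    ],
    check=True,
)
# For evaluation, the same script is conventionally run with "--eval-only"
# plus a "MODEL.WEIGHTS /path/to/checkpoint.pth" override.
```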

Evaluation

  • Please see Getting Started with OneFormer for evaluation commands.

Demo

  • We provide quick-to-run demos on Google Colab and Hugging Face Spaces.
  • Please see OneFormer Demo for command line instructions on running the demo.

Results

  • † denotes the backbones were pretrained on ImageNet-22k.
  • Pre-trained models can be downloaded following the instructions given under tools.

ADE20K

| Method | Backbone | Crop Size | PQ | AP | mIoU (s.s) | mIoU (ms+flip) | #params | config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OneFormer | Swin-L | 640×640 | 49.8 | 35.9 | 57.0 | 57.7 | 219M | config | model |
| OneFormer | Swin-L | 896×896 | 51.1 | 37.6 | 57.4 | 58.3 | 219M | config | model |
| OneFormer | Swin-L | 1280×1280 | 51.4 | 37.8 | 57.0 | 57.7 | 219M | config | model |
| OneFormer | ConvNeXt-L | 640×640 | 50.0 | 36.2 | 56.6 | 57.4 | 220M | config | model |
| OneFormer | DiNAT-L | 640×640 | 50.5 | 36.0 | 58.3 | 58.4 | 223M | config | model |
| OneFormer | DiNAT-L | 896×896 | 51.2 | 36.8 | 58.1 | 58.6 | 223M | config | model |
| OneFormer | DiNAT-L | 1280×1280 | 51.5 | 37.1 | 58.3 | 58.7 | 223M | config | model |
| OneFormer (COCO-Pretrained) | DiNAT-L | 1280×1280 | 53.4 | 40.2 | 58.4 | 58.8 | 223M | config | model \| pretrained |
| OneFormer | ConvNeXt-XL | 640×640 | 50.1 | 36.3 | 57.4 | 58.8 | 372M | config | model |

Cityscapes

| Method | Backbone | PQ | AP | mIoU (s.s) | mIoU (ms+flip) | #params | config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OneFormer | Swin-L | 67.2 | 45.6 | 83.0 | 84.4 | 219M | config | model |
| OneFormer | ConvNeXt-L | 68.5 | 46.5 | 83.0 | 84.0 | 220M | config | model |
| OneFormer (Mapillary Vistas-Pretrained) | ConvNeXt-L | 70.1 | 48.7 | 84.6 | 85.2 | 220M | config | model \| pretrained |
| OneFormer | DiNAT-L | 67.6 | 45.6 | 83.1 | 84.0 | 223M | config | model |
| OneFormer | ConvNeXt-XL | 68.4 | 46.7 | 83.6 | 84.6 | 372M | config | model |
| OneFormer (Mapillary Vistas-Pretrained) | ConvNeXt-XL | 69.7 | 48.9 | 84.5 | 85.8 | 372M | config | model \| pretrained |

COCO

| Method | Backbone | PQ | PQ^Th | PQ^St | AP | mIoU | #params | config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OneFormer | Swin-L | 57.9 | 64.4 | 48.0 | 49.0 | 67.4 | 219M | config | model |
| OneFormer | DiNAT-L | 58.0 | 64.3 | 48.4 | 49.2 | 68.1 | 223M | config | model |

Mapillary Vistas

| Method | Backbone | PQ | mIoU (s.s) | mIoU (ms+flip) | #params | config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- |
| OneFormer | Swin-L | 46.7 | 62.9 | 64.1 | 219M | config | model |
| OneFormer | ConvNeXt-L | 47.9 | 63.2 | 63.8 | 220M | config | model |
| OneFormer | DiNAT-L | 47.8 | 64.0 | 64.9 | 223M | config | model |

Citation

If you find OneFormer useful in your research, please consider starring ⭐ us on GitHub and citing 📚 our paper!

@inproceedings{jain2023oneformer,
      title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
      author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
      booktitle={CVPR},
      year={2023}
}

Acknowledgement

We thank the authors of Mask2Former, GroupViT, and Neighborhood Attention Transformer for releasing their helpful codebases.