Unifying Fine-grained Perception into MLLMs without Task Decoders: 16 Tokens Enable Precise Segmentation

October 1, 2025 · View on GitHub


This repo is the official implementation of the paper 🛸 UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface, as well as its follow-ups. We have made every effort to keep the codebase clean, concise, easily readable, state-of-the-art, and reliant on only minimal dependencies.

UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface

Hao Tang, Chenwei Xie, Haiyang Wang, Xiaoyi Bao, Tingyu Weng, Pandeng Li, Yun Zheng†, Liwei Wang†

📣 News

  • [25-10-1] We release checkpoints of UFO-InternVL2.5-8B in the repo.
  • [25-9-19] 🔥 UFO is accepted by NeurIPS 2025 as a Spotlight!
  • [25-3-12] We release separate repos of UFO-InternVL2-8B and add REC inference to the InternVL repo.
  • [25-3-4] 🚀 Training and inference code is released.
  • [25-3-3] 👀 UFO is released on arXiv.

Overview

👀 Todo

  • Release the arXiv version.
  • Release code and models of multi-task training on UFO-ViT.
  • Release code and models of fine-grained instruction tuning on UFO-InternVL2.5-8B and UFO-LLaVA-1.5-7B.
  • Release full code and models of multi-task training on UFO-InternVL2.5-8B.

🤔 Introduction

Previous efforts to introduce fine-grained perception tasks into MLLMs rely heavily on task-specific decoders or suboptimal formats (e.g., polygons), impeding unified visual modeling. To overcome this, we propose UFO:

  • 😮 We reformulate segmentation as embedding retrieval: the mask token embedding computes dot-product similarity with the image features, and high-similarity positions are retrieved to form the mask (see the sketch after this list).

  • 🚀 We take a first step in exploring the image representation capabilities of MLLMs. Since MLLMs excel at understanding, we argue that the mask information is already contained in their image features; we only need to retrieve it.

  • 🤗 Fully aligned with the open-ended language interface: UFO unifies detection and segmentation through the open-ended language interface without any additional decoders, enabling seamless integration with MLLMs.

  • 🔥 Competitive performance: UFO surpasses GiT, a text-based generalist model, by 12.3 mAP on COCO instance segmentation and 3.3 mIoU on ADE20K. It also matches or exceeds decoder-based methods on various grounding tasks, eliminating the need for task-specific decoders.
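
The retrieval formulation above can be illustrated with a minimal sketch. This is not the released implementation: the tensor shapes, the sigmoid scoring, and the 0.5 threshold are assumptions for illustration only.

```python
# Minimal sketch of "segmentation as embedding retrieval" (illustrative only;
# shapes, scoring, and threshold are assumptions, not the released code).
import torch

def retrieve_mask(mask_token: torch.Tensor,   # (C,) mask token embedding from the MLLM
                  image_feats: torch.Tensor,  # (H, W, C) image features from the MLLM
                  threshold: float = 0.5) -> torch.Tensor:
    # Score every spatial position by its dot-product similarity with the mask token.
    scores = torch.einsum("hwc,c->hw", image_feats, mask_token)  # (H, W)
    # Retrieve the high-similarity positions as the predicted binary mask.
    return scores.sigmoid() > threshold

mask = retrieve_mask(torch.randn(256), torch.randn(32, 32, 256))
print(mask.shape)  # torch.Size([32, 32])
```

Predicting several mask tokens and combining their similarity maps would be one way to obtain finer masks under this formulation, which is presumably what the tagline's 16 tokens refer to.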

🚀 Main Results

Single-Task Benchmark

| Model | Params | Metric | Performance | ckpt | config |
|---|---|---|---|---|---|
| UFO-ViT-B (detection) | 131M | mAP | 47.8 | ckpt | config |
| UFO-ViT-B (insseg) | 131M | mAP | 42.6 | ckpt | config |
| UFO-ViT-B (semseg) | 131M | mIoU | 49.5 | ckpt | config |
| UFO-ViT-B (caption) | 131M | BLEU-4 | 34.2 | ckpt | config |
| UFO-ViT-B (grounding) | 131M | Acc@0.5 | 83.6 | ckpt | config |

Multi-Task Benchmark

| Model | Params | Detection | Ins Seg | Sem Seg | Caption | Grounding | ckpt | config |
|---|---|---|---|---|---|---|---|---|
| UFO-ViT-B (multi-task) | 131M | 48.3 | 43.5 | 50.2 | 35.3 | 85.8 | ckpt | config |
| UFO-ViT-L (multi-task) | 387M | 52.9 | 47.3 | 54.0 | 35.9 | 88.5 | ckpt | config |
| UFO-ViT-H (multi-task) | 756M | 54.1 | 48.1 | 55.7 | 37.6 | 89.2 | ckpt | config |

Task Synergy in Multi-Tasking Training

| Model | Params | Detection | Ins Seg | Sem Seg | Caption | Grounding |
|---|---|---|---|---|---|---|
| UFO-B (single-task) | 131M | 47.8 | 42.6 | 49.5 | 34.2 | 83.6 |
| Improvement | | +0.5 | +0.9 | +0.7 | +1.1 | +2.2 |
| UFO-B (multi-task) | 131M | 48.3 | 43.5 | 50.2 | 35.3 | 85.8 |

MLLM Performance on Multi-Task Benchmark

UFO-InternVL2.5-8B:

| Resolution | Detection | Ins Seg | Sem Seg | Caption | Grounding | ckpt | config |
|---|---|---|---|---|---|---|---|
| 448x448 | 44.0 | 37.4 | 53.9 | 39.6 | 90.4 | ckpt | config |
| 896x896 | 50.9 | 43.6 | 54.6 | - | - | ckpt | config |
| 1344x1344 | 51.9 | 45.2 | - | - | - | ckpt | config |

Visual Grounding

RefCOCO Validation Set

| Model | REC | RES | ckpt | config |
|---|---|---|---|---|
| UFO-LLaVA-1.5-7B | 89.9 | 76.2 | ckpt | config |
| UFO-LLaVA-1.5-7B (ft) | 90.8 | 77.2 | ckpt | config |
| UFO-InternVL2.5-8B | 91.8 | 80.0 | ckpt | config |
| UFO-InternVL2.5-8B (ft) | 93.1 | 81.0 | ckpt | config |

Reasoning Segmentation

| Model | Overall | Short Query | Long Query | ckpt | config |
|---|---|---|---|---|---|
| UFO-LLaVA-1.5-7B | 53.8 | 40.1 | 58.2 | ckpt | config |
| UFO-LLaVA-1.5-7B (ft) | 58.0 | 46.3 | 61.7 | ckpt | config |
| UFO-InternVL2.5-8B | 60.0 | 48.7 | 63.6 | ckpt | config |
| UFO-InternVL2.5-8B (ft) | 67.0 | 56.2 | 70.4 | ckpt | config |

🛠️ Quick Start

Installation

```shell
conda create -n UFO python=3.11
conda activate UFO

pip install torch==2.1.0+cu118 torchvision==0.16.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html
pip install -U openmim
mim install "mmengine==0.8.3"
mim install "mmcv==2.1.0"
pip install "transformers==4.37.2"

git clone git@github.com:nnnth/UFO.git
cd UFO

pip install -v -e .
pip install -r requirements/optional.txt
pip install -r requirements/runtime.txt
```
  • (Optional) Install Java manually for image caption evaluation. Without Java, image captioning training runs normally, but caption evaluation will fail.
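
As a quick sanity check (not part of the original instructions), the pinned packages can be imported to verify that the installation resolved correctly:

```python
# Optional sanity check that the pinned dependencies are importable.
import torch, torchvision, mmengine, mmcv, transformers

print(torch.__version__, torch.cuda.is_available())  # expect 2.1.0+cu118 and True on a CUDA machine
print(torchvision.__version__, mmengine.__version__, mmcv.__version__, transformers.__version__)
```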

Dataset Preparation

Multi-Tasking Dataset

We follow GiT to prepare the multi-task datasets. Please refer here for more details.

Instruction Tuning Dataset

We use 24 datasets for instruction tuning on MLLMs. For more details, please refer here.

Download Pretraining Weight

We use LLaVA-1.5-7B and InternVL2.5-8B as pretrained MLLMs. For multi-task training on UFO-ViT, we also use the BERT tokenizer and BERT embeddings. Please download and organize them as follows:

```
UFO
|──ckpt
|──|──llava-1.5-7b-hf
|──|──InternVL2_5-8B
|──|──bert-base-uncased
|──|──bert_embed_womask.pt
|──|──bert_embed.pt
|──|──bert_embed_large.pt
|──|──bert_embed_huge.pt
```
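
One possible way to fetch the MLLM and BERT weights into ckpt/ is via huggingface_hub; the Hub repo ids below are assumptions inferred from the folder names above, and the bert_embed*.pt files are not covered by this snippet.

```python
# Hedged example: download pretrained weights into ./ckpt with huggingface_hub.
# The repo ids are assumptions inferred from the folder names above.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="llava-hf/llava-1.5-7b-hf", local_dir="ckpt/llava-1.5-7b-hf")
snapshot_download(repo_id="OpenGVLab/InternVL2_5-8B", local_dir="ckpt/InternVL2_5-8B")
snapshot_download(repo_id="bert-base-uncased", local_dir="ckpt/bert-base-uncased")
```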

For InternVL2_5-8B, we add a custom function for LoRA training. Please replace the original file by following the issue.

Demo

Please download the checkpoints from kanashi6/UFO, then save them under the root directory:

```
UFO
|──ufo-vit-b-single-det.pth
|──ufo-vit-b-single-insseg.pth
|──...
```
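
For example, a single released checkpoint can be fetched with huggingface_hub (an illustrative snippet, not an official script; the filename is taken from the layout above):

```python
# Hedged example: fetch one released checkpoint from the kanashi6/UFO Hub repo
# into the repo root.
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="kanashi6/UFO",
                filename="ufo-vit-b-single-det.pth",
                local_dir=".")
```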

Run the demo on detection (COCO):

```shell
python demo.py --img_path demo/demo.jpg --config configs/UFO-ViT/single_detection_base.py \
  --ckpt_path ./ufo-vit-b-single-det.pth --out_dir ./vis/ --task detection
```

Run the demo on RES:

```shell
python demo.py --img_path demo/demo.jpg --config configs/UFO-InternVL2_5-8B/internvl2_5_8b_res_ft_2w.py \
  --ckpt_path ./ufo-internvl2_5-8b-res.pth --out_dir ./vis/ --task res --text bench
```

Scripts

For training and evaluation commands, please refer here.

👍 Acknowledgement

  • MMDetection: the codebase we build upon. Thanks for providing such a convenient framework.
  • GiT: we use the multi-task benchmark established by GiT.
  • InternVL: we borrow the MLLM code from the InternVL repo.

📘 Citation

If you find our work helpful, please consider citing it as follows.

```bibtex
@article{tang2025ufo,
    title={UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface},
    author={Hao Tang and Chenwei Xie and Haiyang Wang and Xiaoyi Bao and Tingyu Weng and Pandeng Li and Yun Zheng and Liwei Wang},
    journal={arXiv preprint arXiv:2503.01342},
    year={2025}
}
```

✨ Star History

Star History Chart