October 31, 2024 · View on GitHub
RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
Yonglin Li, Jing Zhang, Xiao Teng, Long Lan, Xinwang Liu
Introduction
This is the official repository of the paper RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
In this study, we present RefSAM, which for the first time explores the potential of SAM for RVOS by incorporating multi-view information from diverse modalities and from successive frames at different timestamps. Our approach adapts the original SAM model for cross-modality learning by employing a lightweight Cross-Modal MLP that projects the text embedding of the referring expression into sparse and dense embeddings, which serve as user-interactive prompts. A parameter-efficient tuning strategy is then employed to effectively align and fuse the language and vision features. Comprehensive ablation studies demonstrate the practicality and effectiveness of our design choices, and extensive experiments on the Ref-YouTube-VOS and Ref-DAVIS17 datasets validate the superiority of RefSAM over existing methods.
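To illustrate the idea, the Cross-Modal MLP can be sketched as a small module that maps a pooled text embedding of the referring expression into sparse (token-like) and dense (spatial) prompt embeddings for a SAM-style mask decoder. This is a minimal hypothetical sketch, not the official implementation: the class name, layer widths, number of sparse tokens, and the dense broadcasting scheme are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossModalMLP(nn.Module):
    """Hypothetical sketch of a lightweight cross-modal projection.

    Projects a sentence-level text embedding into sparse and dense
    prompt embeddings; all dimensions below are illustrative, not
    the paper's actual configuration.
    """

    def __init__(self, text_dim=512, prompt_dim=256, num_sparse=2, dense_hw=(64, 64)):
        super().__init__()
        self.num_sparse = num_sparse
        self.dense_hw = dense_hw
        # Shared two-layer MLP trunk over the pooled text embedding
        self.trunk = nn.Sequential(nn.Linear(text_dim, prompt_dim), nn.GELU())
        # Separate heads for sparse (token-like) and dense (spatial) prompts
        self.sparse_head = nn.Linear(prompt_dim, num_sparse * prompt_dim)
        self.dense_head = nn.Linear(prompt_dim, prompt_dim)

    def forward(self, text_emb):
        # text_emb: (B, text_dim) pooled embedding of the referring expression
        h = self.trunk(text_emb)
        b = h.shape[0]
        # Sparse prompts: a few learned tokens per expression, shape (B, N, C)
        sparse = self.sparse_head(h).view(b, self.num_sparse, -1)
        # Dense prompt: broadcast a per-expression bias over the prompt grid,
        # shape (B, C, H, W), analogous to SAM's dense mask prompt
        hh, ww = self.dense_hw
        dense = self.dense_head(h)[:, :, None, None].expand(b, -1, hh, ww)
        return sparse, dense

mlp = CrossModalMLP()
sparse, dense = mlp(torch.randn(2, 512))
print(sparse.shape, dense.shape)  # torch.Size([2, 2, 256]) torch.Size([2, 256, 64, 64])
```

In this sketch only the MLP is trained, while the SAM image encoder stays frozen, which is one plausible reading of the parameter-efficient tuning described above.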
Usage
Results
Results on RVOS datasets
A comprehensive comparison between RefSAM and existing methods.
Visualization Results
Visualization results of our RefSAM model on Ref-DAVIS17.
We show the visualization results of our RefSAM model. RefSAM effectively segments and tracks the referred object even in challenging scenarios, such as variations in person poses and occlusions between instances.
Visualization of different models on Ref-DAVIS17.
Furthermore, we present the results of different models. Our RefSAM clearly demonstrates significantly enhanced cross-modal understanding capability.
Model Analysis
The influence of different learning rates for the learnable modules.
Ablation study of different module designs.
Ablation study of the key components.
Influence of the model size of the Visual Encoder.
Number of learnable parameters of different models.
Inference speed of different models.
Statement
This project is for research purposes only. For any other questions, please contact yonglin_edu@163.com.