EVOLVE: Event-Guided Deformable Feature Transfer and Dual-Memory Refinement for Low-Light Video Object Segmentation (ICCV 2025)
March 24, 2026 · View on GitHub
News
- [New] We uploaded the LLE-VOS qualitative results: Google Drive Link
- [New] We uploaded the LLE-DAVIS qualitative results: Google Drive Link
Key Features
- Event-guided Deformable Feature Transfer Module
- Dual-Memory Object Transformer
- Memory Refinement Module
Data preparation & Installation
See Datasets & Installation
Training Command
We trained with four A6000 GPUs, which took around 10 hours on LLE-VOS.
```
OMP_NUM_THREADS=4 torchrun --master_port 25357 --nproc_per_node=4 evolve/train.py exp_id=[some unique id] model=base data=base
```
- Change `nproc_per_node` to change the number of GPUs.
- Prepend `CUDA_VISIBLE_DEVICES=...` if you want to use specific GPUs.
- Change `master_port` if you encounter a port collision.
- `exp_id` is a unique experiment identifier that does not affect how the training is done.
- Models and visualizations will be saved in `./output/`.
- To load a pre-trained model, e.g., to continue main training from the final model from pre-training, specify `weights=[path to the model]`.
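Putting the flags above together, a hypothetical invocation that resumes main training from a pre-trained checkpoint on two specific GPUs with a custom port might look like this (the `exp_id` value and checkpoint path are placeholders, not files shipped with the repo):

```shell
# Run on GPUs 0 and 1 only, so --nproc_per_node must match (2 processes).
# A non-default --master_port avoids collisions with other jobs on the node.
CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 torchrun --master_port 25358 \
    --nproc_per_node=2 evolve/train.py exp_id=resume_run model=base data=base \
    weights=./output/pretrain/model_final.pth
```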
Evaluation Command
```
python cutie/eval_vos.py dataset=[dataset] weights=[path to model file] model=[small/base]
```
- Possible options for `dataset`: see `config/eval_config.yaml`.
- We evaluate our models with the base model setting.
Qualitative Results
- LLE-VOS Dataset
- LLE-DAVIS Dataset
Citation
```bibtex
@InProceedings{Baek_2025_ICCV,
    author    = {Baek, Jong-Hyeon and Oh, Jiwon and Koh, Yeong Jun},
    title     = {EVOLVE: Event-Guided Deformable Feature Transfer and Dual-Memory Refinement for Low-Light Video Object Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {11273-11282}
}
```