SODFormer: Streaming Object Detection with Transformers Using Events and Frames

July 21, 2023

This is the official implementation of SODFormer, a novel multimodal streaming object detector with transformers. For more details, please refer to:

SODFormer: Streaming Object Detection with Transformers Using Events and Frames
Dianze Li, Jianing Li, and Yonghong Tian, Fellow, IEEE

Setup

This code has been tested with Python 3.9, PyTorch 1.7.1, CUDA 10.1, and cuDNN 7.6.3 on Ubuntu 16.04.

  • Clone the repository
git clone --depth=1 https://github.com/dianzl/SODFormer.git && cd SODFormer
  • Setup python environment

    We recommend using Anaconda to create a conda environment:

conda create -n sodformer python=3.9 pip
source activate sodformer
  • Install PyTorch
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
  • Other requirements
pip install -r requirements.txt
  • Compile CUDA operators
cd ./models/ops
sh ./make.sh
# unit test (all checks should print True)
python test.py

Data preparation

Please download the PKU-DAVIS-SOD dataset and organize it as follows:

code_root/
└── data/
    ├── raw/
    │   ├── train/
    │   ├── val/
    │   └── test/
    │       ├── normal/
    │       │   └── 001_test_normal.aedat4
    │       ├── low_light/
    │       └── motion_blur/
    └── annotations/
        ├── train/
        ├── val/
        └── test/

To save memory and accelerate data loading, we first convert the raw .aedat4 files into synchronous frames (.png) and events (.npy) as follows:

python ./data/davis346_temporal_event_to_npy.py
python ./data/davis346_to_images.py

After running these two scripts, the data should be automatically organized as:

code_root/
└── data/
    ├── aps_frames/
    │   ├── train/
    │   ├── val/
    │   └── test/
    │       ├── normal/
    │       │   └── 001_test_normal
    │       │       └── 0.png
    │       ├── low_light/
    │       └── motion_blur/
    └── events_npys/
        ├── train/
        ├── val/
        └── test/
            ├── normal/
            │   └── 001_test_normal
            │       └── 0.npy
            ├── low_light/
            └── motion_blur/
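As a rough illustration, a converted sample can be read back with NumPy. Note that the (timestamp, x, y, polarity) column layout below is an assumption for illustration, not a layout confirmed by the conversion scripts; check the output of `davis346_temporal_event_to_npy.py` for the actual format.

```python
# Minimal sketch of loading one converted event packet.
# The (t, x, y, p) column layout is a hypothetical assumption.
import numpy as np

def load_events(npy_path):
    """Load one event window saved as an (N, 4) array (assumed layout)."""
    events = np.load(npy_path)
    t, x, y, p = events.T
    return t, x, y, p

# Synthetic stand-in for data/events_npys/test/normal/001_test_normal/0.npy
demo = np.array([[1000, 120, 80, 1],
                 [1500, 121, 80, 0]], dtype=np.int64)
np.save("demo_events.npy", demo)

t, x, y, p = load_events("demo_events.npy")
print(len(t))  # number of events in this window
```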

Usage

Training

Training on a single node with a single GPU

You can simply run

./configs/sodformer.sh

to train a model with our default parameters. The meaning of each parameter can be found in param.txt.

Training on a single node with multiple GPUs

Our code also supports multi-GPU training on a single node. For example, the command for training SODFormer on 2 GPUs is as follows:

GPUS_PER_NODE=2 ./tools/run_dist_launch.sh 2 ./configs/sodformer.sh

Some tips to speed up training

  • You may increase the batch size to maximize GPU utilization, depending on your GPU memory, e.g., set '--batch_size 4' or '--batch_size 8'.
  • Some computation involving MultiScaleDeformableAttention can be accelerated by setting the batch size to an integer power of 2 (e.g., 4, 8, etc.).

Evaluation

You can get the pretrained models of SODFormer (the links are in the "Quantitative results" section), then run the following command to evaluate them on the PKU-DAVIS-SOD test set:

./configs/sodformer.sh --resume <path to pre-trained model> --eval

You can also run distributed evaluation by using

GPUS_PER_NODE=2 ./tools/run_dist_launch.sh 2 ./configs/sodformer.sh --resume <path to pre-trained model> --eval

Asynchronous prediction

Our code supports prediction on asynchronous frame and event streams of a single video. As before, we first generate asynchronous frames (.png) and events (.npy) from the raw .aedat4 file (using 001_test_normal.aedat4 as an example):

python ./data/asyn_event_npy.py --scene normal --filename 001_test_normal.aedat4

After running the above command, the asynchronous data of 001_test_normal.aedat4 should be automatically organized as:

code_root/
└── data/
    └── asyn/
        ├── events_npys/
        │   └── test/
        │       └── normal/
        │           └── 001_test_normal
        │               └── 0.npy
        └── davis_images/
            └── test/
                └── normal/
                    └── 001_test_normal
                        └── 0.png

The asynchronous prediction and visualization can be done as follows:

./configs/prediction.sh --resume <path to pre-trained model> --scene normal --datasetname 1 --vis_dir <path to save prediction images>

Main results

Quantitative results

| Modality | Method | Temporal cues | Input representation | AP_50 | Runtime (ms) | URL |
|---|---|---|---|---|---|---|
| Events | SSD-events | N | Event image | 0.221 | 7.2 | - |
| Events | NGA-events | N | Voxel grid | 0.232 | 8.0 | - |
| Events | Deformable DETR | N | Event image | 0.307 | 21.6 | e_nt |
| Events | Spatio-temporal Deformable DETR | Y | Event image | 0.334 | 25.0 | e_t |
| Frames | YOLOv3 | N | RGB frame | 0.426 | 7.9 | - |
| Frames | LSTM-SSD | Y | RGB frame | 0.456 | 22.4 | - |
| Frames | Deformable DETR | N | RGB frame | 0.461 | 21.5 | f_nt |
| Frames | Spatio-temporal Deformable DETR | Y | RGB frame | 0.489 | 24.9 | f_t |
| Events + Frames | MFEPD | N | Event image + RGB frame | 0.438 | 8.2 | - |
| Events + Frames | JDF | N | Channel image + RGB frame | 0.442 | 8.3 | - |
| Events + Frames | SODFormer | Y | Voxel grid + RGB frame | 0.491 | 41.5 | - |
| Events + Frames | SODFormer | Y | Event image + RGB frame | 0.504 | 39.7 | SODFormer |
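The "Event image" input representation above accumulates events in a time window into a 2D grid. A minimal sketch of one common variant is below: a two-channel (positive/negative polarity) count image at the DAVIS346 sensor resolution of 346x260. This is an illustrative choice, not necessarily the exact representation used in the paper's models.

```python
# Sketch of building a 2-channel event count image (assumed variant,
# not necessarily the paper's exact event-image definition).
import numpy as np

def events_to_event_image(x, y, p, height=260, width=346):
    """Accumulate per-pixel event counts, split by polarity.

    346x260 is the DAVIS346 resolution; channel 0 counts positive
    events (p == 1), channel 1 counts negative events (p == 0).
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    pos, neg = p == 1, p == 0
    np.add.at(img[0], (y[pos], x[pos]), 1.0)  # unbuffered add handles repeats
    np.add.at(img[1], (y[neg], x[neg]), 1.0)
    return img

x = np.array([10, 10, 20])
y = np.array([5, 5, 6])
p = np.array([1, 1, 0])
img = events_to_event_image(x, y, p)
print(img[0, 5, 10], img[1, 6, 20])  # 2.0 1.0
```

`np.add.at` is used instead of `img[0][y, x] += 1` so that repeated events at the same pixel are all counted.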

Demo

Low light demo

Motion blur demo

Synthetic dataset demo

Citation

  1. Deformable DETR: Deformable Transformers for End-to-End Object Detection
  2. TransVOD: End-to-End Video Object Detection with Spatial-Temporal Transformers