
January 10, 2018

This is a learning project that implements some variants of SSD in PyTorch. SSD is a one-stage object detector, probably "currently the best detector with respect to the speed-vs-accuracy trade-off". Many follow-up papers either further improve the detection accuracy, incorporate techniques like image segmentation for scene understanding (e.g. BlitzNet), modify SSD to detect rotatable objects (e.g. DRBox), or apply SSD to 3D object detection (e.g. Frustum PointNets):

  • SSD - "SSD: Single Shot MultiBox Detector" (2016) arXiv:1512.02325, github
  • DSSD - "DSSD : Deconvolutional Single Shot Detector" (2017) arXiv:1701.06659
  • RRC - "Accurate Single Stage Detector Using Recurrent Rolling Convolution" (2017) arXiv:1704.05776, github
  • RUN - "Residual Features and Unified Prediction Network for Single Stage Detection" (2017) arXiv:1707.05031
  • DSOD - "DSOD: Learning Deeply Supervised Object Detectors from Scratch" (2017) arXiv:1708.01241, github
  • BlitzNet - "BlitzNet: A Real-Time Deep Network for Scene Understanding" (2017) arXiv:1708.02813, github
  • RefineDet - "Single-Shot Refinement Neural Network for Object Detection" (2017) arXiv:1711.06897, github
  • DRBox - "Learning a Rotation Invariant Detector with Rotatable Bounding Box" (2017) arXiv:1711.09405, github
  • Frustum PointNets - "Frustum PointNets for 3D Object Detection from RGB-D Data" (2017) arXiv:1711.08488

Overview

| Model | Published | Backbone | Input size | Boxes | FPS | VOC07 | VOC12 | COCO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SSD300 | 2016 | VGG-16 | 300 × 300 | 8732 | 46 | 77.2 | 75.9 | 25.1 |
| SSD512 | 2016 | VGG-16 | 512 × 512 | 24564 | 19 | 79.8 | 78.5 | 28.8 |
| SSD321 | 2017.01 | ResNet-101 | 321 × 321 | 17080 | 11.2 | 77.1 | 75.4 | 28.0 |
| SSD513 | 2017.01 | ResNet-101 | 513 × 513 | 43688 | 6.8 | 80.6 | 79.4 | 31.2 |
| DSSD321 | 2017.01 | ResNet-101 | 321 × 321 | 17080 | 9.5 | 78.6 | 76.3 | 28.0 |
| DSSD513 | 2017.01 | ResNet-101 | 513 × 513 | 43688 | 5.5 | 81.5 | 80.0 | 33.2 |
| RUN300 | 2017.07 | VGG-16 | 300 × 300 | 11640 | 64 (Pascal) | 79.1 | 77.0 | |
| DSOD300 | 2017.08 | DS/64-192-48-1 | 300 × 300 | | 17.4 | 77.7 | 76.3 | 29.3 |
| BlitzNet300 | 2017.08 | ResNet-50 | 300 × 300 | 45390 | 24 | 78.5 | 75.4 | 29.7 |
| BlitzNet512 | 2017.08 | ResNet-50 | 512 × 512 | 32766 | 19.5 | 80.7 | 79.0 | 34.1 |
| RefineDet320 | 2017.11 | VGG-16 | 320 × 320 | 6375 | 40.3 | 80.0 | 78.1 | 29.4 |
| RefineDet512 | 2017.11 | VGG-16 | 512 × 512 | 16320 | 24.1 | 81.8 | 80.0 | 33.0 |
| RefineDet320 | 2017.11 | ResNet-101 | 320 × 320 | | | | | 32.0 |
| RefineDet512 | 2017.11 | ResNet-101 | 512 × 512 | | | | | 36.4 |
| RRC | 2017.04 | VGG-16 | 1272 × 375 | | | | | |
| DRBox | 2017.11 | VGG-16 | 300 × 300 | | | | | |
| Frustum PointNets (RGB part) | 2017.11 | VGG-16 | 1280 × 384 | | | | | |
  • FPS: number of images processed per second on a Titan X GPU (batch size 1)
  • VOC07: PASCAL VOC 2007 detection results (mAP), training data: 07+12 (07 trainval + 12 trainval)
  • VOC12: PASCAL VOC 2012 detection results (mAP), training data: 07++12 (07 trainval + 07 test + 12 trainval)
  • COCO: MS COCO 2015 test-dev detection results (mAP@[0.5:0.95]), trained on trainval35k
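The Boxes column is the number of default (prior) boxes each detector predicts over. For SSD300 this count follows directly from the six prediction feature maps and the number of box shapes per location given in the SSD paper, as a quick sanity check shows:

```python
# SSD300 default-box count: six feature maps with 4 or 6 default boxes
# per cell (sizes and per-cell counts are from the SSD paper).
feature_maps = [38, 19, 10, 5, 3, 1]   # spatial size of each prediction layer
boxes_per_cell = [4, 6, 6, 6, 4, 4]    # aspect-ratio variants at each location

total = sum(f * f * b for f, b in zip(feature_maps, boxes_per_cell))
print(total)  # 8732, matching the SSD300 row above
```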

All backbone networks above were pre-trained on the ImageNet CLS-LOC dataset, except for DSOD, which is trained from scratch.

Implemented

  • SSD
  • RRC
  • RUN
  • DSOD
  • BlitzNet (detection part)
  • DRBox
  • Frustum PointNets

Note: "Implemented" above means the code of the model is mostly done; it does not mean I have trained it, let alone reproduced the results of the original paper. So far I have only trained SSD300 on VOC07, and the best result I got is 76.5% mAP, lower than the 77.2% reported in the SSD paper. I'll continue this project once I find out what the problem is.

Requirements

  • Python 3.6+
  • numpy
  • cv2
  • pytorch
  • tensorboardX

Dataset

Download the VOC2007 and VOC2012 datasets and put them under a VOCdevkit directory:

```
VOCdevkit
-| VOC2007
   -| Annotations
   -| ImageSets
   -| JPEGImages
   -| SegmentationClass
   -| SegmentationObject
-| VOC2012
   -| Annotations
   -| ImageSets
   -| JPEGImages
   -| SegmentationClass
   -| SegmentationObject
```
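Each image's ground-truth boxes live in an XML file under Annotations. A minimal standard-library sketch of how such a file can be parsed (the helper name is my own; the XML layout is the VOC devkit's):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_string):
    """Extract (class_name, xmin, ymin, xmax, ymax) tuples from one VOC annotation."""
    root = ET.fromstring(xml_string)
    objects = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        coords = [int(box.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")]
        objects.append((name, *coords))
    return objects

# Toy annotation in the VOC format:
sample = """
<annotation>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>"""
print(parse_voc_annotation(sample))  # [('dog', 48, 240, 195, 371)]
```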

Usage

train:

```shell
python train.py --cuda --voc_root path/to/your/VOCdevkit --backbone path/to/your/vgg16_reducedfc.pth
```
The backbone network vgg16_reducedfc.pth is from repo amdegroot/ssd.pytorch (download link: https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth).

evaluate:

```shell
python train.py --cuda --test --voc_root path/to/your/VOCdevkit --checkpoint path/to/your/xxx.pth
```

show demo:

```shell
python train.py --cuda --demo --voc_root path/to/your/VOCdevkit --checkpoint path/to/your/xxx.pth
```
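The three commands above differ only in their flags. A hypothetical sketch of the argument parser behind train.py, covering just the options shown in this README (the real script may define more, with different defaults):

```python
import argparse

def build_parser():
    # Only the flags used in this README; the actual train.py may accept more.
    p = argparse.ArgumentParser(description="Train/evaluate/demo an SSD variant")
    p.add_argument("--cuda", action="store_true", help="run on GPU")
    p.add_argument("--test", action="store_true", help="evaluate a checkpoint")
    p.add_argument("--demo", action="store_true", help="show a detection demo")
    p.add_argument("--voc_root", type=str, help="path to VOCdevkit")
    p.add_argument("--backbone", type=str, help="pre-trained backbone weights (.pth)")
    p.add_argument("--checkpoint", type=str, help="model checkpoint to load (.pth)")
    return p

args = build_parser().parse_args(["--cuda", "--test", "--voc_root", "VOCdevkit"])
print(args.cuda, args.test, args.voc_root)  # True True VOCdevkit
```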

Results

VOC07 mAP

| Model | My result | Paper result |
| --- | --- | --- |
| SSD300 | 76.5% | 77.2% |

to be continued

Reference