small-object-detection-benchmark

December 20, 2024


🔥 our paper was presented at ICIP 2022, Bordeaux, France (16-19 October 2022)

📜 List of publications that cite this work (currently 300+)

summary

small-object detection benchmark on the visdrone and xview datasets using the fcos, vfnet and tood detectors

refer to Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection for the full technical analysis

citation

If you use any file/result from this repo in your work, please cite it as:

@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}

visdrone results

refer to table 1 in Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection for more detail on the visdrone results (AP50s, AP50m and AP50l denote AP50 over small, medium and large objects, respectively)

| setup | AP50 | AP50s | AP50m | AP50l | results | checkpoints |
|---------------------|------|-------|-------|-------|----------|-------------|
| FCOS+FI | 25.8 | 14.2 | 39.6 | 45.1 | download | request |
| FCOS+SAHI+PO | 29.0 | 18.9 | 41.5 | 46.4 | download | request |
| FCOS+SAHI+FI+PO | 31.0 | 19.8 | 44.6 | 49.0 | download | request |
| FCOS+SF+SAHI+PO | 38.1 | 25.7 | 54.8 | 56.9 | download | download |
| FCOS+SF+SAHI+FI+PO | 38.5 | 25.9 | 55.4 | 59.8 | download | download |
| VFNet+FI | 28.8 | 16.8 | 44.0 | 47.5 | download | request |
| VFNet+SAHI+PO | 32.0 | 21.4 | 45.8 | 45.5 | download | request |
| VFNet+SAHI+FI+PO | 33.9 | 22.4 | 49.1 | 49.4 | download | request |
| VFNet+SF+SAHI+PO | 41.9 | 29.7 | 58.8 | 60.6 | download | request |
| VFNet+SF+SAHI+FI+PO | 42.2 | 29.6 | 59.2 | 63.3 | download | request |
| TOOD+FI | 29.4 | 18.1 | 44.1 | 50.0 | download | request |
| TOOD+SAHI | 31.9 | 22.6 | 44.0 | 45.2 | download | request |
| TOOD+SAHI+PO | 32.5 | 22.8 | 45.2 | 43.6 | download | request |
| TOOD+SAHI+FI | 34.6 | 23.8 | 48.5 | 53.1 | download | request |
| TOOD+SAHI+FI+PO | 34.7 | 23.8 | 48.9 | 50.3 | download | request |
| TOOD+SF+FI | 36.8 | 24.4 | 53.8 | 66.4 | download | download |
| TOOD+SF+SAHI | 42.5 | 31.6 | 58.0 | 61.1 | download | download |
| TOOD+SF+SAHI+PO | 43.1 | 31.7 | 59.0 | 60.2 | download | download |
| TOOD+SF+SAHI+FI | 43.4 | 31.7 | 59.6 | 65.6 | download | download |
| TOOD+SF+SAHI+FI+PO | 43.5 | 31.7 | 59.8 | 65.4 | download | download |

xview results

refer to table 2 in Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection for more detail on the xview results

| setup | AP50 | AP50s | AP50m | AP50l | results | checkpoints |
|---------------------|------|-------|-------|-------|----------|-------------|
| FCOS+FI | 2.20 | 0.10 | 1.80 | 7.30 | download | request |
| FCOS+SF+SAHI | 15.8 | 11.9 | 18.4 | 11.0 | download | download |
| FCOS+SF+SAHI+PO | 17.1 | 12.2 | 20.2 | 12.8 | download | download |
| FCOS+SF+SAHI+FI | 15.7 | 11.9 | 18.4 | 14.3 | download | download |
| FCOS+SF+SAHI+FI+PO | 17.0 | 12.2 | 20.2 | 15.8 | download | download |
| VFNet+FI | 2.10 | 0.50 | 1.80 | 6.80 | download | request |
| VFNet+SF+SAHI | 16.0 | 11.9 | 17.6 | 13.1 | download | download |
| VFNet+SF+SAHI+PO | 17.7 | 13.7 | 19.7 | 15.4 | download | download |
| VFNet+SF+SAHI+FI | 15.8 | 11.9 | 17.5 | 15.2 | download | download |
| VFNet+SF+SAHI+FI+PO | 17.5 | 13.7 | 19.6 | 17.6 | download | download |
| TOOD+FI | 2.10 | 0.10 | 2.00 | 5.20 | download | request |
| TOOD+SF+SAHI | 19.4 | 14.6 | 22.5 | 14.2 | download | download |
| TOOD+SF+SAHI+PO | 20.6 | 14.9 | 23.6 | 17.0 | download | download |
| TOOD+SF+SAHI+FI | 19.2 | 14.6 | 22.3 | 14.7 | download | download |
| TOOD+SF+SAHI+FI+PO | 20.4 | 14.9 | 23.5 | 17.6 | download | download |

env setup

install pytorch:

conda install pytorch=1.10.0 torchvision=0.11.1 cudatoolkit=11.3 -c pytorch

install other requirements:

pip install -r requirements.txt

evaluation

  • download the desired checkpoint from the URLs in this README.

  • download the xview or visdrone dataset and convert it to COCO format.

  • set MODEL_PATH, MODEL_CONFIG_PATH, EVAL_IMAGES_FOLDER_DIR, EVAL_DATASET_JSON_PATH and INFERENCE_SETTING in the predict_evaluate_analyse script, then run the script.
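The core mechanism behind the SAHI and PO settings in the tables above is slicing an image into fixed-size, overlapping patches before inference. A minimal standalone sketch of that slicing step follows; this is illustrative only, not the repository's or the sahi library's actual implementation, and `slice_size` and `overlap_ratio` are assumed parameter names:

```python
def compute_slice_boxes(image_width, image_height, slice_size=512, overlap_ratio=0.2):
    """Return (xmin, ymin, xmax, ymax) slice boxes covering the image.

    Adjacent slices overlap by roughly overlap_ratio * slice_size pixels;
    slices at the right/bottom edges are shifted inward so every box has
    the full slice_size extent.
    """
    step = int(slice_size * (1 - overlap_ratio))
    boxes = []
    y = 0
    while True:
        ymax = min(y + slice_size, image_height)
        ymin = max(0, ymax - slice_size)
        x = 0
        while True:
            xmax = min(x + slice_size, image_width)
            xmin = max(0, xmax - slice_size)
            boxes.append((xmin, ymin, xmax, ymax))
            if xmax >= image_width:
                break
            x += step
        if ymax >= image_height:
            break
        y += step
    return boxes
```

For a 1024x768 image with 512-pixel slices and 20% overlap, this yields six overlapping 512x512 boxes; each slice would then be run through the detector and the per-slice detections merged back into full-image coordinates.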

roadmap

  • add train test split support for xview to coco converter
  • add mmdet config files (fcos, vfnet and tood) for xview training (9 train experiments)
  • add mmdet config files (fcos, vfnet and tood) for visdrone training (9 train experiments)
  • add coco result.json files, classwise coco eval results and error analysis plots for all xview experiments
  • add coco result.json files, classwise coco eval results and error analysis plots for all visdrone experiments
  • add .py scripts for inference + evaluation + error analysis using sahi
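The train/test split item in the roadmap could look roughly like the sketch below: an illustrative split of a COCO-format dict by image id (the function name, ratio and seed are assumptions, not the repository's converter):

```python
import random


def split_coco(coco, train_ratio=0.8, seed=42):
    """Split a COCO-format dict into train/test dicts by image.

    Annotations follow their image, so no image (and no annotation)
    appears in both splits. Categories are shared by both splits.
    """
    rng = random.Random(seed)
    image_ids = [img["id"] for img in coco["images"]]
    rng.shuffle(image_ids)
    n_train = int(len(image_ids) * train_ratio)
    train_ids = set(image_ids[:n_train])
    test_ids = set(image_ids[n_train:])

    def subset(keep):
        return {
            "images": [i for i in coco["images"] if i["id"] in keep],
            "annotations": [a for a in coco["annotations"] if a["image_id"] in keep],
            "categories": coco["categories"],
        }

    return subset(train_ids), subset(test_ids)
```

Splitting by image (rather than by annotation) keeps each split a valid COCO dataset that can be fed directly to COCO evaluation tooling.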