Getting Started

July 14, 2022

Installation

To set up the running environment, please refer to the installation instructions.

Data preparation

  • Please refer to PrepareDetDataSet for data preparation
  • Please set the data path in the dataset configuration file under configs/datasets
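
For reference, here is a sketch of the dataset paths such a configuration file typically sets (field names follow PaddleDetection's COCO dataset config; adjust dataset_dir to your local layout):

```yaml
# Sketch of a configs/datasets COCO entry; image_dir and anno_path
# are relative to dataset_dir.
TrainDataset:
  !COCODataSet
    image_dir: train2017
    anno_path: annotations/instances_train2017.json
    dataset_dir: dataset/coco
```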

Training & Evaluation & Inference

PaddleDetection provides scripts for training, evaluation, and inference with various features, controlled by different configuration files. For more details on distributed training, see [DistributedTraining](./DistributedTraining_en.md).

# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
# training on multi-GPU
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
# training on multi-machines and multi-GPUs
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
fleetrun --ips="10.127.6.17,10.127.5.142,10.127.45.13,10.127.44.151" --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml
# GPU evaluation
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams
# Inference
python tools/infer.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml --infer_img=demo/000000570688.jpg -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams

Other arguments

The arguments listed below can also be viewed with --help.

| FLAG | supported scripts | description | default | remark |
| --- | --- | --- | --- | --- |
| -c | ALL | Select config file | None | required, such as -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml |
| -o | ALL | Set parameters in config file | None | -o has higher priority than the file set by -c, such as -o use_gpu=False |
| --eval | train | Whether to perform evaluation during training | False | set --eval if needed |
| -r/--resume_checkpoint | train | Checkpoint path for resuming training | None | such as -r output/faster_rcnn_r50_1x_coco/10000 |
| --slim_config | ALL | Config file of slim method | None | such as --slim_config configs/slim/prune/yolov3_prune_l1_norm.yml |
| --use_vdl | train/infer | Whether to record data with VisualDL for display | False | VisualDL requires Python>=3.5 |
| --vdl_log_dir | train/infer | VisualDL logging directory | train: vdl_log_dir/scalar; infer: vdl_log_dir/image | VisualDL requires Python>=3.5 |
| --output_eval | eval | Directory for storing the evaluation output | None | such as --output_eval=eval_output; default is the current directory |
| --json_eval | eval | Whether to evaluate with an existing bbox.json or mask.json | False | set --json_eval if needed; the json path is set via --output_eval |
| --classwise | eval | Whether to evaluate AP per class and draw the PR curve | False | set --classwise if needed |
| --output_dir | infer | Directory for storing the output visualization files | ./output | such as --output_dir output |
| --draw_threshold | infer | Score threshold for keeping results in visualization | 0.5 | such as --draw_threshold 0.7 |
| --infer_dir | infer | Directory of images to run inference on | None | one of --infer_dir and --infer_img is required |
| --infer_img | infer | Image path | None | one of --infer_dir and --infer_img is required; --infer_img has higher priority |
| --save_results | infer | Whether to save detection results to a file | False | optional |
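
The -o overrides in the table above follow a simple key=value convention. As an illustration only (this is not PaddleDetection's actual implementation), applying such overrides to a loaded config dict could look like:

```python
import ast

def apply_overrides(cfg, overrides):
    """Apply a list of 'key=value' strings on top of a config dict.

    Illustrative sketch of the -o override semantics; not the real parser.
    """
    for item in overrides:
        key, _, raw = item.partition("=")
        try:
            # Turn literals like "False" or "0.5" into Python values.
            value = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            value = raw  # keep plain strings (e.g. paths) as-is
        cfg[key] = value
    return cfg

cfg = {"use_gpu": True, "weights": None}
apply_overrides(cfg, ["use_gpu=False", "weights=output/model_final"])
print(cfg["use_gpu"])  # False
```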

Examples

Training

  • Perform evaluation in training

    export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
    python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml --eval
    

    This performs training and evaluation alternately, evaluating at the end of each epoch. Meanwhile, the best model with the highest mAP is saved at each epoch, under the same path as model_final.

    If the evaluation dataset is large, we suggest increasing snapshot_epoch in configs/runtime.yml to reduce the evaluation frequency, or evaluating after training finishes.
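
A sketch of the relevant setting in configs/runtime.yml (the value here is chosen for illustration):

```yaml
# A checkpoint is saved (and, with --eval, evaluation runs) every
# snapshot_epoch epochs; raise this value to evaluate less often.
snapshot_epoch: 2
```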

  • Fine-tuning on another task

    When using a pre-trained model to fine-tune on another task, pretrain_weights can be set directly. Parameters with different shapes will be ignored automatically. For example:

    export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
    # If the shape of parameters in program is different from pretrain_weights,
    # then PaddleDetection will not use such parameters.
    python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
                             -o pretrain_weights=output/faster_rcnn_r50_1x_coco/model_final
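
The shape-matching rule above can be sketched in Python as follows; this illustrates the behavior with plain numpy arrays and is not PaddleDetection's actual weight loader:

```python
import numpy as np

def filter_pretrain_weights(model_state, pretrain_state):
    """Keep only pretrained params whose name and shape match the model."""
    kept = {}
    for name, param in pretrain_state.items():
        if name in model_state and model_state[name].shape == param.shape:
            kept[name] = param
        # Params with a different shape (e.g. a class head sized for a
        # different number of classes) are silently skipped.
    return kept

model = {"backbone.w": np.zeros((64, 3)), "head.w": np.zeros((10, 64))}
pretrain = {"backbone.w": np.ones((64, 3)), "head.w": np.ones((81, 64))}
loaded = filter_pretrain_weights(model, pretrain)
print(sorted(loaded))  # ['backbone.w'] -- head.w dropped, shapes differ
```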
    
NOTES
  • CUDA_VISIBLE_DEVICES specifies which GPUs to use, e.g. export CUDA_VISIBLE_DEVICES=0,1,2,3.
  • The dataset will be downloaded automatically and cached in ~/.cache/paddle/dataset if it is not found locally.
  • Pretrained models are downloaded automatically and cached in ~/.cache/paddle/weights.
  • Checkpoints are saved in output by default; this can be changed via save_dir in configs/runtime.yml.

Evaluation

  • Evaluate by specified weights path and dataset path

    export CUDA_VISIBLE_DEVICES=0
    python -u tools/eval.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
                            -o weights=https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_fpn_1x_coco.pdparams
    

    The model to be evaluated can be specified either by a local path or by a link from the MODEL_ZOO.

  • Evaluate with json

    export CUDA_VISIBLE_DEVICES=0
    python tools/eval.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
               --json_eval \
               --output_eval evaluation/
    

    The json file must be named bbox.json or mask.json and placed in the evaluation/ directory.
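
As an illustration, a minimal bbox.json in the COCO detection-result format could be written like this (all ids and boxes below are made up):

```python
import json
import os

# One detection result per entry, in COCO result format:
# image_id / category_id / bbox in [x, y, width, height] / score.
results = [
    {"image_id": 570688, "category_id": 1,
     "bbox": [100.0, 50.0, 80.0, 120.0],
     "score": 0.92},
]

os.makedirs("evaluation", exist_ok=True)
with open("evaluation/bbox.json", "w") as f:
    json.dump(results, f)
print(os.path.exists("evaluation/bbox.json"))  # True
```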

Inference

  • Output to a specified directory and set the draw threshold

    export CUDA_VISIBLE_DEVICES=0
    python tools/infer.py -c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml \
                        --infer_img=demo/000000570688.jpg \
                        --output_dir=infer_output/ \
                        --draw_threshold=0.5 \
                        -o weights=output/faster_rcnn_r50_fpn_1x_coco/model_final \
                        --use_vdl=True
    

    --draw_threshold is an optional argument; the default is 0.5. Different thresholds produce different visualization results, which also depend on the outcome of NMS.
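
Conceptually, the threshold simply filters detections by score before drawing, as in this small sketch with hypothetical detections:

```python
def filter_by_threshold(detections, threshold=0.5):
    """Keep only detections whose score reaches the draw threshold."""
    return [d for d in detections if d["score"] >= threshold]

# Hypothetical post-NMS detections for one image.
dets = [{"label": "person", "score": 0.91},
        {"label": "person", "score": 0.42}]

print(len(filter_by_threshold(dets, 0.5)))  # 1
print(len(filter_by_threshold(dets, 0.3)))  # 2
```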

Deployment

Please refer to deployment

Model Compression

Please refer to slim