trtyolo-export

March 20, 2026




🔧 trtyolo-export is the official ONNX conversion tool for the TensorRT-YOLO project. It provides a simple, user-friendly command-line interface for converting already-exported YOLO-family ONNX models into TensorRT-YOLO compatible outputs. The converted ONNX files come with the required TensorRT plugins pre-registered (both official and custom plugins, covering detection, segmentation, pose estimation, OBB, etc.), significantly improving model deployment efficiency.

✨ Key Features

  • Comprehensive Compatibility: Supports exported ONNX models from YOLOv3 to YOLO26, as well as model families such as YOLO-World and YOLO-Master, covering object detection, instance segmentation, pose estimation, oriented object detection (OBB), and image classification. See 🖥️ Model Support List for details.
  • Built-in Plugins: The converted ONNX files have pre-integrated TensorRT official plugins and custom plugins, fully supporting multi-task scenarios such as detection, segmentation, pose estimation, and OBB, greatly simplifying the deployment process.
  • Flexible Configuration: Provides parameter options such as target opset conversion, threshold tuning, maximum detections, and optional onnxslim simplification to meet different deployment requirements.
  • One-Click Conversion: A concise and intuitive command-line interface with automatic model structure detection, no complex configuration required.

🚀 Performance Comparison

Model    | Official export latency (ms, 2080Ti TensorRT10 FP16) | trtyolo-export latency (ms, 2080Ti TensorRT10 FP16)
YOLOv11N | 1.611 ± 0.061 | 1.428 ± 0.097
YOLOv11S | 2.055 ± 0.147 | 1.886 ± 0.145
YOLOv11M | 3.028 ± 0.167 | 2.865 ± 0.235
YOLOv11L | 3.856 ± 0.287 | 3.682 ± 0.309
YOLOv11X | 6.377 ± 0.487 | 6.195 ± 0.482

💨 Quick Start

Installation

In a Python>=3.8 environment, you can quickly install the trtyolo-export package via pip:

pip install trtyolo-export

🔧 Alternative Method: Build from Source

If you need the latest development version or want to make custom modifications, you can build from source:

# Clone the repository (if you don't have a local copy yet)
git clone https://github.com/laugh12321/TensorRT-YOLO

# Enter the project directory (skip this if you're already there)
cd TensorRT-YOLO

# Switch to the export branch
git checkout export

# Build and install
pip install build
python -m build
pip install dist/*.whl

Basic Usage

After installation, you can use the conversion functionality through the trtyolo-export command-line tool:

# View installed version
trtyolo-export --version

# View command help information
trtyolo-export --help

# Convert a basic ONNX model
trtyolo-export -i model.onnx -o output/model-trtyolo.onnx

If you need to query the installed version from Python:

python -c "import trtyolo_export; print(trtyolo_export.__version__)"

๐Ÿ› ๏ธ Parameter Description

The trtyolo-export command supports the following parameters:

Parameter | Description | Default Value | Applicable Scenarios
--version | Show the installed package version and exit | - | Confirm the CLI version in the current environment
--verbose, --quiet | Show or hide conversion progress logs | --verbose | Control CLI logging verbosity
-i, --input | Source ONNX file path | - | Required; input must be an existing ONNX file
-o, --output | Converted ONNX output path | - | Required; output path must end with .onnx; if it matches the input path, -trtyolo is appended automatically
--opset | Target ONNX opset version | Preserve source opset | Convert the model to a specific opset before saving
--max-dets | Maximum detections | 100 | Control the output size when appending TensorRT NMS plugins
--conf-thres | Confidence threshold | 0.25 | Used by plugin-based and NMS-free postprocess outputs
--iou-thres | IoU threshold | 0.45 | Used when appending TensorRT NMS plugins
-s, --simplify | Run onnxslim after conversion | False | Slim the converted ONNX model after graph conversion

Note

The input to trtyolo-export must already be an exported ONNX model. This tool converts ONNX graphs and postprocess outputs; it does not export directly from .pt, .pth, .pdmodel, or .pdiparams.

If -o/--output points to the same file as -i/--input, the tool will emit a warning and automatically rename the output to *-trtyolo.onnx to avoid overwriting the source model.

Official repositories such as YOLOv6, YOLOv7, and YOLOv9 already provide ONNX export functionality with EfficientNMS plugins. If the official output already matches your deployment requirement, an additional conversion step may be unnecessary.

๐Ÿ“ Usage Examples

Conversion Examples

# Convert a basic ONNX model
trtyolo-export -i yolov8s.onnx -o output/yolov8s-trtyolo.onnx

# Convert with a target opset
trtyolo-export -i yolov10s.onnx -o output/yolov10s-trtyolo.onnx --opset 12

# Tune TensorRT NMS plugin parameters
trtyolo-export -i yolo11n-obb.onnx -o output/yolo11n-obb-trtyolo.onnx --max-dets 100 --iou-thres 0.45 --conf-thres 0.25

# Simplify the converted ONNX with onnxslim
trtyolo-export -i yolo12n-seg.onnx -o output/yolo12n-seg-trtyolo.onnx -s

# Convert quietly
trtyolo-export --quiet -i model.onnx -o output/model-trtyolo.onnx

# Avoid overwriting the source file
# If -o and -i are the same path, the actual output becomes yolo11n-pose-trtyolo.onnx
trtyolo-export -i yolo11n-pose.onnx -o yolo11n-pose.onnx
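
To convert many models in one go, the CLI can also be driven from a small Python script. A minimal sketch, assuming hypothetical models/ and output/ directories; it uses only the documented -i and -o flags:

import subprocess
from pathlib import Path

# Batch-convert every ONNX file in models/ through the trtyolo-export CLI.
Path("output").mkdir(exist_ok=True)
for onnx_path in Path("models").glob("*.onnx"):
    out = Path("output") / f"{onnx_path.stem}-trtyolo.onnx"
    subprocess.run(["trtyolo-export", "-i", str(onnx_path), "-o", str(out)], check=True)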

TensorRT Engine Construction

The converted ONNX model can be further built into a TensorRT engine using the trtexec tool for optimal inference performance:

# Static batch
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16

# Dynamic batch
trtexec --onnx=model.onnx --saveEngine=model.engine --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640 --fp16

# ! Note: For segmentation, pose estimation, and OBB models, specify the --staticPlugins and --setPluginsToSerialize options so that the custom plugins compiled by the project are loaded and serialized correctly

# YOLOv8-OBB static batch
trtexec --onnx=yolov8n-obb.onnx --saveEngine=yolov8n-obb.engine --fp16 --staticPlugins=/your/tensorrt-yolo/install/dir/lib/libcustom_plugins.so --setPluginsToSerialize=/your/tensorrt-yolo/install/dir/lib/libcustom_plugins.so

# YOLO11-OBB dynamic batch
trtexec --onnx=yolo11n-obb.onnx --saveEngine=yolo11n-obb.engine --fp16 --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640 --staticPlugins=/your/tensorrt-yolo/install/dir/lib/custom_plugins.dll --setPluginsToSerialize=/your/tensorrt-yolo/install/dir/lib/custom_plugins.dll
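
When deserializing such an engine from Python, the custom plugin library must be loaded into the process first. A minimal sketch using the standard TensorRT Python API; the library path is a placeholder, as in the trtexec examples above:

import ctypes
import tensorrt as trt

# Load the project's compiled custom plugin library before touching the engine;
# the path is a placeholder -- point it at your actual install directory.
ctypes.CDLL("/your/tensorrt-yolo/install/dir/lib/libcustom_plugins.so")

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # register official and loaded plugin creators

# Deserialize the engine built by trtexec above.
with open("yolov8n-obb.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())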

📊 Conversion Structure

The converted ONNX model structure is optimized for TensorRT inference and integrates the corresponding plugins (official and custom) for each task type.
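
To confirm that the expected plugin nodes were appended, the converted graph can be inspected with the onnx Python package. A minimal sketch, assuming the conventional "_TRT" suffix used by TensorRT plugin op types (e.g. EfficientNMS_TRT) and a placeholder file name:

import onnx

# Print every TensorRT plugin node in the converted graph; plugin op types
# conventionally end with "_TRT" (e.g. EfficientNMS_TRT).
model = onnx.load("output/model-trtyolo.onnx")
for node in model.graph.node:
    if node.op_type.endswith("_TRT"):
        print(node.op_type, node.name)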

๐Ÿ–ฅ๏ธ Model Support List

YOLO Series | Source Repo | Detect | Segment | Classify | Pose | OBB
YOLOv3 | ultralytics/yolov3 | ✅ | ✅ | ✅ | - | -
YOLOv3 | ultralytics/ultralytics | ✅ | - | - | - | -
YOLOv5 | ultralytics/yolov5 | ✅ | ✅ | ✅ | - | -
YOLOv5 | ultralytics/ultralytics | ✅ | - | - | - | -
YOLOv6 | meituan/YOLOv6 | 🟢 | - | - | - | -
YOLOv6 | ultralytics/ultralytics | ✅ | - | - | - | -
YOLOv7 | WongKinYiu/yolov7 | 🟢 | - | - | - | -
YOLOv8 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅
YOLOv9 | WongKinYiu/yolov9 | 🟢 | ✅ | - | - | -
YOLOv9 | ultralytics/ultralytics | ✅ | ✅ | - | - | -
YOLOv10 | THU-MIG/yolov10 | ✅ | - | - | - | -
YOLOv10 | ultralytics/ultralytics | ✅ | - | - | - | -
YOLO11 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅
YOLO12 | sunsmarterjie/yolov12 | ✅ | ✅ | ✅ | - | -
YOLO12 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅
YOLO13 | iMoonLab/yolov13 | ✅ | - | - | - | -
YOLO26 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅
YOLO-World | ultralytics/ultralytics | ✅ | - | - | - | -
YOLOE | THU-MIG/yoloe | ✅ | ✅ | - | - | -
YOLOE | ultralytics/ultralytics | ✅ | ✅ | - | - | -
YOLO-Master | isLinXu/YOLO-Master | ✅ | ✅ | ✅ | - | -

Symbol Explanation: ✅ means trtyolo-export can convert the model and run inference | 🟢 means the official repository's own export can be used directly for inference | - means this task is not provided | ❎ means not supported

โ“ Frequently Asked Questions

1. Why do some models require referring to official export tutorials?

Official repositories for models like YOLOv6, YOLOv7, and YOLOv9 already provide ONNX export functionality with EfficientNMS plugins. If those official ONNX files already satisfy your deployment requirements, you may not need an extra conversion step.

2. Which conversion parameters should be adjusted first?

  • --max-dets limits the number of final detections produced by appended TensorRT NMS plugins
  • --conf-thres filters low-confidence predictions in plugin-based and NMS-free outputs
  • --iou-thres controls overlap suppression when TensorRT NMS plugins are appended
  • -s, --simplify runs onnxslim; if your downstream toolchain is sensitive, retry without it
  • --opset is only needed when your downstream runtime requires a specific ONNX opset version

3. What to do if errors are encountered during the conversion process?

  • Ensure that the correct version of dependent libraries is installed in your environment
  • Check that the input ONNX file exists and the output path ends with .onnx (a quick pre-flight check is sketched after this list)
  • Confirm that the exported ONNX graph you are using is in the support list
  • If opset conversion fails, retry without --opset or choose a compatible version
  • For models with custom graph modifications, ensure the exported ONNX structure still matches one of the supported conversion patterns
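
As that pre-flight check, the source model can be validated with the onnx Python package before conversion. A minimal sketch with a placeholder file name:

import onnx

# Validate the source graph and report its opset(s) before running trtyolo-export.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)
print("opsets:", [(op.domain or "ai.onnx", op.version) for op in model.opset_import])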