trtyolo-export
March 20, 2026
🔧 trtyolo-export is the official ONNX conversion tool for the TensorRT-YOLO project. It provides a simple, user-friendly command-line interface that converts already-exported YOLO-family ONNX models into TensorRT-YOLO compatible outputs. The converted ONNX files come with the required TensorRT plugins pre-registered (both official and custom plugins, covering detection, segmentation, pose estimation, OBB, and more), significantly improving model deployment efficiency.
✨ Key Features
- Comprehensive Compatibility: Supports exported ONNX models from YOLOv3 to YOLO26, as well as model families such as YOLO-World and YOLO-Master, covering object detection, instance segmentation, pose estimation, oriented object detection (OBB), and image classification. See 🖥️ Model Support List for details.
- Built-in Plugins: The converted ONNX files come with TensorRT official and custom plugins pre-integrated, fully supporting multi-task scenarios such as detection, segmentation, pose estimation, and OBB, greatly simplifying the deployment process.
- Flexible Configuration: Provides parameter options such as target opset conversion, threshold tuning, maximum detections, and optional onnxslim simplification to meet different deployment requirements.
- One-Click Conversion: A concise and intuitive command-line interface with automatic model structure detection, no complex configuration required.
📊 Performance Comparison
| Model | Official export latency (2080 Ti, TensorRT 10, FP16) | trtyolo-export latency (2080 Ti, TensorRT 10, FP16) |
|---|---|---|
| YOLO11N | 1.611 ± 0.061 | 1.428 ± 0.097 |
| YOLO11S | 2.055 ± 0.147 | 1.886 ± 0.145 |
| YOLO11M | 3.028 ± 0.167 | 2.865 ± 0.235 |
| YOLO11L | 3.856 ± 0.287 | 3.682 ± 0.309 |
| YOLO11X | 6.377 ± 0.487 | 6.195 ± 0.482 |
🎨 Quick Start
Installation
📦 Recommended Method: Install via pip
In a Python >= 3.8 environment, you can quickly install the trtyolo-export package via pip:
pip install trtyolo-export
🔧 Alternative Method: Build from Source
If you need the latest development version or want to make custom modifications, you can build from source:
# Clone the repository (if you don't have a local copy yet)
git clone https://github.com/laugh12321/TensorRT-YOLO
# Enter the project directory
cd TensorRT-YOLO
# Switch to the export branch
git checkout export
# Build and install
pip install build
python -m build
pip install dist/*.whl
Basic Usage
After installation, you can use the conversion functionality through the trtyolo-export command-line tool:
# View installed version
trtyolo-export --version
# View command help information
trtyolo-export --help
# Convert a basic ONNX model
trtyolo-export -i model.onnx -o output/model-trtyolo.onnx
If you need to query the installed version from Python:
python -c "import trtyolo_export; print(trtyolo_export.__version__)"
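If the package might not be installed, a bare import raises an error; a standard-library lookup handles that case gracefully. The helper below is our own illustration, not part of trtyolo-export:

```python
from importlib import metadata


def installed_version(dist: str = "trtyolo-export"):
    """Return the installed distribution's version string,
    or None if the package is absent from this environment."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None


print(installed_version())  # version string, or None when not installed
```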
🛠️ Parameter Description
The trtyolo-export command supports the following parameters:
| Parameter | Description | Default Value | Applicable Scenarios |
|---|---|---|---|
| `--version` | Show the installed package version and exit | - | Confirm the CLI version in the current environment |
| `--verbose`, `--quiet` | Show or hide conversion progress logs | `--verbose` | Control CLI logging verbosity |
| `-i`, `--input` | Source ONNX file path | - | Required; input must be an existing ONNX file |
| `-o`, `--output` | Converted ONNX output path | - | Required; output path must end with .onnx; if it matches the input path, -trtyolo is appended automatically |
| `--opset` | Target ONNX opset version | Preserve source opset | Convert the converted model to a specific opset before saving |
| `--max-dets` | Maximum detections | 100 | Control the output size when appending TensorRT NMS plugins |
| `--conf-thres` | Confidence threshold | 0.25 | Used by plugin-based and NMS-free postprocess outputs |
| `--iou-thres` | IoU threshold | 0.45 | Used when appending TensorRT NMS plugins |
| `-s`, `--simplify` | Run onnxslim after conversion | False | Slim the converted ONNX model after graph conversion |
Note
The input to trtyolo-export must already be an exported ONNX model. This tool converts ONNX graphs and postprocess outputs; it does not export directly from .pt, .pth, .pdmodel, or .pdiparams.
If -o/--output points to the same file as -i/--input, the tool will emit a warning and automatically rename the output to *-trtyolo.onnx to avoid overwriting the source model.
Official repositories such as YOLOv6, YOLOv7, and YOLOv9 already provide ONNX export functionality with EfficientNMS plugins. If the official output already matches your deployment requirement, an additional conversion step may be unnecessary.
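The output-path safeguard described above can be sketched in a few lines. resolve_output_path is a hypothetical helper that mirrors the documented behavior, not the tool's actual implementation:

```python
from pathlib import Path


def resolve_output_path(input_path: str, output_path: str) -> Path:
    """Mimic the documented collision rule: the output must end with
    .onnx, and if it resolves to the same file as the input, append
    '-trtyolo' so the source model is never overwritten."""
    inp, out = Path(input_path).resolve(), Path(output_path).resolve()
    if out.suffix != ".onnx":
        raise ValueError("output path must end with .onnx")
    if inp == out:
        out = out.with_name(out.stem + "-trtyolo.onnx")
    return out


print(resolve_output_path("yolo11n-pose.onnx", "yolo11n-pose.onnx").name)
# -> yolo11n-pose-trtyolo.onnx
```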
📖 Usage Examples
Conversion Examples
# Convert a basic ONNX model
trtyolo-export -i yolov8s.onnx -o output/yolov8s-trtyolo.onnx
# Convert with a target opset
trtyolo-export -i yolov10s.onnx -o output/yolov10s-trtyolo.onnx --opset 12
# Tune TensorRT NMS plugin parameters
trtyolo-export -i yolo11n-obb.onnx -o output/yolo11n-obb-trtyolo.onnx --max-dets 100 --iou-thres 0.45 --conf-thres 0.25
# Simplify the converted ONNX with onnxslim
trtyolo-export -i yolo12n-seg.onnx -o output/yolo12n-seg-trtyolo.onnx -s
# Convert quietly
trtyolo-export --quiet -i model.onnx -o output/model-trtyolo.onnx
# Avoid overwriting the source file
# If -o and -i are the same path, the actual output becomes yolo11n-pose-trtyolo.onnx
trtyolo-export -i yolo11n-pose.onnx -o yolo11n-pose.onnx
TensorRT Engine Construction
The converted ONNX model can be further built into a TensorRT engine using the trtexec tool for optimal inference performance:
# Static batch
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
# Dynamic batch
trtexec --onnx=model.onnx --saveEngine=model.engine --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640 --fp16
# ! Note: For segmentation, pose estimation, and OBB models, you must pass the --staticPlugins and --setPluginsToSerialize parameters so that the custom plugins compiled by the project are loaded correctly
# YOLOv8-OBB static batch
trtexec --onnx=yolov8n-obb.onnx --saveEngine=yolov8n-obb.engine --fp16 --staticPlugins=/your/tensorrt-yolo/install/dir/lib/libcustom_plugins.so --setPluginsToSerialize=/your/tensorrt-yolo/install/dir/lib/libcustom_plugins.so
# YOLO11-OBB dynamic batch
trtexec --onnx=yolo11n-obb.onnx --saveEngine=yolo11n-obb.engine --fp16 --minShapes=images:1x3x640x640 --optShapes=images:4x3x640x640 --maxShapes=images:8x3x640x640 --staticPlugins=/your/tensorrt-yolo/install/dir/lib/custom_plugins.dll --setPluginsToSerialize=/your/tensorrt-yolo/install/dir/lib/custom_plugins.dll
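The NxCxHxW syntax used by --minShapes/--optShapes/--maxShapes above can also be generated programmatically. trtexec_shape_flags is a hypothetical convenience helper, not part of trtexec:

```python
def trtexec_shape_flags(input_name, chw, min_batch, opt_batch, max_batch):
    """Build trtexec dynamic-shape arguments for a single image input.

    chw is the fixed (channels, height, width) part; only the batch
    dimension varies between the min/opt/max profiles.
    """
    c, h, w = chw
    shape = lambda n: f"{input_name}:{n}x{c}x{h}x{w}"
    return [
        f"--minShapes={shape(min_batch)}",
        f"--optShapes={shape(opt_batch)}",
        f"--maxShapes={shape(max_batch)}",
    ]


# Reproduces the flags used in the dynamic-batch examples above
print(" ".join(trtexec_shape_flags("images", (3, 640, 640), 1, 4, 8)))
```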
🔍 Conversion Structure
The converted ONNX model structure is optimized for TensorRT inference and integrates the corresponding plugins (official and custom) for each task type.
🖥️ Model Support List
| YOLO Series | Source Repo | Detect | Segment | Classify | Pose | OBB |
|---|---|---|---|---|---|---|
| YOLOv3 | ultralytics/yolov3 | ✅ | ✅ | ✅ | - | - |
| YOLOv3 | ultralytics/ultralytics | ✅ | - | - | - | - |
| YOLOv5 | ultralytics/yolov5 | ✅ | ✅ | ✅ | - | - |
| YOLOv5 | ultralytics/ultralytics | ✅ | - | - | - | - |
| YOLOv6 | meituan/YOLOv6 | 🟢 | - | - | - | - |
| YOLOv6 | ultralytics/ultralytics | ✅ | - | - | - | - |
| YOLOv7 | WongKinYiu/yolov7 | 🟢 | - | - | - | - |
| YOLOv8 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv9 | WongKinYiu/yolov9 | 🟢 | ✅ | - | - | - |
| YOLOv9 | ultralytics/ultralytics | ✅ | ✅ | - | - | - |
| YOLOv10 | THU-MIG/yolov10 | ✅ | - | - | - | - |
| YOLOv10 | ultralytics/ultralytics | ✅ | - | - | - | - |
| YOLO11 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLO12 | sunsmarterjie/yolov12 | ✅ | ✅ | ✅ | - | - |
| YOLO12 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLO13 | iMoonLab/yolov13 | ✅ | - | - | - | - |
| YOLO26 | ultralytics/ultralytics | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLO-World | ultralytics/ultralytics | ✅ | - | - | - | - |
| YOLOE | THU-MIG/yoloe | ✅ | ✅ | - | - | - |
| YOLOE | ultralytics/ultralytics | ✅ | ✅ | - | - | - |
| YOLO-Master | isLinXu/YOLO-Master | ✅ | ✅ | ✅ | - | - |
Symbol Explanation:
✅ means trtyolo-export can convert the model and the result can be used for inference | 🟢 means the repository's own export can be used directly for inference | - means the task is not provided | ❌ means not supported
❓ Frequently Asked Questions
1. Why do some models require referring to official export tutorials?
Official repositories for models like YOLOv6, YOLOv7, and YOLOv9 already provide ONNX export functionality with EfficientNMS plugins. If those official ONNX files already satisfy your deployment requirements, you may not need an extra conversion step.
2. Which conversion parameters should be adjusted first?
- `--max-dets` limits the number of final detections produced by appended TensorRT NMS plugins
- `--conf-thres` filters low-confidence predictions in plugin-based and NMS-free outputs
- `--iou-thres` controls overlap suppression when TensorRT NMS plugins are appended
- `-s`, `--simplify` runs onnxslim; if your downstream toolchain is sensitive, retry without it
- `--opset` is only needed when your downstream runtime requires a specific ONNX opset version
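How the three thresholds interact can be illustrated with a plain greedy NMS over axis-aligned boxes. This is a didactic sketch of the general technique, not the TensorRT NMS plugin's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def postprocess(dets, conf_thres=0.25, iou_thres=0.45, max_dets=100):
    """dets: list of (box, score) pairs. The three keyword arguments
    mirror the roles of --conf-thres, --iou-thres and --max-dets."""
    kept = []
    # Drop low-confidence predictions, then visit the rest best-first
    for box, score in sorted(
        (d for d in dets if d[1] >= conf_thres), key=lambda d: -d[1]
    ):
        # Keep a box only if it doesn't overlap a kept box too much
        if all(iou(box, kb) < iou_thres for kb, _ in kept):
            kept.append((box, score))
        if len(kept) == max_dets:  # cap the output size
            break
    return kept


dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.1)]
print(len(postprocess(dets)))  # -> 1 (duplicate suppressed, low score filtered)
```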
3. What to do if errors are encountered during the conversion process?
- Ensure that the correct versions of dependent libraries are installed in your environment
- Check that the input ONNX file exists and the output path ends with .onnx
- Confirm that the exported ONNX graph you are using is in the support list
- If opset conversion fails, retry without --opset or choose a compatible version
- For models with custom graph modifications, ensure the exported ONNX structure still matches one of the supported conversion patterns