May 7, 2026

# trackers

Plug-and-play multi-object tracking for any detection model.


Keeping track of objects across video frames is one of those problems that sounds simple until you try it — occlusions, fast motion, similar-looking targets, and moving cameras all conspire against you. `trackers` gives you clean, benchmarked implementations of SORT, ByteTrack, OC-SORT, and BoT-SORT so you can skip the plumbing and focus on your application. It speaks `supervision.Detections` natively, which means it slots into any detector you already use — YOLO, DETR, RT-DETR, or anything else — without glue code. Whether you are a researcher comparing algorithms, an engineer shipping a production pipeline, or a hobbyist building something cool, you get a single consistent interface for all of them. Requires Python ≥ 3.10.

## Why trackers?

- **Clean-room implementations.** Every algorithm is re-implemented from the original paper — not a thin wrapper around someone else's code. You can read it, understand it, and modify it.
- **Detector-agnostic.** Works with YOLO, DETR, RT-DETR, or any model that produces bounding boxes. No inference library required or assumed.
- **`supervision.Detections` native.** Plugs directly into the supervision ecosystem. Pass detections in, get tracked detections back — zero glue code.
- **Benchmarked across four datasets.** MOT17, SportsMOT, SoccerNet, and DanceTrack — at default parameters and after hyperparameter tuning, so you know what to expect before you deploy.
- **Tunable out of the box.** Built-in Optuna-based hyperparameter search via `trackers tune` lets you optimize for your specific scene and detector.
- **Camera motion compensation.** BoT-SORT handles moving cameras natively, keeping track IDs stable even when the whole frame shifts.

## Install

```bash
pip install trackers
```

To install from source:

```bash
pip install git+https://github.com/roboflow/trackers.git
```

For more options, see the install guide.

Watch: Building Real-Time Multi-Object Tracking with RF-DETR and Trackers

## Quick Start

Add tracking to your existing detection pipeline in a few lines. Every tracker shares the same `update(detections, image)` interface, so switching algorithms later is a one-line change. The example below uses `inference` as the detector — swap it for any detector that returns `supervision.Detections`.

```python
import cv2
import supervision as sv
from inference import get_model
from trackers import ByteTrackTracker

model = get_model(model_id="rfdetr-medium")
tracker = ByteTrackTracker()

cap = cv2.VideoCapture("video.mp4")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Detect on the current frame, then associate boxes with persistent track IDs.
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    tracked = tracker.update(detections, frame)

cap.release()
```

For more examples, see the tracking guide.

## Track from CLI

Prefer the terminal? Point trackers track at a video, webcam feed, RTSP stream, or image directory and it handles detection, tracking, and annotated output in one command — no Python script required.

```bash
trackers track \
    --source video.mp4 \
    --output output.mp4 \
    --model rfdetr-medium \
    --tracker bytetrack \
    --show-labels \
    --show-trajectories
```

For all CLI options, see the tracking guide.

## Algorithms

Each tracker below is a faithful implementation of its original paper. Pick the one that fits your scene, or run the benchmark to find out which performs best on your data.

| Algorithm | Description | MOT17 HOTA | SportsMOT HOTA | SoccerNet HOTA | DanceTrack HOTA |
| --- | --- | --- | --- | --- | --- |
| SORT | Kalman filter + Hungarian matching baseline. | 58.4 | 70.9 | 81.6 | 45.0 |
| ByteTrack | Two-stage association using high- and low-confidence detections. | 60.1 | 73.0 | 84.0 | 50.2 |
| OC-SORT | Observation-centric recovery for lost tracks. | 61.9 | 71.7 | 78.4 | 51.8 |
| BoT-SORT | Camera motion compensation for moving cameras. | 63.7 | 73.8 | 84.5 | 50.5 |

All scores use default parameters on the standard split. See the tracker comparison for tuned numbers and methodology.

## Evaluate

Once you have tracking results, you want to know how good they are. trackers eval computes CLEAR, HOTA, and Identity metrics against ground-truth annotations and prints a per-sequence breakdown alongside the combined score.
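For intuition on what one of these scores measures: CLEAR's MOTA folds misses, false positives, and identity switches into a single number, MOTA = 1 − (FN + FP + IDSW) / GT. A toy calculation with invented counts:

```python
def mota(fn: int, fp: int, idsw: int, gt: int) -> float:
    """CLEAR MOTA: 1 - (misses + false positives + ID switches) / ground-truth boxes."""
    return 1.0 - (fn + fp + idsw) / gt

# Toy sequence: 1000 ground-truth boxes, 120 misses, 80 false positives, 15 switches.
print(round(mota(fn=120, fp=80, idsw=15, gt=1000), 3))  # 0.785
```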

```bash
trackers eval \
    --gt-dir ./data/mot17/val \
    --tracker-dir results \
    --metrics CLEAR HOTA Identity \
    --columns MOTA HOTA IDF1
```

```
Sequence                        MOTA    HOTA    IDF1
----------------------------------------------------
MOT17-02-FRCNN                30.192  35.475  38.515
MOT17-04-FRCNN                48.912  55.096  61.854
MOT17-05-FRCNN                52.755  45.515  55.705
MOT17-09-FRCNN                51.441  50.108  57.038
MOT17-10-FRCNN                51.832  49.648  55.797
MOT17-11-FRCNN                55.501  49.401  55.061
MOT17-13-FRCNN                60.488  58.651  69.884
----------------------------------------------------
COMBINED                      47.406  50.355  56.600
```

For the full evaluation workflow, see the evaluation guide.

## Download Datasets

Need benchmark data to evaluate against? trackers download pulls MOT17, SportsMOT, and other supported datasets with a single command, handling splits and assets selectively so you only download what you need.

```bash
trackers download mot17 \
    --split val \
    --asset annotations,detections
```

| Dataset | Description | Splits | Assets | License |
| --- | --- | --- | --- | --- |
| `mot17` | Pedestrian tracking with crowded scenes and frequent occlusions. | train, val, test | frames, annotations, detections | CC BY-NC-SA 3.0 |
| `sportsmot` | Sports broadcast tracking with fast motion and similar-looking targets. | train, val, test | frames, annotations | CC BY 4.0 |

For more download options, see the download guide.

## Try It

Want to see it in action before writing any code? Try trackers in your browser with our Hugging Face Playground — no install required.

## Contributing

We welcome contributions. Read our contributor guidelines to get started.

## License

The code is released under the Apache 2.0 license.