DIS

December 3, 2025

DIS (Direct Image Supersampling) is a lightweight image super-resolution architecture optimized for speed and real-time inference. It supports PyTorch, ONNX, and TensorRT.

This repository contains the inference and ONNX conversion code. To train a model, use traiNNer-redux.

Getting Started

  1. Clone the repository:

    git clone https://github.com/Kim2091/DIS
    
  2. Install PyTorch with CUDA: Follow the instructions at pytorch.org.

  3. Install required packages:

    pip install -r requirements.txt
    

Model Variants

| Variant | Parameters (2x) | Description |
|---|---|---|
| DIS_Balanced | ~269K | Balance of speed and quality |
| DIS_Fast | ~195K | Fastest, recommended |
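The parameter counts above can be reproduced for any PyTorch checkpoint by summing tensor sizes. A minimal sketch; the tiny Conv2d below stands in for an actual DIS model, which is an assumption for illustration:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Stand-in module; loading a real DIS_Fast model here should report ~195K.
tiny = nn.Conv2d(3, 3, kernel_size=3)
print(count_parameters(tiny))  # 3*3*3*3 weights + 3 biases = 84
```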

Benchmarks

Configuration: 2x upscale, 720p, FP16 with TensorRT, 2 streams

| Model | FPS | PSNR (BHI100) | SSIM (BHI100) | Notes |
|---|---|---|---|---|
| DIS_Balanced | 100 | 27.44 | 0.898 | Slightly behind Compact, faster |
| DIS_Fast | 137 | 27.27 | 0.895 | On par with ArtCNN R8F48, 2x faster |
| ArtCNN R8F48 | 86 | 27.25 | 0.897 | Reference model |
| Compact | 78 | 27.59 | 0.90 | Reference model |
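The PSNR figures in the table use the standard definition (10·log10 of squared peak over mean squared error). A minimal sketch assuming 8-bit images; the `psnr` helper is illustrative and not part of this repo:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```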

Usage

Command-Line Usage

Image upscaling (PyTorch):

python inference.py --input lr.png --output sr.png --scale 4 --fp16

ONNX inference:

python inference.py --input lr.png --output sr.png --model model.onnx --backend onnx
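ONNX image models generally take a float NCHW tensor; whether this repo's models use exactly this convention is an assumption, but a typical preprocessing step for the command above looks like:

```python
import numpy as np

def to_nchw(img_hwc_uint8: np.ndarray) -> np.ndarray:
    """HWC uint8 image -> 1x3xHxW float32 tensor in [0, 1]."""
    x = img_hwc_uint8.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension -> NCHW

# The resulting array would then be fed to the ONNX runtime session,
# e.g. as {"input": x}, assuming the model's input is named "input".
```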

Benchmark:

python inference.py --benchmark --scale 4 --fp16
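The benchmark mode reports FPS. The repo's exact measurement loop isn't shown here, but FPS is simply frames divided by wall-clock time; a minimal sketch with a dummy workload standing in for the model:

```python
import time
import numpy as np

def measure_fps(run_frame, n_frames: int = 50) -> float:
    """Average frames per second over n_frames calls of run_frame()."""
    start = time.perf_counter()
    for _ in range(n_frames):
        run_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Dummy "inference" on a 720p-sized array; a real benchmark would call the model.
frame = np.empty((720, 1280, 3), dtype=np.float32)
fps = measure_fps(lambda: frame * 0.5)
```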

Tools

Utility scripts are located in the tools/ directory.

Convert PyTorch model to ONNX:

python tools/export_onnx.py --model pretrained_models/model.pth --output model.onnx
  • --dynamic: Create a model that supports various input sizes.
  • --fp16: Convert the model to FP16 for a speed boost.

TensorRT

The easiest way to use this model with TensorRT is through Vapourkit or VideoJaNai.

Alternatively, you can build a TensorRT engine from the exported ONNX model manually:

# Dynamic shapes
trtexec --onnx=model_fp16.onnx \
    --minShapes=input:1x3x64x64 \
    --optShapes=input:1x3x256x256 \
    --maxShapes=input:1x3x1024x1024 \
    --saveEngine=model_dynamic.engine \
    --fp16