README.md


English | 简体中文

RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions

Lingdong Kong1,2   Shaoyuan Xie3   Hanjiang Hu4   Lai Xing Ng2,5   Benoit R. Cottereau2,6   Wei Tsang Ooi1,2
1National University of Singapore    2CNRS@CREATE    3University of California, Irvine    4Carnegie Mellon University    5Institute for Infocomm Research, A*STAR    6CNRS

About

RoboDepth is a comprehensive evaluation benchmark designed for probing the robustness of monocular depth estimation algorithms. It includes 18 common corruption types spanning adverse weather and lighting conditions, sensor failure and movement, and noise introduced during data processing.
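
For reference, the corruption names used in the benchmark tables below can be grouped along those three categories. The grouping in this sketch is indicative only (it paraphrases the sentence above); the authoritative list and naming live in the benchmark tables and CREATE.md.

```python
# Indicative grouping of the 18 corruption types. The names follow the
# benchmark table columns below; the category boundaries paraphrase the
# description above and are not an official taxonomy.
CORRUPTIONS = {
    "weather_and_lighting": [
        "brightness", "dark", "fog", "frost", "snow", "contrast",
    ],
    "sensor_failure_and_movement": [
        "defocus_blur", "glass_blur", "motion_blur",
        "zoom_blur", "elastic_transform", "color_quant",
    ],
    "data_processing": [
        "gaussian_noise", "impulse_noise", "shot_noise",
        "iso_noise", "pixelate", "jpeg_compression",
    ],
}
assert sum(len(v) for v in CORRUPTIONS.values()) == 18
```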

:books: Citation

If you find this work helpful, please kindly consider citing our papers:

@inproceedings{kong2023robodepth,
    title     = {{RoboDepth}: Robust Out-of-Distribution Depth Estimation under Corruptions},
    author    = {Kong, Lingdong and Xie, Shaoyuan and Hu, Hanjiang and Ng, Lai Xing and Cottereau, Benoit R. and Ooi, Wei Tsang},
    booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
    volume    = {36},
    pages     = {21298-21342},
    year      = {2023}
}
@article{kong2023robodepth_challenge,
    title     = {The {RoboDepth} Challenge: Methods and Advancements Towards Robust Depth Estimation},
    author    = {Kong, Lingdong and Niu, Yaru and Xie, Shaoyuan and Hu, Hanjiang and Ng, Lai Xing and Cottereau, Benoit and Zhao, Ding and Zhang, Liangjun and Wang, Hesheng and Ooi, Wei Tsang and Zhu, Ruijie and Song, Ziyang and Liu, Li and Zhang, Tianzhu and Yu, Jun and Jing, Mohan and Li, Pengwei and Qi, Xiaohua and Jin, Cheng and Chen, Yingfeng and Hou, Jie and Zhang, Jie and Kan, Zhen and Lin, Qiang and Peng, Liang and Li, Minglei and Xu, Di and Yang, Changpeng and Yao, Yuanqi and Wu, Gang and Kuai, Jian and Liu, Xianming and Jiang, Junjun and Huang, Jiamian and Li, Baojun and Chen, Jiale and Zhang, Shuang and Ao, Sun and Li, Zhenyu and Chen, Runze and Luo, Haiyong and Zhao, Fang and Yu, Jingze},
    journal   = {arXiv preprint arXiv:2307.15061}, 
    year      = {2023}
}
@misc{kong2023robodepth_benchmark,
    title     = {The {RoboDepth} Benchmark for Robust Out-of-Distribution Depth Estimation under Corruptions},
    author    = {Kong, Lingdong and Xie, Shaoyuan and Hu, Hanjiang and Cottereau, Benoit and Ng, Lai Xing and Ooi, Wei Tsang},
    howpublished = {\url{https://github.com/ldkong1205/RoboDepth}}, 
    year      = {2023}
}

Updates

  • [2024.01] - The toolkit tailored for the RoboDrive Challenge has been released. :hammer_and_wrench:
  • [2023.12] - We are hosting the RoboDrive Challenge at ICRA 2024. :blue_car:
  • [2023.09] - RoboDepth was accepted to NeurIPS 2023 Track on Datasets and Benchmarks! :tada:
  • [2023.08] - We support robust depth estimation in real-world scenarios, including nuScenes, nuScenes-Night, Cityscapes, and Foggy-Cityscapes. See here for more details.
  • [2023.08] - We establish the nuScenes-C benchmark for robust multi-view depth estimation. See here for more details.
  • [2023.07] - The technical report of the RoboDepth Challenge is available on arXiv.
  • [2023.06] - We have successfully concluded the RoboDepth Challenge! Key statistics of this competition: 226 teams registered at CodaLab, 66 of which made a total of 1,137 valid submissions. More details are included in these slides. We thank our participants for their exceptional support! :heart:
  • [2023.06] - We are glad to announce the winning teams of this competition:
    • Track 1: :1st_place_medal: OpenSpaceAI, :2nd_place_medal: USTC-IAT-United, :3rd_place_medal: YYQ.
    • Track 2: :1st_place_medal: USTCxNetEaseFuxi, :2nd_place_medal: OpenSpaceAI, :3rd_place_medal: GANCV.
    • Innovation Prize: :medal_military: Scent-Depth, :medal_military: Ensemble, :medal_military: AIIA-RDepth.
  • [2023.06] - The video recordings of the RoboDepth Workshop are out. Learn more about how our participants worked to improve the robustness of depth estimation models. :movie_camera:
  • [2023.05] - Glad to announce that the RoboDepth Challenge will be sponsored by Baidu Research. :beers:
  • [2023.01] - The NYUDepth2-C dataset is ready to be downloaded! See here for more details.
  • [2023.01] - Evaluation server for Track 2 (fully-supervised depth estimation) is available on this page.
  • [2023.01] - Evaluation server for Track 1 (self-supervised depth estimation) is available on this page.
  • [2022.11] - We are organizing the RoboDepth Challenge at ICRA 2023. Join the challenge today! :raising_hand:
  • [2022.11] - The KITTI-C dataset is ready to be downloaded! See here for more details.

Outline

  • Installation
  • Data Preparation
  • Getting Started
  • Model Zoo
  • Benchmark
  • Create Corruption Sets
  • TODO List
  • License
  • Acknowledgements

Installation

Kindly refer to INSTALL.md for the installation details.

Data Preparation

Our datasets are hosted by OpenDataLab.


OpenDataLab is a pioneering open data platform for the large AI model era, making datasets accessible. By using OpenDataLab, researchers can obtain free formatted datasets in various fields.

The RoboDepth Benchmark

Kindly refer to DATA_PREPARE.md for the details to prepare the KITTI, KITTI-C, NYUDepth2, NYUDepth2-C, Cityscapes, Foggy-Cityscapes, nuScenes, and nuScenes-C datasets.

Competition @ ICRA 2023

Kindly refer to this page for the details to prepare the training and evaluation data associated with the 1st RoboDepth Competition at the 40th IEEE Conference on Robotics and Automation (ICRA 2023).

Getting Started

Kindly refer to GET_STARTED.md to learn more usage about this codebase.

Model Zoo

:oncoming_automobile: - Outdoor Depth Estimation

  • Self-Supervised Depth Estimation
  • Self-Supervised Multi-View Depth Estimation
  • Fully-Supervised Depth Estimation
  • Semi-Supervised Depth Estimation

:house: - Indoor Depth Estimation

  • Self-Supervised Depth Estimation
  • Fully-Supervised Depth Estimation
  • Semi-Supervised Depth Estimation

Benchmark

:bar_chart: Metrics: The following metrics are consistently used in our benchmark:

  • Absolute Relative Difference (the lower the better): $\text{Abs Rel} = \frac{1}{|D|}\sum_{pred\in D}\frac{|gt - pred|}{gt}$.

  • Accuracy (the higher the better): $\delta_t = \frac{1}{|D|}\,|\{\,pred\in D \mid \max(\frac{gt}{pred}, \frac{pred}{gt}) < 1.25^t\,\}| \times 100\%$.

  • Depth Estimation Error (the lower the better):

    • $\text{DEE}_1 = \text{Abs Rel} - \delta_1 + 1$;
    • $\text{DEE}_2 = \frac{\text{Abs Rel} - \delta_1 + 1}{2}$;
    • $\text{DEE}_3 = \frac{\text{Abs Rel}}{\delta_1}$.
  • The second Depth Estimation Error term ($\text{DEE}_2$) is adopted as the main indicator for evaluating model performance in our RoboDepth benchmark. The following two metrics are adopted to compare models' robustness (a computation sketch follows this list):

    • mCE (the lower the better): The average corruption error (in percentage) of a candidate model relative to the baseline, averaged over all corruption types across five severity levels.
    • mRR (the higher the better): The average resilience rate (in percentage) of a candidate model relative to its "clean" performance, averaged over all corruption types across five severity levels.
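
To make the definitions above concrete, here is a minimal NumPy sketch of how these scores can be computed. The per-image functions follow the formulas above; `mce` and `mrr` follow the verbal definitions (per-corruption averages over the five severity levels, normalized by the baseline or by the clean score) and are illustrative assumptions, not the benchmark's reference implementation.

```python
import numpy as np

def abs_rel(pred, gt):
    # Absolute Relative Difference (lower is better); masking of invalid
    # (e.g. zero-depth) pixels is omitted for brevity.
    return np.mean(np.abs(gt - pred) / gt)

def delta1(pred, gt):
    # delta_1 accuracy as a fraction in [0, 1] (higher is better).
    ratio = np.maximum(gt / pred, pred / gt)
    return np.mean(ratio < 1.25)

def dee2(pred, gt):
    # Main indicator: DEE_2 = (Abs Rel - delta_1 + 1) / 2.
    return (abs_rel(pred, gt) - delta1(pred, gt) + 1.0) / 2.0

def mce(model_dee, baseline_dee):
    """model_dee / baseline_dee: dicts {corruption: [DEE_2 at severities 1..5]}."""
    ces = [np.sum(model_dee[c]) / np.sum(baseline_dee[c]) for c in model_dee]
    return float(np.mean(ces)) * 100.0

def mrr(model_dee, clean_dee):
    """clean_dee: DEE_2 of the same model on clean (uncorrupted) data."""
    rrs = [np.sum(1.0 - np.asarray(model_dee[c]))
           / (len(model_dee[c]) * (1.0 - clean_dee))
           for c in model_dee]
    return float(np.mean(rrs)) * 100.0
```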

:gear: Notation: Symbol :star: denotes the baseline model adopted in mCE calculation.

KITTI-C

| Model | Modality | mCE (%) | mRR (%) | Clean | Bright | Dark | Fog | Frost | Snow | Contrast | Defocus | Glass | Motion | Zoom | Elastic | Quant | Gaussian | Impulse | Shot | ISO | Pixelate | JPEG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MonoDepth2 (R18) :star: | Mono | 100.00 | 84.46 | 0.119 | 0.130 | 0.280 | 0.155 | 0.277 | 0.511 | 0.187 | 0.244 | 0.242 | 0.216 | 0.201 | 0.129 | 0.193 | 0.384 | 0.389 | 0.340 | 0.388 | 0.145 | 0.196 |
| MonoDepth2 (R18+nopt) | Mono | 119.75 | 82.50 | 0.144 | 0.183 | 0.343 | 0.311 | 0.312 | 0.399 | 0.416 | 0.254 | 0.232 | 0.199 | 0.207 | 0.148 | 0.212 | 0.441 | 0.452 | 0.402 | 0.453 | 0.153 | 0.171 |
| MonoDepth2 (R18+HR) | Mono | 106.06 | 82.44 | 0.114 | 0.129 | 0.376 | 0.155 | 0.271 | 0.582 | 0.214 | 0.393 | 0.257 | 0.230 | 0.232 | 0.123 | 0.215 | 0.326 | 0.352 | 0.317 | 0.344 | 0.138 | 0.198 |
| MonoDepth2 (R50) | Mono | 113.43 | 80.59 | 0.117 | 0.127 | 0.294 | 0.155 | 0.287 | 0.492 | 0.233 | 0.427 | 0.392 | 0.277 | 0.208 | 0.130 | 0.198 | 0.409 | 0.403 | 0.368 | 0.425 | 0.155 | 0.211 |
| MaskOcc | Mono | 104.05 | 82.97 | 0.117 | 0.130 | 0.285 | 0.154 | 0.283 | 0.492 | 0.200 | 0.318 | 0.295 | 0.228 | 0.201 | 0.129 | 0.184 | 0.403 | 0.410 | 0.364 | 0.417 | 0.143 | 0.177 |
| DNet (R18) | Mono | 104.71 | 83.34 | 0.118 | 0.128 | 0.264 | 0.156 | 0.317 | 0.504 | 0.209 | 0.348 | 0.320 | 0.242 | 0.215 | 0.131 | 0.189 | 0.362 | 0.366 | 0.326 | 0.357 | 0.145 | 0.190 |
| CADepth | Mono | 110.11 | 80.07 | 0.108 | 0.121 | 0.300 | 0.142 | 0.324 | 0.529 | 0.193 | 0.356 | 0.347 | 0.285 | 0.208 | 0.121 | 0.192 | 0.423 | 0.433 | 0.383 | 0.448 | 0.144 | 0.195 |
| HR-Depth | Mono | 103.73 | 82.93 | 0.112 | 0.121 | 0.289 | 0.151 | 0.279 | 0.481 | 0.213 | 0.356 | 0.300 | 0.263 | 0.224 | 0.124 | 0.187 | 0.363 | 0.373 | 0.336 | 0.374 | 0.135 | 0.176 |
| DIFFNet (HRNet) | Mono | 94.96 | 85.41 | 0.102 | 0.111 | 0.222 | 0.131 | 0.199 | 0.352 | 0.161 | 0.513 | 0.330 | 0.280 | 0.197 | 0.114 | 0.165 | 0.292 | 0.266 | 0.255 | 0.270 | 0.135 | 0.202 |
| ManyDepth (single) | Mono | 105.41 | 83.11 | 0.123 | 0.135 | 0.274 | 0.169 | 0.288 | 0.479 | 0.227 | 0.254 | 0.279 | 0.211 | 0.194 | 0.134 | 0.189 | 0.430 | 0.450 | 0.387 | 0.452 | 0.147 | 0.182 |
| FSRE-Depth | Mono | 99.05 | 83.86 | 0.109 | 0.128 | 0.261 | 0.139 | 0.237 | 0.393 | 0.170 | 0.291 | 0.273 | 0.214 | 0.185 | 0.119 | 0.179 | 0.400 | 0.414 | 0.370 | 0.407 | 0.147 | 0.224 |
| MonoViT (MPViT) | Mono | 79.33 | 89.15 | 0.099 | 0.106 | 0.243 | 0.116 | 0.213 | 0.275 | 0.119 | 0.180 | 0.204 | 0.163 | 0.179 | 0.118 | 0.146 | 0.310 | 0.293 | 0.271 | 0.290 | 0.162 | 0.154 |
| MonoViT (MPViT+HR) | Mono | 74.95 | 89.72 | 0.094 | 0.102 | 0.238 | 0.114 | 0.225 | 0.269 | 0.117 | 0.145 | 0.171 | 0.145 | 0.184 | 0.108 | 0.145 | 0.302 | 0.277 | 0.259 | 0.285 | 0.135 | 0.148 |
| DynaDepth (R18) | Mono | 110.38 | 81.50 | 0.117 | 0.128 | 0.289 | 0.156 | 0.289 | 0.509 | 0.208 | 0.501 | 0.347 | 0.305 | 0.207 | 0.127 | 0.186 | 0.379 | 0.379 | 0.336 | 0.379 | 0.141 | 0.180 |
| DynaDepth (R50) | Mono | 119.99 | 77.98 | 0.113 | 0.128 | 0.298 | 0.152 | 0.324 | 0.549 | 0.201 | 0.532 | 0.454 | 0.318 | 0.218 | 0.125 | 0.197 | 0.418 | 0.437 | 0.382 | 0.448 | 0.153 | 0.216 |
| RA-Depth (HRNet) | Mono | 112.73 | 78.79 | 0.096 | 0.113 | 0.314 | 0.127 | 0.239 | 0.413 | 0.165 | 0.499 | 0.368 | 0.378 | 0.214 | 0.122 | 0.178 | 0.423 | 0.403 | 0.402 | 0.455 | 0.175 | 0.192 |
| TriDepth (single) | Mono | 109.26 | 81.56 | 0.117 | 0.131 | 0.300 | 0.188 | 0.338 | 0.498 | 0.265 | 0.268 | 0.301 | 0.212 | 0.190 | 0.126 | 0.199 | 0.418 | 0.438 | 0.380 | 0.438 | 0.142 | 0.205 |
| Lite-Mono (Tiny) | Mono | 92.92 | 86.69 | 0.115 | 0.127 | 0.257 | 0.157 | 0.225 | 0.354 | 0.191 | 0.257 | 0.248 | 0.198 | 0.186 | 0.127 | 0.159 | 0.358 | 0.342 | 0.336 | 0.360 | 0.147 | 0.161 |
| Lite-Mono (Tiny+HR) | Mono | 86.71 | 87.63 | 0.106 | 0.119 | 0.227 | 0.139 | 0.282 | 0.370 | 0.166 | 0.216 | 0.201 | 0.190 | 0.202 | 0.116 | 0.146 | 0.320 | 0.291 | 0.286 | 0.312 | 0.148 | 0.167 |
| Lite-Mono (Small) | Mono | 100.34 | 84.67 | 0.115 | 0.127 | 0.251 | 0.162 | 0.251 | 0.430 | 0.238 | 0.353 | 0.282 | 0.246 | 0.204 | 0.128 | 0.161 | 0.350 | 0.336 | 0.319 | 0.356 | 0.154 | 0.164 |
| Lite-Mono (Small+HR) | Mono | 89.90 | 86.05 | 0.105 | 0.119 | 0.263 | 0.139 | 0.263 | 0.436 | 0.167 | 0.188 | 0.181 | 0.193 | 0.214 | 0.117 | 0.147 | 0.366 | 0.354 | 0.327 | 0.355 | 0.152 | 0.157 |
| Lite-Mono (Base) | Mono | 93.16 | 85.99 | 0.110 | 0.119 | 0.259 | 0.144 | 0.245 | 0.384 | 0.177 | 0.224 | 0.237 | 0.221 | 0.196 | 0.129 | 0.175 | 0.361 | 0.340 | 0.334 | 0.363 | 0.151 | 0.165 |
| Lite-Mono (Base+HR) | Mono | 89.85 | 85.80 | 0.103 | 0.115 | 0.256 | 0.135 | 0.258 | 0.486 | 0.164 | 0.220 | 0.194 | 0.213 | 0.205 | 0.114 | 0.154 | 0.340 | 0.327 | 0.321 | 0.344 | 0.145 | 0.156 |
| Lite-Mono (Large) | Mono | 90.75 | 85.54 | 0.102 | 0.110 | 0.227 | 0.126 | 0.255 | 0.433 | 0.149 | 0.222 | 0.225 | 0.220 | 0.192 | 0.121 | 0.148 | 0.363 | 0.348 | 0.329 | 0.362 | 0.160 | 0.184 |
| Lite-Mono (Large+HR) | Mono | 92.01 | 83.90 | 0.096 | 0.112 | 0.241 | 0.122 | 0.280 | 0.482 | 0.141 | 0.193 | 0.194 | 0.213 | 0.222 | 0.108 | 0.140 | 0.403 | 0.404 | 0.365 | 0.407 | 0.139 | 0.182 |
| MonoDepth2 (R18) | Stereo | 117.69 | 79.05 | 0.123 | 0.133 | 0.348 | 0.161 | 0.305 | 0.515 | 0.234 | 0.390 | 0.332 | 0.264 | 0.209 | 0.135 | 0.200 | 0.492 | 0.509 | 0.463 | 0.493 | 0.144 | 0.194 |
| MonoDepth2 (R18+nopt) | Stereo | 128.98 | 79.20 | 0.150 | 0.181 | 0.422 | 0.292 | 0.352 | 0.435 | 0.342 | 0.266 | 0.232 | 0.217 | 0.229 | 0.156 | 0.236 | 0.539 | 0.564 | 0.521 | 0.556 | 0.164 | 0.178 |
| MonoDepth2 (R18+HR) | Stereo | 111.46 | 81.65 | 0.117 | 0.132 | 0.285 | 0.167 | 0.356 | 0.529 | 0.238 | 0.432 | 0.312 | 0.279 | 0.246 | 0.130 | 0.206 | 0.343 | 0.343 | 0.322 | 0.344 | 0.150 | 0.209 |
| DepthHints | Stereo | 111.41 | 80.08 | 0.113 | 0.124 | 0.310 | 0.137 | 0.321 | 0.515 | 0.164 | 0.350 | 0.410 | 0.263 | 0.196 | 0.130 | 0.192 | 0.440 | 0.447 | 0.412 | 0.455 | 0.157 | 0.192 |
| DepthHints (HR) | Stereo | 112.02 | 79.53 | 0.104 | 0.122 | 0.282 | 0.141 | 0.317 | 0.480 | 0.180 | 0.459 | 0.363 | 0.320 | 0.262 | 0.118 | 0.183 | 0.397 | 0.421 | 0.380 | 0.424 | 0.141 | 0.183 |
| DepthHints (HR+nopt) | Stereo | 141.61 | 73.18 | 0.134 | 0.173 | 0.476 | 0.301 | 0.374 | 0.463 | 0.393 | 0.357 | 0.289 | 0.241 | 0.231 | 0.142 | 0.247 | 0.613 | 0.658 | 0.599 | 0.692 | 0.152 | 0.191 |
| MonoDepth2 (R18) | M+S | 124.31 | 75.36 | 0.116 | 0.127 | 0.404 | 0.150 | 0.295 | 0.536 | 0.199 | 0.447 | 0.346 | 0.283 | 0.204 | 0.128 | 0.203 | 0.577 | 0.605 | 0.561 | 0.629 | 0.136 | 0.179 |
| MonoDepth2 (R18+nopt) | M+S | 136.25 | 76.72 | 0.146 | 0.193 | 0.460 | 0.328 | 0.421 | 0.428 | 0.440 | 0.228 | 0.221 | 0.216 | 0.230 | 0.153 | 0.229 | 0.570 | 0.596 | 0.549 | 0.606 | 0.161 | 0.177 |
| MonoDepth2 (R18+HR) | M+S | 106.06 | 82.44 | 0.114 | 0.129 | 0.376 | 0.155 | 0.271 | 0.582 | 0.214 | 0.393 | 0.257 | 0.230 | 0.232 | 0.123 | 0.215 | 0.326 | 0.352 | 0.317 | 0.344 | 0.138 | 0.198 |
| CADepth | M+S | 118.29 | 76.68 | 0.110 | 0.123 | 0.357 | 0.137 | 0.311 | 0.556 | 0.169 | 0.338 | 0.412 | 0.260 | 0.193 | 0.126 | 0.186 | 0.546 | 0.559 | 0.524 | 0.582 | 0.145 | 0.192 |
| MonoViT (MPViT) | M+S | 75.39 | 90.39 | 0.098 | 0.104 | 0.245 | 0.122 | 0.213 | 0.215 | 0.131 | 0.179 | 0.184 | 0.161 | 0.168 | 0.112 | 0.147 | 0.277 | 0.257 | 0.242 | 0.260 | 0.147 | 0.144 |
| MonoViT (MPViT+HR) | M+S | 70.79 | 90.67 | 0.090 | 0.097 | 0.221 | 0.113 | 0.217 | 0.253 | 0.113 | 0.146 | 0.159 | 0.144 | 0.175 | 0.098 | 0.138 | 0.267 | 0.246 | 0.236 | 0.246 | 0.135 | 0.145 |

NYUDepth2-C

| Model | mCE (%) | mRR (%) | Clean | Bright | Dark | Contrast | Defocus | Glass | Motion | Zoom | Elastic | Quant | Gaussian | Impulse | Shot | ISO | Pixelate | JPEG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BTS (R50) | 122.78 | 80.63 | 0.122 | 0.149 | 0.269 | 0.265 | 0.337 | 0.262 | 0.231 | 0.372 | 0.182 | 0.180 | 0.442 | 0.512 | 0.392 | 0.474 | 0.139 | 0.175 |
| AdaBins (R50) | 134.69 | 81.62 | 0.158 | 0.179 | 0.293 | 0.289 | 0.339 | 0.280 | 0.245 | 0.390 | 0.204 | 0.216 | 0.458 | 0.519 | 0.401 | 0.481 | 0.186 | 0.211 |
| AdaBins (EfficientB5) :star: | 100.00 | 85.83 | 0.112 | 0.132 | 0.194 | 0.212 | 0.235 | 0.206 | 0.184 | 0.384 | 0.153 | 0.151 | 0.390 | 0.374 | 0.294 | 0.380 | 0.124 | 0.154 |
| DPT (ViT-B) | 83.22 | 95.25 | 0.136 | 0.135 | 0.182 | 0.180 | 0.154 | 0.166 | 0.155 | 0.232 | 0.139 | 0.165 | 0.200 | 0.213 | 0.191 | 0.199 | 0.171 | 0.174 |
| SimIPU (R50+no_pt) | 200.17 | 92.52 | 0.372 | 0.388 | 0.427 | 0.448 | 0.416 | 0.401 | 0.400 | 0.433 | 0.381 | 0.391 | 0.465 | 0.471 | 0.450 | 0.461 | 0.375 | 0.378 |
| SimIPU (R50+imagenet) | 163.06 | 85.01 | 0.244 | 0.269 | 0.370 | 0.376 | 0.377 | 0.337 | 0.324 | 0.422 | 0.306 | 0.289 | 0.445 | 0.463 | 0.414 | 0.449 | 0.247 | 0.272 |
| SimIPU (R50+kitti) | 173.78 | 91.64 | 0.312 | 0.326 | 0.373 | 0.406 | 0.360 | 0.333 | 0.335 | 0.386 | 0.316 | 0.333 | 0.432 | 0.442 | 0.422 | 0.443 | 0.314 | 0.322 |
| SimIPU (R50+waymo) | 159.46 | 85.73 | 0.243 | 0.269 | 0.348 | 0.398 | 0.380 | 0.327 | 0.313 | 0.405 | 0.256 | 0.287 | 0.439 | 0.461 | 0.416 | 0.455 | 0.246 | 0.265 |
| DepthFormer (SwinT_w7_1k) | 106.34 | 87.25 | 0.125 | 0.147 | 0.279 | 0.235 | 0.220 | 0.260 | 0.191 | 0.300 | 0.175 | 0.192 | 0.294 | 0.321 | 0.289 | 0.305 | 0.161 | 0.179 |
| DepthFormer (SwinT_w7_22k) | 63.47 | 94.19 | 0.086 | 0.099 | 0.150 | 0.123 | 0.127 | 0.172 | 0.119 | 0.237 | 0.112 | 0.119 | 0.159 | 0.156 | 0.148 | 0.157 | 0.101 | 0.108 |

Idiosyncrasy Analysis

For more detailed benchmarking results and to access the pretrained weights used in robustness evaluation, kindly refer to RESULT.md.

Create Corruption Sets

You can create your own "RoboDepth" corruption sets! Follow the instructions in CREATE.md.
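
As a rough illustration of what generating such a corruption set involves (the authoritative recipe lives in CREATE.md), the sketch below perturbs one clean image with the third-party `imagecorruptions` package. The package choice, file paths, and the particular corruption/severity loop are assumptions for illustration, not RoboDepth's official pipeline.

```python
# A hedged sketch: generate corrupted copies of one image with the
# third-party `imagecorruptions` package (pip install imagecorruptions).
# Paths and the corruption list are illustrative only -- see CREATE.md
# for the actual RoboDepth scripts.
import os
import numpy as np
from PIL import Image
from imagecorruptions import corrupt

image = np.asarray(Image.open("example_kitti_frame.png").convert("RGB"))

for corruption in ["gaussian_noise", "fog", "motion_blur", "jpeg_compression"]:
    for severity in range(1, 6):  # five severity levels, as in the benchmark
        corrupted = corrupt(image, corruption_name=corruption, severity=severity)
        out_dir = os.path.join("kitti_c", corruption, str(severity))
        os.makedirs(out_dir, exist_ok=True)
        Image.fromarray(np.uint8(corrupted)).save(
            os.path.join(out_dir, "example_kitti_frame.png"))
```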

TODO List

  • Initial release. 🚀
  • Add scripts for creating common corruptions.
  • Add download link of KITTI-C and NYUDepth2-C.
  • Add competition data.
  • Add benchmarking results.
  • Add evaluation scripts on corruption sets.

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

We thank Baidu Research for its support of the RoboDepth Challenge.


Acknowledgements

This project is supported by DesCartes, a CNRS@CREATE program on Intelligent Modeling for Decision-Making in Critical Urban Systems.