March 5, 2026
Probabilistic Uncertainty-Guided Salient Object Detection in Remote Sensing Images
This code has been completely released.

Read our article.
Introduction
Salient object detection holds significant application value in fields such as agricultural monitoring, disaster assessment, and urban planning, providing critical support for precise decision-making. Existing deep learning-based detection methods often rely on nonlinear mappings to perform binary classification of pixels. However, considerable uncertainty arises in regions where objects and backgrounds look alike because of lighting variations, shadow effects, and interference from similar objects. This uncertainty degrades detection performance, especially at pixels near the decision boundary. To address this issue, we propose a remote sensing salient object detection method based on probabilistic uncertainty assessment, the uncertainty-guided network (UGNet). First, a multi-scale encoder-decoder framework with deep supervision is designed for the uncertainty calculation of confusing features; it uses high-level semantic features as guidance to better distinguish confusing features. Then, an uncertainty estimation mapping module is constructed that uses a Gaussian distribution to weight uncertain pixels, improving semantic distinction in confusing regions. A multi-scale focus fusion module is then introduced to integrate global and local information, reducing the uncertainty of multi-scale confusing features. Finally, multi-scale deep supervision is used to enhance the accuracy of salient object detection. Experimental results on two public datasets, ORSSD (optical remote sensing saliency detection) and EORSSD (extended ORSSD), demonstrate that the proposed UGNet outperforms 18 mainstream methods.
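To give a rough feel for the Gaussian weighting idea described above, the sketch below weights each pixel by how close its predicted saliency probability is to the decision boundary. The functional form and the sigma value are illustrative assumptions, not the paper's exact module:

```python
import math

def uncertainty_weight(p: float, sigma: float = 0.15) -> float:
    """Gaussian weight that peaks where the prediction is most uncertain.

    p is a pixel's predicted saliency probability in [0, 1]; values near
    the decision boundary 0.5 get the largest weight. The exact form and
    sigma are illustrative assumptions, not the paper's formulation.
    """
    return math.exp(-((p - 0.5) ** 2) / (2.0 * sigma ** 2))

# Ambiguous pixels receive large weights, confident pixels small ones.
print(uncertainty_weight(0.5))   # boundary pixel: weight 1.0
print(uncertainty_weight(0.95))  # confident pixel: weight near 0
```

With such a map, a loss term over confusing regions can be emphasized without touching pixels the network already classifies confidently.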
Datasets
- ORSSD: download here
- EORSSD: download here
The structure of the dataset is as follows:
```
UGNet
├── EORSSD
│   ├── train
│   │   ├── images
│   │   │   ├── 0001.jpg
│   │   │   ├── 0002.jpg
│   │   │   └── .....
│   │   └── lables
│   │       ├── 0001.png
│   │       ├── 0002.png
│   │       └── .....
│   └── test
│       ├── images
│       │   ├── 0004.jpg
│       │   ├── 0005.jpg
│       │   └── .....
│       └── lables
│           ├── 0004.png
│           ├── 0005.png
│           └── .....
```
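When pointing a data loader at this layout, a quick sanity check is to pair each image with its mask by filename stem, since images are .jpg and masks are .png with matching names. A minimal sketch (the actual loader in train_MyNet.py may differ):

```python
import os

def pair_by_stem(image_names, label_names):
    """Match each image (e.g. 0001.jpg) to its mask (e.g. 0001.png)
    by shared filename stem; images without a mask are skipped."""
    label_stems = {os.path.splitext(n)[0] for n in label_names}
    pairs = []
    for name in sorted(image_names):
        stem = os.path.splitext(name)[0]
        if stem in label_stems:
            pairs.append((name, stem + ".png"))
    return pairs

print(pair_by_stem(["0002.jpg", "0001.jpg"], ["0001.png", "0002.png"]))
# → [('0001.jpg', '0001.png'), ('0002.jpg', '0002.png')]
```

Running such a check before training surfaces missing or misnamed masks early instead of mid-epoch.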
Train
- Download the dataset.
- Use data_aug.m to augment the training set.
- Download the backbone weights at pvt_v2_b2.pth and put them in './pretrain/'.
- Modify the dataset paths, then run train_MyNet.py.
Test
- Download the pre-trained models of our network at weight
- Modify paths of pre-trained models and datasets.
- Run test_MyNet.py.
Evaluation Tool
You can use the evaluation tool (MATLAB version) to evaluate the predicted saliency maps.
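For a quick check without MATLAB, mean absolute error, one of the standard saliency metrics such tools report, can be computed directly. A minimal sketch, assuming the map and mask are flattened to sequences of values in [0, 1]:

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both flattened to values in [0, 1]."""
    if len(pred) != len(gt):
        raise ValueError("prediction and ground truth differ in size")
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

print(mae([0.0, 1.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]))  # → 0.25
```

Lower is better; for the full metric suite (S-measure, F-measure, E-measure), use the MATLAB tool linked above.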
ORSI-SOD Summary
A reading list for salient object detection in optical remote sensing images (ORSI-SOD) is available here.
Acknowledgements
This code is built on PyTorch.
Contact
If you have any questions, please submit an issue on GitHub or contact me by email (cxh1638843923@gmail.com).