Self-Supervised Learning for Semantic Segmentation of PolSAR Images via SAR-to-Optical Transcoding Using PyTorch
March 16, 2026 · View on GitHub
- This framework was developed during my Master's thesis for the degree in Information and Communication Engineering at @UniTN - Course Description.
- For an in-depth look at the motivations, design choices, and experiments, you can check out my Final Dissertation.
- For a brief overview, please refer to my Final Presentation.
- This research was published in the proceedings of an international conference: Transcoding-based pre-training of semantic segmentation networks for PolSAR images (EUSAR 2022).
- Below, I explain how to reuse the code or replicate the experiments performed.
- If you find this work helpful or interesting, please leave a comment and let me know! If you have any questions or curiosities, do not hesitate to contact me.
Visual Results
Transcoding Results
Transcoding comparison across three randomly sampled areas for the three types of transcoders implemented
Classification Results (% denotes the amount of labelled data employed)
Classification results comparison using different pretrained models and different amounts of labelled data
How to use this code
The repo is structured as follows:
.
├── Data
│ ├── Datasets
│ ├── Test ⮕ Store the patches here, prepared according to Lib/Dataset/EUSAR/
│ ├── Train ⮕ For the train and test sets
├── Docker ⮕ The Dockerfile configuration
├── Docs
│ ├── arch ⮕ Architecture images
│ └── manuscript ⮕ The final manuscript
├── Lib ⮕ All the code resources
│ ├── Datasets
│ │ ├── EUSAR ⮕ PyTorch Dataset subclass
│ │ ├── processing ⮕ Dataset preprocessing
│ │ └── runner ⮕ Some runners to perform dataset processing
│ ├── Nets ⮕ Implementation of each network deployed in this framework
│ │ ├── BL ⮕ Fully supervised framework used as benchmark
│ │ ├── CAT ⮕ Supervised Conditional Adversarial Transcoder
│ │ ├── Cycle_AT ⮕ Unsupervised Cycle-Consistent Adversarial Transcoder
│ │ ├── RT ⮕ Supervised Regressive Transcoder
│ │ ├── SN ⮕ Segmentation Network performing semantic segmentation using the features learned during the transcoding phase
├── eval.py
├── mainBL.py ⮕ Main file to train the Baseline
├── mainCAT.py ⮕ Main file to train the Conditional Adversarial Transcoder
├── mainCycle_AT.py ⮕ Main file to train the Cycle-Consistent Adversarial Transcoder
├── mainRT.py ⮕ Main file to train the Regressive Transcoder
├── mainSN.py ⮕ Main file to train the Segmentation Network
├── readme.md
Getting started
- Create the dataset. EUSARDataset.py implements a PyTorch Dataset; to work with it, the data must be stored in the Data folders as specified in that file.
- The Docker folder stores the file used to build my Docker image, which is publicly available as myDocker. It extends the PyTorch Docker image with some additional libraries and settings.
- Docs stores the final report of this work; refer to it for any doubt, as it covers almost everything.
- Lib contains all the libraries used to perform the networks' operations.
- Once you have prepared the repo, the dataset, and the Docker image, you can run the main files.
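To give a feel for how such a dataset class is organized, here is a minimal sketch that pairs SAR and optical patches by filename. The `sar/` and `optical/` subfolder names are assumptions for illustration only; the authoritative layout is the one documented in EUSARDataset.py. The class mirrors the `__len__`/`__getitem__` interface that a PyTorch `torch.utils.data.Dataset` subclass must provide:

```python
from pathlib import Path

class PairedPatchIndex:
    """Minimal stand-in for a PyTorch Dataset: pairs SAR and optical
    patches by filename. Folder names are hypothetical, not the repo's."""

    def __init__(self, root):
        root = Path(root)
        sar = {p.name: p for p in sorted((root / "sar").glob("*.npy"))}
        opt = {p.name: p for p in sorted((root / "optical").glob("*.npy"))}
        # Keep only patches present in both modalities.
        self.pairs = [(sar[n], opt[n]) for n in sorted(sar) if n in opt]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        # The real class would load the patch contents and return tensors;
        # here we return the file-path pair for brevity.
        return self.pairs[idx]
```

In the actual EUSARDataset class, `__getitem__` would additionally load and transform the patch data rather than return paths.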
Prepare the Dataset
The data employed in my work consisted of coupled radar and optical images:
- Radar images were dual-polarized C-band Sentinel-1 products
- Optical images were RGB+NIR Sentinel-2 images
- The labelled set was composed of
- Forests.
- Streets.
- Fields.
- Urban.
- Water.
All the images employed have a 10×10 m resolution. You can follow the instructions in EUSARDataset.py and create a dataset compliant with my EUSARDataset class, or build your own: in the former case you have 100% compatibility, in the latter you could encounter some problems.
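For illustration, the five-class label set above could be encoded as a class-index map. The numeric ordering below is an assumption for the sketch, not necessarily the encoding used by the preprocessing scripts:

```python
# Hypothetical class-index mapping for the five-class label set;
# the actual encoding is defined by the dataset preparation code.
CLASS_TO_INDEX = {
    "Forests": 0,
    "Streets": 1,
    "Fields": 2,
    "Urban": 3,
    "Water": 4,
}

# Inverse lookup, useful when turning predicted indices back into names.
INDEX_TO_CLASS = {i: c for c, i in CLASS_TO_INDEX.items()}
```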
Prerequisites
All the networks implemented were trained on an Nvidia GeForce RTX 2070 SUPER with 8 GB of dedicated memory. The code requires at least 8 GB of free GPU memory and 8 GB of free RAM. Approximate running times can be found in the report.
Prepare the Machine
To prepare the docker you can run this command:
docker create --name [yourPreferredName] --gpus all --shm-size 8G -it -v [folder/Path/To/Project]:[Folder/In/The/Docker] cattale/pytorch:latest
The parameters between square brackets are the ones you can change:
- [yourPreferredName] choose a name for your container (here you should clone the project)
- [folder/Path/To/Project] the folder in which you store your project
- [Folder/In/The/Docker] folder in the docker container where you will run your code
Configure the Test
Now you need to configure the scripts to run the tests you want to perform. When a script is launched, the parameters are configured as follows:
- general_parser.py defines all the configurable parameters; refer to it for a detailed list.
- After the arguments passed to the script are parsed, they can be modified in a mask in the main files using specific_parser.py. This script overwrites only the arguments that are specified, which is useful when many parameters change between tests.
- Lastly, config_routine.py is run. This script configures the environment based on the parameters defined.
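The first two steps of this flow (general defaults, then per-experiment overrides) can be sketched with argparse. The parameter names and the override helper below are illustrative placeholders for what general_parser.py and specific_parser.py actually define:

```python
import argparse

def general_parser():
    # Defines the configurable parameters (illustrative subset only).
    p = argparse.ArgumentParser()
    p.add_argument("--epochs", type=int, default=100)
    p.add_argument("--batch_size", type=int, default=8)
    p.add_argument("--lr", type=float, default=2e-4)
    return p

def specific_parser(args, **overrides):
    # Overwrite only the arguments explicitly specified for this test,
    # mirroring the masking role of specific_parser.py.
    for name, value in overrides.items():
        setattr(args, name, value)
    return args

args = general_parser().parse_args([])   # parse with defaults
args = specific_parser(args, epochs=50)  # per-experiment override
```

Overriding only the arguments that change keeps each main file short when many experiments share the same defaults.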
Run the Code
To run the scripts, follow these instructions.
- Start your Docker container using the command
docker container start [container_name]
- Then attach to your container using the command
docker container attach [container_name]
- Now navigate in the container to the project folder and run one of the provided main files:
- mainBL.py
- mainRT.py
- mainCAT.py
- mainCycle_AT.py
- mainSN.py
- To run one of the scripts above, use the command
python main*.py
replacing * with the chosen variant (e.g. python mainSN.py)
Acknowledgments
Last but not least, this implementation is based on the work of Zhu et al. 2017a.