[ICLR-2026] RestoreVAR
March 24, 2026 · View on GitHub
Sudarshan Rajagopalan | Kartik Narayan | Vishal M. Patel
Code for RestoreVAR: Visual Autoregressive Generation for All-in-One Image Restoration.
Getting Started
Environment
Create the environment as follows:
conda create -n var_test python=3.9 -y
conda activate var_test
pip install -r requirements.txt
Downloads
Download the checkpoints file and extract it to ckpts/.
Download the training and testing data and extract them to data/ and test_data/, respectively.
Download the meta-info JSON files for testing and extract them to test_jsons/.
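The expected directory layout can be created up front before extracting the downloads. A minimal sketch (directory names are taken from the steps above):

```python
from pathlib import Path

# Directories expected by the download instructions above; extract each
# downloaded archive into its matching folder.
for name in ["ckpts", "data", "test_data", "test_jsons"]:
    Path(name).mkdir(exist_ok=True)
    print(f"created {name}/")
```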
Testing
To reproduce the results in Table 1, run
bash test_regular.sh
To reproduce the results in Table 2, run
bash test_generalization.sh
bash metric_generalization.sh
The scripts can be modified as needed.
Training
To train the VAR transformer for restoration, run
bash train.sh
The script loads the pretrained ckpts/var_d16.pth model and introduces the proposed components to train the
model for restoration.
Once trained, the latent refiner transformer (LRT) can be trained using
bash train_refiner.sh
Prior to running this command, rename the existing local_output/ directory to something else. The refiner uses
the ckpts/vae_restorevar.ckpt file which contains weights for the VAE decoder fine-tuned to handle continuous
latents.
The above commands use trainset.json and valset.json, which contain information about file paths, datasets, etc. To include your own datasets, create .json files with entries of the following form:
{
  "image_path": <degraded_image_path>,
  "target_path": <target_image_path>,
  "degradation": <degradation>,
  "dataset": <dataset>
}
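For example, a custom meta-info file can be generated programmatically. A minimal sketch, assuming the file holds a list of such entries; the file name, paths, degradation type, and dataset name below are placeholders:

```python
import json

# Placeholder entries -- substitute your own image paths, degradation
# types, and dataset names. Keys follow the format shown above.
entries = [
    {
        "image_path": "data/my_set/degraded/0001.png",
        "target_path": "data/my_set/clean/0001.png",
        "degradation": "rain",
        "dataset": "MyDataset",
    },
]

with open("my_trainset.json", "w") as f:
    json.dump(entries, f, indent=2)

# Read the file back to confirm the structure round-trips.
with open("my_trainset.json") as f:
    loaded = json.load(f)
print(loaded[0]["degradation"])  # rain
```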
Citation
If you find our work useful, please cite:
@inproceedings{
rajagopalan2026restorevar,
title={Restore{VAR}: Visual Autoregressive Generation for All-in-One Image Restoration},
author={Sudarshan Rajagopalan and Kartik Narayan and Vishal M. Patel},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=yvXtCn2zfz}
}
Acknowledgments
Our code uses parts of VAR and VARSR. We thank the authors for sharing their code!