Diffusion Probabilistic Models (DPMs) have recently shown remarkable performance in image generation tasks, capable of generating highly realistic images. When adopting DPMs for image restoration tasks, the crucial aspect lies in how to integrate the conditional information to guide the DPMs to generate accurate and natural output, which has been largely overlooked in existing works.
In this paper, we present a unified conditional framework based on diffusion models for image restoration. We leverage a lightweight UNet to predict initial guidance and the diffusion model to learn the residual of the guidance. By carefully designing the basic module and integration module for the diffusion model block, we integrate the guidance and other auxiliary conditional information into every block of the diffusion model to achieve spatially-adaptive generation conditioning. To handle high-resolution images, we propose a simple yet effective inter-step patch-splitting strategy to produce arbitrary-resolution images without grid artifacts.
We evaluate our conditional framework on three challenging tasks: extreme low-light denoising, deblurring, and JPEG restoration, demonstrating significant improvements in perceptual quality and strong generalization across restoration tasks.
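The spatially-adaptive conditioning described above can be sketched roughly as follows. This is an illustrative FiLM/SPADE-style modulation in NumPy, not the repository's actual module: `adaptive_modulation` and the toy weights are hypothetical, assuming only the core idea that each diffusion block predicts a per-pixel scale and shift from the conditional input and applies them to its feature map.

```python
import numpy as np

def adaptive_modulation(feat, cond, w_scale, w_shift):
    """Spatially-adaptive conditioning sketch: predict a per-pixel scale
    and shift from the conditional input and modulate the block's
    feature map with them (FiLM/SPADE-style, not the paper's exact module)."""
    # 1x1 "convolutions" over the channel axis, written as einsums
    scale = np.einsum('chw,kc->khw', cond, w_scale)  # per-pixel scale map
    shift = np.einsum('chw,kc->khw', cond, w_shift)  # per-pixel shift map
    return feat * (1.0 + scale) + shift              # modulate the features

# Toy shapes: 4 feature channels, 3 condition channels, an 8x8 spatial map
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
cond = rng.standard_normal((3, 8, 8))
w_scale = rng.standard_normal((4, 3)) * 0.1
w_shift = rng.standard_normal((4, 3)) * 0.1

out = adaptive_modulation(feat, cond, w_scale, w_shift)
print(out.shape)  # (4, 8, 8)
```

Because the scale and shift vary per pixel (not just per channel), the conditioning can steer generation differently in different image regions, which is the point of injecting guidance into every block.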
- ✨ Unified Conditional Framework: A novel approach to integrate conditional information into diffusion models for image restoration
- 🎨 Spatially-Adaptive Conditioning: Guidance and auxiliary conditions integrated into every diffusion block
- 📐 High-Resolution Support: Inter-step patch-splitting strategy for arbitrary-resolution images without grid artifacts
- 🚀 State-of-the-Art Performance: Superior results on extreme low-light denoising, deblurring, and JPEG restoration tasks
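The inter-step patch-splitting idea can be sketched as below. This is a minimal NumPy illustration under our own assumptions, not the repository's implementation: `denoise_patchwise`, the grid offsets, and the toy damping "denoiser" are hypothetical stand-ins. The key point is that the patch grid is shifted between sampling steps, so patch boundaries land in different places at each step and seams do not accumulate into grid artifacts.

```python
import numpy as np

def denoise_patchwise(img, patch, offset, denoise_fn):
    """Apply one sampling step patch-by-patch with a per-step grid offset.
    The grid is shifted via a circular roll, patches are processed
    independently, and the result is rolled back (illustrative sketch)."""
    h, w = img.shape
    # Shift the patch grid by `offset` pixels in both dimensions
    rolled = np.roll(img, shift=(-offset, -offset), axis=(0, 1))
    res = np.empty_like(rolled)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            res[i:i+patch, j:j+patch] = denoise_fn(rolled[i:i+patch, j:j+patch])
    # Undo the shift so the output aligns with the input
    return np.roll(res, shift=(offset, offset), axis=(0, 1))

# Toy run: a fake "denoiser" that damps the signal, new grid offset each step
img = np.random.default_rng(1).standard_normal((16, 16))
for offset in [0, 2, 4, 6]:  # different patch-grid offset per sampling step
    img = denoise_patchwise(img, patch=8, offset=offset,
                            denoise_fn=lambda x: 0.9 * x)
print(img.shape)  # (16, 16)
```

Since each step only ever sees fixed-size patches, memory stays bounded regardless of the full image resolution, which is what enables arbitrary-resolution outputs.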
```bash
# Clone the repository
git clone https://github.com/yourusername/UCDIR.git
cd UCDIR

# Install dependencies
pip install -r requirements.txt
pip install lpips clean-fid

# (Optional) Install BasicSR for additional features
# python setup.py develop --no_cuda_ext
```

Note: Training code and detailed instructions will be released soon. Stay tuned for updates!
- Model: Download the denoising model and place it in `./experiments/sid/checkpoint/`
- Dataset: Download the testing dataset and place it in `./dataset/`
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch \
    --nproc_per_node=8 --master_port=4321 \
    sr.py -p val -c config/sid.yaml \
    --checkpoint experiments/sid/checkpoint/I_Elatest
```

Alternatively, download the SID results from our paper and place them in `./experiments/val_sid-ema-s50/results/`
```bash
# Calculate PSNR, SSIM, LPIPS, FID, and KID metrics
python -u eval1.py -s experiments/val_sid-ema-s50/results
```

Our method achieves state-of-the-art performance on multiple image restoration tasks. For detailed quantitative and qualitative results, please refer to our paper.
¹The Chinese University of Hong Kong · ²Snap Research
If you find this work useful for your research, please cite:
```bibtex
@article{zhang2023UCDIR,
  title={A unified conditional framework for diffusion-based image restoration},
  author={Zhang, Yi and Shi, Xiaoyu and Li, Dasong and Wang, Xiaogang and Wang, Jian and Li, Hongsheng},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  pages={49703--49714},
  year={2023}
}
```

For questions and inquiries, please contact: [email protected]
This project is built upon the following excellent works:
- BasicSR - Open source image and video restoration toolbox
- SR3 - Image Super-Resolution via Iterative Refinement
⭐ If you find this project helpful, please consider giving it a star! ⭐

