LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement (PR 2025)

The official code of the paper "LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement", Pattern Recognition, 2025.

Tao Wang, Kaihao Zhang, Yong Zhang, Wenhan Luo, Björn Stenger, Tong Lu, Tae-Kyun Kim, Wei Liu


News

  • [2025/7/14] 🔥 Release our real-world testing dataset for real-world LLIE!
  • [2025/7/14] 🔥 Release the pre-trained models!
  • [2025/7/14] 🔥 Release the code!
  • [2025/3/21] 🔥 Paper accepted by Pattern Recognition (PR) 2025!
  • [2023/7/27] 🔥 Paper released on arXiv!

Abstract: Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mappings learned from paired data. However, these methods often overlook degradation representations, which can lead to sub-optimal results. In this paper, we address this limitation by proposing a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process for improved image enhancement. Our degradation-aware learning scheme is based on the understanding that degradation representations play a crucial role in accurately modeling and capturing the specific degradation patterns present in low-light images. To this end, we first present a joint learning framework for image generation and image enhancement to learn degradation representations. Second, to leverage the learned degradation representations, we develop a Low-Light Diffusion model (LLDiffusion) with a well-designed dynamic diffusion module. This module takes into account both the color map and the latent degradation representations to guide the diffusion process. By incorporating these conditioning factors, LLDiffusion can effectively enhance low-light images, accounting for both the inherent degradation patterns and the desired color fidelity. Finally, we evaluate the proposed method on several well-known benchmarks, including synthetic and real-world unpaired datasets. Extensive experiments demonstrate that LLDiffusion outperforms state-of-the-art LLIE methods both quantitatively and qualitatively.


Pipeline
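For intuition, here is a minimal, hypothetical PyTorch sketch of the conditioning idea described in the abstract: the denoiser sees the noisy image together with the low-light input and its color map, while a latent degradation code modulates intermediate features (FiLM-style). All module names, shapes, and the FiLM choice are illustrative assumptions, not the repository's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (NOT the repo's code): a denoiser conditioned on a
# color map and a latent degradation representation, as in the abstract.
class DegradationAwareDenoiser(nn.Module):
    def __init__(self, channels=64, deg_dim=128):
        super().__init__()
        # 3 (noisy x_t) + 3 (low-light input) + 3 (color map) channels in
        self.head = nn.Conv2d(9, channels, 3, padding=1)
        self.film = nn.Linear(deg_dim, 2 * channels)  # per-channel scale/shift
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)  # predicts the noise

    def forward(self, x_t, low, cmap, deg_code):
        h = self.head(torch.cat([x_t, low, cmap], dim=1))
        scale, shift = self.film(deg_code).chunk(2, dim=1)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.tail(self.body(h))

def color_map(img, eps=1e-6):
    # One common choice of color map: chromaticity, i.e. the image
    # normalized by its per-pixel intensity (an assumption here).
    return img / (img.sum(dim=1, keepdim=True) + eps)
```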

:wrench: Dependencies and Installation

  1. The code requires python>=3.7 (the environment below uses 3.10.6), as well as pytorch==2.0.1 and torchvision==0.15.2. Please follow the instructions here to install both PyTorch and TorchVision; installing both with CUDA support is strongly recommended. Create and populate the environment, then run the sanity check shown below:
conda create -n LLDiffusion python=3.10.6
conda activate LLDiffusion
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
pip install -r requirements.txt
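
A quick, optional sanity check that the expected versions are installed and CUDA is visible (not part of the original instructions):

```python
import torch, torchvision
print(torch.__version__, torchvision.__version__)  # expect 2.0.1 and 0.15.2
print("CUDA available:", torch.cuda.is_available())
```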
  2. Download the LOL and LOL-v2 datasets.

  3. Extract the files into data/sets/lol, data/sets/lol-v2-syn, and data/sets/lol-v2-real (the expected layout is sketched after this list).

  4. Download the pre-trained checkpoints from our Link.

  5. Download our proposed real-world LLIE dataset from our Link.

  6. Clone the repo:

git clone https://github.com/TaoWangzj/LLDiffusion.git
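
The extracted datasets are assumed to sit under data/sets/ in a layout like the one below (inferred from the paths in step 3; the exact sub-folder contents may differ):

```
data/sets/
├── lol/           # LOL
├── lol-v2-real/   # LOL-v2 real subset
└── lol-v2-syn/    # LOL-v2 synthetic subset
```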

Train

  1. Train on the LOL dataset:
python train.py --config configs/deg-unet-lol.yml
  2. Train on the LOL-v2-real dataset:
python train.py --config configs/deg-unet-lol-v2-real.yml
  3. Train on the LOL-v2-synthetic dataset:
python train.py --config configs/deg-unet-lol-v2-sys.yml
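
For context, a --config entry point like the one above is typically wired as follows; this is a generic sketch, and the actual option names and config keys in this repository may differ:

```python
import argparse
import yaml

# Generic sketch of a YAML-config-driven training entry point (an
# assumption, not the repo's actual train.py).
parser = argparse.ArgumentParser()
parser.add_argument("--config", type=str, required=True)
args = parser.parse_args()

with open(args.config) as f:
    cfg = yaml.safe_load(f)  # e.g. dataset paths, diffusion steps, lr, epochs
print(cfg)
```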

Test

  1. Test on the LOL dataset:
python test.py --config configs/deg-unet.yml --resume checkpoints/LOL/best-355-23.32.pth
  2. Test on the LOL-v2-real dataset:
python test.py --config configs/deg-unet-lol-v2-real.yml --resume checkpoints/lol-v2-real/best-5499-24.10.pth
  3. Test on the LOL-v2-synthetic dataset:
python test.py --config configs/deg-unet-lol-v2-syn.yml --resume checkpoints/lol-v2-syn/best-3999-25.99.pth
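
The checkpoint names appear to encode the best epoch and its PSNR, e.g. best-355-23.32.pth for epoch 355 at 23.32 dB on LOL. To reproduce such numbers, a standard PSNR computation looks like this (a generic sketch, not the repository's evaluation code):

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()
```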

Results

Experiments are performed on several LLIE datasets, including LOL, LOL-v2, VE-LOL, and our real-world LLIE dataset.

  • LOL dataset (click to expand)
  • LOL-v2 dataset (click to expand)
  • VE-LOL dataset (click to expand)
  • Real-world LLIE dataset (click to expand)

Reference Repositories

This implementation is based on / inspired by:

  • Restormer: https://github.com/swz30/Restormer
  • WeatherDiffusion: https://github.com/IGITUGraz/WeatherDiffusion
  • LLFormer: https://github.com/TaoWangzj/LLFormer

Citations

If our work helps your research, please consider citing:

@article{wang2025lldiffusion,
  title={LLDiffusion: Learning degradation representations in diffusion models for low-light image enhancement},
  author={Wang, Tao and Zhang, Kaihao and Zhang, Yong and Luo, Wenhan and Stenger, Bj{\"o}rn and Lu, Tong and Kim, Tae-Kyun and Liu, Wei},
  journal={Pattern Recognition},
  volume={166},
  pages={111628},
  year={2025},
  publisher={Elsevier}
}