# IAN: Designing An Illumination-Aware Network for Deep Image Relighting (TIP 2022)
## Introduction
This repository is the official implementation of Designing An Illumination-Aware Network for Deep Image Relighting. [Paper] [Demos]
Designing An Illumination-Aware Network for Deep Image Relighting
Zuo-Liang Zhu, Zhen Li, Rui-Xun Zhang, Chun-Le Guo, Ming-Ming Cheng
IEEE Transactions on Image Processing, 2022

## Data preparation
### Datasets
- VIDIT dataset [Paper] [Download]
- Multi-Illumination dataset [Paper] [Download]
- DPR dataset [Paper] [Download]
### Normal generation on the VIDIT dataset
- Place the one2one training data into the folders `./data/one2one/train/depth`, `./data/one2one/train/input`, and `./data/one2one/train/target` (the expected layout is sketched below).
- Place the any2any training data into the folders `./data/any2any/train/depth` (all `.npy` files) and `./data/any2any/train/input` (all RGB images).
- Place the one2one validation data into the folders `./data/validation/train/depth`, `./data/validation/train/input`, and `./data/validation/train/target`.
- Run `gen_train_data.sh` to obtain the full training and validation data.
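A minimal sketch of the folder layout implied by the steps above; these `mkdir` calls are only for illustration and are not part of the provided scripts:

```bash
# Create the directories expected for the one2one, any2any, and validation data
mkdir -p ./data/one2one/train/{depth,input,target}
mkdir -p ./data/any2any/train/{depth,input}
mkdir -p ./data/validation/train/{depth,input,target}
```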
## Quick Demo
- Create the environment with `conda env create -f environment.yml`.
- Download the pretrained model trained on the DPR dataset from the link and place it into the folder `pretrained`.
- Run `python test.py -opt options/videodemo_opt.yml`.
- Image results will be saved in the folder `results`.
- You can further use `ffmpeg` to generate demo videos, e.g. `ffmpeg -f image2 -i [path_to_results] -vcodec libx264 -r 10 demo.mp4`. A consolidated command sequence is sketched below.
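Putting the steps together, a minimal end-to-end demo run might look like the following; the environment name `ian` and the frame pattern `%04d.png` are assumptions, so adjust them to the actual name in `environment.yml` and the actual file names written to `results`.

```bash
# Create and activate the conda environment ("ian" is an assumed name)
conda env create -f environment.yml
conda activate ian

# Run the video demo config (expects the DPR pretrained weights in ./pretrained)
python test.py -opt options/videodemo_opt.yml

# Stitch the per-frame results into a video; replace %04d.png with the actual image naming
ffmpeg -f image2 -i results/%04d.png -vcodec libx264 -r 10 demo.mp4
```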
## Train
`python train.py -opt [training config]`
| Dataset | Guidance | Config |
| --- | --- | --- |
| VIDIT | depth, normal, lpe* | `options/train_opt4b.yml` |
| Multi-Illumination | :x: | `options/train_adobe_opt.yml` |
| DPR | normal, lpe | `options/trainany_opt4b.yml` |
| DPR | :x: | `options/trainany_opt4b_woaux.yml` |

\* `lpe` denotes our proposed linear positional encoding.
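For example, training the guided model on VIDIT uses the first config from the table:

```bash
# Train on VIDIT with depth, normal, and lpe guidance
python train.py -opt options/train_opt4b.yml
```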
## Test
`python test.py -opt [testing config]`
| Dataset | Guidance | Config | Pretrained |
| --- | --- | --- | --- |
| VIDIT | depth, normal, lpe | `options/valid_opt.yml` | `pretrained/VIDITOne2One.pth` |
| Multi-Illumination | :x: | `options/valid_adobe_opt.yml` | `pretrained/MutliIllumination.pth` |
| DPR | normal, lpe | `options/vaild_any_opt.yml` | `pretrained/PortraitWithNormal.pth` |
| DPR | :x: | `options/vaild_any_opt.yml` | `pretrained/PortraitWithoutNormal.pth` |
You can download all pretrained models from this Google Drive or BaiduNetDisk (pwd: 5qtp).
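For instance, to evaluate on VIDIT once `pretrained/VIDITOne2One.pth` is in place (the config is expected to point to the pretrained weights listed in the table):

```bash
# Evaluate on the VIDIT validation set with the released weights
python test.py -opt options/valid_opt.yml
```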
## Citation
@article{zhu2022ian,
  author  = {Zuo-Liang Zhu and Zhen Li and Rui-Xun Zhang and Chun-Le Guo and Ming-Ming Cheng},
  title   = {Designing An Illumination-Aware Network for Deep Image Relighting},
  journal = {IEEE Transactions on Image Processing},
  year    = {2022},
  doi     = {10.1109/TIP.2022.3195366}
}
## Acknowledgement
- This repository is maintained by Zuo-Liang Zhu (nkuzhuzl [AT] gmail.com) and Zhen Li (zhenli1031 [AT] gmail.com).
- Our code is built on the well-known restoration toolbox BasicSR.
## LICENSE
The code is released under the Creative Commons Attribution-NonCommercial 4.0 International license for non-commercial use only. Any commercial use requires formal permission in advance.
## References
- AIM 2020: Scene Relighting and Illumination Estimation Challenge [Webpage] [Paper]
- NTIRE 2021 Depth Guided Image Relighting Challenge [Webpage] [Paper]
- Deep Single Portrait Image Relighting [Github] [Paper] [Supp]
- Multi-modal Bifurcated Network for Depth Guided Image Relighting [Github] [Paper]
- Physically Inspired Dense Fusion Networks for Relighting [Paper]
- LPIPS [Github] [Paper]
## More demos
https://user-images.githubusercontent.com/50139523/179356102-14bac41f-7caf-409c-b1c7-75eb315ef881.mp4
https://user-images.githubusercontent.com/50139523/179357378-ae446399-02c4-45a8-8223-66cc480d1fc9.mp4
https://user-images.githubusercontent.com/50139523/179356168-dab380d8-b844-45c1-a121-ef64233346d4.mp4
https://user-images.githubusercontent.com/50139523/179356170-69b23de2-911b-45f4-bd19-0e7fbf748feb.mp4