DC-ShadowNet-Hard-and-Soft-Shadow-Removal
[ICCV2021]"DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network", https://arxiv.org/abs/2207.10434
DC-ShadowNet (ICCV'2021)
Introduction
DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network
International Conference on Computer Vision (ICCV'2021)
Yeying Jin, Aashish Sharma and Robby T. Tan
[Paper] | [Supplementary] | [Poster] | [Slides] | [Video] | [Zhihu]
Prerequisites
```
git clone https://github.com/jinyeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal.git
cd DC-ShadowNet-Hard-and-Soft-Shadow-Removal/
conda create -n shadow python=3.7
conda activate shadow
conda install pytorch=1.10.2 torchvision torchaudio cudatoolkit=11.3 -c pytorch
python3 -m pip install -r requirements.txt
```
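Optionally, you can sanity-check the environment before running anything; this small snippet is not part of the original instructions, it only prints the installed versions and GPU visibility.

```python
# Optional sanity check: confirm the installed versions and CUDA visibility.
import torch
import torchvision

print("torch:", torch.__version__)                 # expected: 1.10.2
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```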
Datasets
- SRD: Train | BaiduPan, Test, Shadow Masks
- AISTD | ISTD+ [link]
- ISTD [link]
- USR: Unpaired Shadow Removal Dataset [link]
- LRSS: Soft Shadow Dataset [link]
  The LRSS dataset contains 134 shadow images (62 pairs of shadow and shadow-free images).
  We use 34 pairs for testing and 100 shadow images for training. For the shadow-free training images, 28 come from LRSS and 72 are randomly selected from the USR dataset. Dropbox | BaiduPan code:t9c7

Shadow Removal Results: Dropbox | BaiduPan code:gr59
Pre-trained Models and Results
| Dataset | Model (Dropbox) | Model (BaiduPan) | Put Model in Path | Results (Dropbox) | Results (BaiduPan) |
|---|---|---|---|---|---|
| SRD | Dropbox | BaiduPan code:zhd2 | results/SRD/model/ | Dropbox | BaiduPan code:28bv |
| AISTD/ISTD+ | Dropbox | BaiduPan code:cfn9 | results/AISTD/model/ | Dropbox | BaiduPan code:3waf |
| ISTD | Dropbox | BaiduPan code:b8o0 | results/ISTD/model/ | Dropbox | BaiduPan code:hh4n |
| USR | Dropbox | BaiduPan code:e0a8 | results/USR/model/ | Dropbox | BaiduPan code:u7ec |
| LRSS | - | - | - | Dropbox | BaiduPan code:bbns |
Test
- [Update] main_test_single.py and DCShadowNet_test_single.py
  Put the test images in test_input/; results are saved in results/output/:
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- test_input   ## shadow input
|-- results
    |-- output   ## results
```
```
CUDA_VISIBLE_DEVICES='1' python main_test_single.py
```
- To test on the SRD dataset
  Put the test images in dataset/SRD/testA/; results are saved in results/SRD/500000 (iteration)/outputB/:
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- dataset
    |-- SRD
        |-- testA  ## shadow
    |-- AISTD
        |-- testA  ## shadow
    |-- USR
        |-- testA  ## shadow
|-- results
    |-- SRD
        |-- model            ## SRD_params_0500000.pt
        |-- 500000/outputB/  ## results
    |-- AISTD
        |-- model            ## AISTD_params_0500000.pt
        |-- 500000/outputB/  ## results
    |-- ISTD
        |-- model            ## ISTD_params_0600000.pt
        |-- 600000/outputB/  ## results
    |-- USR
        |-- model            ## USR_params_0600000.pt
        |-- 600000/outputB/  ## results
```
To save results under their original file names, pass --use_original_name True and set the suffix of the test images accordingly (--im_suf_A .jpg or .png):
```
CUDA_VISIBLE_DEVICES='1' python main_test.py --dataset SRD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/SRD --use_original_name True --im_suf_A .jpg
CUDA_VISIBLE_DEVICES='1' python main_test.py --dataset AISTD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/AISTD --use_original_name True --im_suf_A .png
CUDA_VISIBLE_DEVICES='1' python main_test.py --dataset ISTD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/ISTD --use_original_name True --im_suf_A .png
CUDA_VISIBLE_DEVICES='1' python main_test.py --dataset USR --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/USR --use_original_name True --im_suf_A .jpg
```
Evaluation
Note: the root mean squared error (RMSE) evaluation code used by all methods (including ours) actually computes the mean absolute error (MAE).
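For intuition only, here is a minimal Python sketch of that metric as commonly implemented (per-pixel MAE in the LAB color space). It assumes scikit-image, equally sized result and ground-truth images, and is not the official evaluation; the MATLAB scripts under evaluation/ remain the reference implementation.

```python
# Illustrative only: the "RMSE" reported in the shadow-removal literature is,
# in practice, the per-pixel mean absolute error in LAB space.
import numpy as np
from skimage import color, io

def lab_mae(result_path, gt_path, mask=None):
    """Mean absolute LAB error, optionally restricted to a boolean region mask."""
    result = color.rgb2lab(io.imread(result_path)[:, :, :3])
    gt = color.rgb2lab(io.imread(gt_path)[:, :, :3])   # assumed same size as result
    diff = np.abs(result - gt)          # H x W x 3 absolute differences
    if mask is not None:                # e.g. the shadow or non-shadow region
        diff = diff[mask]
    return float(diff.mean())
```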
1. SRD Dataset Evaluation
Set the paths of the shadow-removal results and the dataset in evaluation/demo_srd_release.m, then run:
```
demo_srd_release.m
```
This reproduces Table 1 of the main paper on the SRD dataset (size: 256x256):
| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 4.66 | 7.70 | 3.39 |
| Input Image | N/A | 13.77 | 37.40 | 3.96 |
For SRD (size: 640x840):
| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 6.57 | 9.84 | 5.52 |
2. AISTD Dataset Evaluation
Set the paths of the shadow-removal results and the dataset in evaluation/demo_aistd_release.m, then run:
```
demo_aistd_release.m
```
This reproduces Table 2 of the main paper on the AISTD dataset (size: 256x256):
| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 4.6 | 10.3 | 3.5 |
For AISTD (size: 480x640):
| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 6.33 | 11.37 | 5.38 |
3. LRSS Soft Shadow Dataset Evaluation
Set the paths of the shadow-removal results and the dataset in evaluation/demo_lrss_release.m, then run:
```
demo_lrss_release.m
```
This reproduces Table 3 of the main paper on the LRSS dataset (size: 256x256):
| Method | Training | All |
|---|---|---|
| DC-ShadowNet | Unpaired | 3.48 |
| Input Image | N/A | 12.26 |
Train
Shadow-Free Chromaticity
- Implementation of On the Removal of Shadows from Images (TPAMI '05) and Recovery of Chromaticity Image Free from Shadows via Illumination Invariance (ICCV '03); see the illustrative sketch at the end of this subsection.
[Update] We released our MATLAB and Python implementations on Sep 8, 2023. We recommend the MATLAB version.
1.1 MATLAB: inputs are in 0_Shadow-Free_Chromaticity_matlab/input/, outputs are in 0_Shadow-Free_Chromaticity_matlab/sfchroma/.
0_Shadow-Free_Chromaticity_matlab/physics_all.m
1.2 Python: inputs are in 0_Shadow-Free_Chromaticity_python/input/, outputs are in 0_Shadow-Free_Chromaticity_python/sfchroma/.
0_Shadow-Free_Chromaticity_python/physics_all.py
- Download the datasets and run 0_Shadow-Free_Chromaticity_matlab/physics_all.m to get the Shadow-Free Chromaticity Maps after Illumination Compensation, then put them in the trainC folder. You should see the following directory structure:
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- dataset
    |-- SRD
        |-- trainA  ## shadow
        |-- trainB  ## shadow-free
        |-- trainC  ## shadow-free chromaticity maps after illumination compensation
        |-- testA   ## shadow
        |-- testB   ## shadow-free
```
```
python main_train.py --dataset SRD --datasetpath [path_to_SRD dataset] --iteration [iteration]
```
[Update] We released DCShadowNet_train.py on Dec 7, 2022.
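As a rough illustration of the shadow-free chromaticity idea referenced above (not the physics_all implementation itself), the sketch below builds a 2D log-chromaticity image and projects it along a single direction. In the actual method the projection angle is recovered from the image (e.g. by entropy minimization); the fixed theta_deg here is a hypothetical value for demonstration.

```python
# Rough illustration only (NOT 0_Shadow-Free_Chromaticity_*/physics_all):
# compute a 2D log-chromaticity image and project it along one direction.
import numpy as np
from skimage import io

def log_chromaticity_projection(img_path, theta_deg=45.0):
    rgb = io.imread(img_path)[:, :, :3].astype(np.float64) + 1.0   # avoid log(0)
    geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
    # 2D log-chromaticity: log band ratios w.r.t. the geometric mean
    chi = np.stack([np.log(rgb[..., 0] / geo_mean),
                    np.log(rgb[..., 2] / geo_mean)], axis=-1)
    # Project onto a single direction; under a Planckian-illuminant assumption
    # the resulting 1D image is approximately invariant to the illumination.
    theta = np.deg2rad(theta_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])
    return chi @ direction    # H x W grayscale, shadow-attenuated image
```

In the repository, the maps produced by physics_all serve as the trainC inputs described in the directory structure above.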
Shadow-Robust Feature
To reproduce Figure 5 of the main paper, the VGG feature visualization code is in the feature_release folder:
```
python test_VGGfeatures.py
```
Results are saved in ./results_VGGfeatures/shadow_VGGfeatures/layernumber/imagenumber/visual_featurenumber_RMSE.jpg.
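For intuition, the hedged sketch below extracts an intermediate VGG-16 feature map for a single image with torchvision. The layer index and preprocessing are assumptions for illustration; this is not test_VGGfeatures.py itself.

```python
# Hedged sketch (not test_VGGfeatures.py): extract an intermediate VGG-16
# feature map for one image. Layer index and preprocessing are assumptions.
import torch
import torchvision.transforms as T
from torchvision.models import vgg16
from PIL import Image

def vgg_feature_map(image_path, layer_idx=15):     # layer_idx is a hypothetical choice
    model = vgg16(pretrained=True).features[:layer_idx].eval()
    preprocess = T.Compose([
        T.Resize((256, 256)),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = model(x)                 # 1 x C x H x W
    return feat.squeeze(0).mean(0)      # channel-wise mean, a rough 2D visualization
```

Comparing such maps for a shadow image and its shadow-free counterpart is the spirit of the Figure 5 visualization; judging by the output file names, the official script also records a per-feature RMSE.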
Acknowledgments
The code is implemented based on U-GAT-IT; we would like to thank the authors.
One trick used in networks.py is to change `out = self.UpBlock2(x)` to `out = (self.UpBlock2(x)+input).tanh()` so the generator learns a residual.
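A minimal, self-contained sketch of that residual trick (illustrative layer names, not the actual generator in networks.py):

```python
# Minimal sketch of the residual trick: add the input back to the decoder
# output and squash with tanh, so the network only has to learn a residual.
import torch
import torch.nn as nn

class TinyResidualGenerator(nn.Module):           # illustrative, not networks.py
    def __init__(self, ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1),
        )

    def forward(self, x):
        # original form: return self.body(x)
        return (self.body(x) + x).tanh()          # residual + tanh, as in the trick above

if __name__ == "__main__":
    out = TinyResidualGenerator()(torch.rand(1, 3, 256, 256) * 2 - 1)
    print(out.shape)   # torch.Size([1, 3, 256, 256])
```

Adding the input back keeps the generator close to an identity mapping early in training, which commonly stabilizes image-to-image translation.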
License
The code and models in this repository are licensed under the MIT License for academic and other non-commercial uses.
For commercial use of the code and models, separate commercial licensing is available. Please contact:
- Yeying Jin ([email protected])
- Robby T. Tan ([email protected])
- Jonathan Tan ([email protected])
Citation
If this work is useful for your research, please cite our paper.
```
@inproceedings{jin2021dc,
  title={DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network},
  author={Jin, Yeying and Sharma, Aashish and Tan, Robby T},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={5027--5036},
  year={2021}
}
```