BJDD_CVPR21
running_var in the BatchNorm layers of the pretrained weights contains NaN
Hi, I downloaded the pretrained QuadBayer parameters and found that most values of self.attentionNet.psUpsampling1.upSample[1].running_var are inf or NaN.
With the model in eval() mode, these NaN values propagate and make the final image entirely NaN. I'm not sure whether there is a problem here.
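A quick way to locate such corrupted entries is to scan every parameter and buffer in the checkpoint for non-finite values. The sketch below is a hypothetical helper (the name `find_nonfinite` is not from the repository); it operates on a mapping of names to flat lists of floats, which you would obtain in PyTorch via `tensor.flatten().tolist()` on each entry of `model.state_dict()`:

```python
import math

def find_nonfinite(state_dict):
    """Return the names of entries containing NaN or inf values.

    `state_dict` is assumed to map parameter/buffer names to flat
    lists of floats (e.g. tensor.flatten().tolist() in PyTorch).
    """
    bad = []
    for name, values in state_dict.items():
        # math.isfinite is False for both NaN and +/-inf
        if any(not math.isfinite(v) for v in values):
            bad.append(name)
    return bad

# Example with a fake BatchNorm buffer containing inf/NaN values:
weights = {
    "psUpsampling1.upSample.1.running_var": [1.0, float("inf"), float("nan")],
    "psUpsampling1.upSample.1.running_mean": [0.0, 0.1],
}
print(find_nonfinite(weights))  # ['psUpsampling1.upSample.1.running_var']
```

In a PyTorch session the same check can be done directly with `torch.isfinite(tensor).all()` per state-dict entry, which avoids the list conversion.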
Hi there! Thanks for raising such an important issue. Our previously shared weight files were corrupted; we have now updated them. Here are some sample images obtained from the newly uploaded files. If you need any further assistance, please do not hesitate to contact our research group. Thanks much.
Quad-Bayer reconstruction at sigma = 5

Quad-Bayer reconstruction at sigma = 10

Quad-Bayer reconstruction at sigma = 15

Thank you very much for your effort and help! I downloaded the new weight files provided at the same URLs:
- Quad Bayer: https://drive.google.com/drive/folders/1_ziIMjK9vGg-P_7Wxit96bnfHiO4_wQw?usp=sharing, SHA1: 28ee06277a74050912f8e64fae4336883c58bed4
- Bayer: https://drive.google.com/drive/folders/125hFTHR5qpJy4AKhtjxFhZJ5aPxQI4TE?usp=sharing, SHA1: 37345bf02dd9b05cf3808f6f2465080678773101
Unfortunately, the results shown in #5 did not improve. To rule out another corruption on my end, I have listed the SHA1 hashes of the files I used above.
It also looks like the URLs in the README still point to the old weight files, which were created two years ago (see the column "zuletzte geändert" = "last changed"):
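For anyone who wants to compare their downloads against the hashes above, here is a minimal sketch (the helper name `sha1_of_file` is my own) that computes a file's SHA1 digest with Python's standard library, reading in chunks so large checkpoint files don't need to fit in memory:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA1 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        # iter(...) yields chunks until f.read returns the empty sentinel b""
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `sha1_of_file("weight.pth")` against the published digest (or the output of `sha1sum` on Linux) tells you immediately whether the file was corrupted in transit.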

As you pointed out, we did not retrain our model after submitting the paper. We believe it would be unfair and unethical to upload a newly trained model, as it could mislead our readers. Also, we ran the test several times and were able to produce admissible results in every attempt. Could you please check your packages?
Yes, I agree, retraining should not be done. Could you provide the SHA1 hashes of the weight files, just to rule out that something is wrong with my downloads?