DnCNN
Lower depth yields same performance
Hi!
Thanks for publishing your training repository!
I trained the depth-17 model on the BSD training set using the pytorch_training scripts (after fixing some compatibility issues), and it yielded the same result as a depth-4 model (I didn't try going lower).
It feels odd to get the same mean training loss / PSNR with a much shallower model. I only tried grayscale images.
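For scale, here is a rough parameter count assuming the standard DnCNN configuration (64 feature maps, 3x3 kernels, grayscale input, conv bias dropped in layers followed by batch norm). These numbers are my reading of the usual architecture, not taken from this repo, but they suggest depth 4 has roughly 7x fewer parameters than depth 17, which is why matching PSNR surprised me:

```python
def dncnn_params(depth, channels=1, features=64, k=3):
    """Approximate parameter count of a DnCNN with the given depth.

    Assumes: first layer Conv+ReLU (with bias), middle layers
    Conv+BN+ReLU (conv bias folded into BN), last layer Conv (with bias).
    """
    first = channels * features * k * k + features       # Conv weights + bias
    middle = features * features * k * k + 2 * features  # Conv weights + BN scale/shift
    last = features * channels * k * k + channels        # Conv weights + bias
    return first + (depth - 2) * middle + last

for d in (4, 17):
    print(d, dncnn_params(d))
# → 4 75201
# → 17 556097
```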
Is it possible that something is wrong with the code, or is this result expected? Is there a thorough study of DnCNN performance as a function of depth?
Thank you in advance for your response.