Implementation confusion
The RealDAE dataset provides three degradation subsets, namely color, shadow, and bleed. From the dataset layout and the paper it looks like an independent model is trained for each task, yet only one model weight is provided here. So are you training one model or three? The split of the dataset versus the description in the paper is confusing.
Thank you for your question! Only one model is trained. The division of the dataset into different degradation types (color, shadow, and bleed) is simply to help analyze the model's performance per degradation and provide more insight. During training, all of these types are mixed together and not trained separately.
Thanks for your reply! However, when testing with the checkpoint you provided, I cannot reproduce the metrics reported in your paper. I computed them with basicsr; results below. Could you share your evaluation code?
PSNR/SSIM per task:
shadow: 24.64 / 0.9471
color: 20.96 / 0.8852
bleed: 27.86 / 0.9770
Based on your results, the PSNR matches what is reported in the paper; the discrepancy is only in SSIM. That suggests the forward pass is the same and the issue lies in the SSIM calculation. As far as I know, different libraries compute SSIM slightly differently. We use the metrics.structural_similarity function from the skimage library; you could try that function and check whether it gives consistent results.
Note that we set multichannel=True in our calculation, so the SSIM should be computed as follows:
ssim_value = metrics.structural_similarity(gt_im, predict_im, multichannel=True)
Very good job, thanks for your reply.