Learning-to-See-in-the-Dark
misalignment problem in the dataset
A question about test_Sony.py: the scipy.misc image-saving call is really slow, while cv2.imwrite
is really fast. When saving as 8-bit images, do these two methods affect the final PSNR & SSIM?
You can use any method you like. I evaluate the PSNR and SSIM using MATLAB.
I mean the image-saving code:
scipy.misc.toimage(temp * 255, high=255, low=0, cmin=0, cmax=255).save(
result_dir + '%04d/%05d_00_train_%d.jpg' % (epoch, train_id, ratio))
It's really slow, while cv2.imwrite is much faster.
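For reference, a minimal sketch of the faster save path (assuming the network output `temp` is a float RGB array in [0, 1]; note that cv2.imwrite expects BGR channel order, unlike scipy.misc.toimage):

```python
import numpy as np


def to_uint8_bgr(temp):
    """Clip a float RGB array in [0, 1] to 8 bits and flip channels to BGR for OpenCV."""
    img = np.clip(temp * 255.0, 0, 255).astype(np.uint8)
    return img[:, :, ::-1]  # RGB -> BGR


def save_fast(path, temp):
    """Save with cv2.imwrite, which is much faster than scipy.misc.toimage."""
    import cv2  # imported here; OpenCV is only needed for the actual write
    cv2.imwrite(path, to_uint8_bgr(temp))
```

Note that JPEG saving is lossy, so if PSNR/SSIM are evaluated on the saved files, writing PNG instead avoids an extra source of error.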
By the way, I found that some images in your test set are misaligned, which affects the PSNR & SSIM a lot. For gt_id 10034, for example, the misalignment is very obvious.
Yes, I found the problem on 10034. Maybe you can exclude this image when evaluating.
When evaluating, that one image can easily be removed. But the PSNR will increase, right? With your pretrained model, the results before and after removing the 10034 pairs are shown below:
| | PSNR | SSIM |
|---|---|---|
| before | 28.87 | 0.8879 |
| after | 29.03 | 0.8888 |
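The effect of dropping one bad pair on the average can be checked with a plain NumPy PSNR (a minimal sketch; SSIM is usually computed with MATLAB's ssim or skimage, which I omit here, and the per-image scores below are hypothetical):

```python
import numpy as np


def psnr(gt, pred, peak=255.0):
    """Peak signal-to-noise ratio between two same-shaped images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)


# A misaligned pair yields an unusually low PSNR and drags the mean down,
# so excluding it raises the reported average.
scores = [29.5, 30.1, 21.0]   # hypothetical per-image PSNRs; 21.0 = a misaligned pair
mean_all = np.mean(scores)
mean_clean = np.mean(scores[:-1])  # after excluding the bad pair; higher than mean_all
```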
Hmm, the SSIM seems strange: the same images were used to calculate both PSNR and SSIM, yet the SSIM barely changed.
Q1: Are there other misaligned pairs you know?
Q2: Finally, what was your view on this before publishing this impressive work? The misaligned image pairs hurt the reported results, and your pipeline does nothing to handle this latent misalignment.
I do not know how this misalignment happened. If I had known about this image earlier, it would have been removed already.
Oh, then I think we should fix the misalignment problems on some images in the dataset. @cchen156
@CQFIO @cchen156 Would you provide a corrected dataset if you fix this problem? By the way, what is your follow-up schedule? Is there any follow-up paper planned, considering the open problems in the conclusion section of the paper?
Hi, I checked each test image manually and found misalignment in 10034, 10045, and 10172. Please remove these images for quantitative evaluation; you can still use them for qualitative evaluation. I am updating the README.
Thanks for checking. Besides, did you check the training & validation sets?
We also found many misaligned pairs in the training set, such as 00183.
How do you find the misaligned data? I cannot see it by eye, e.g. in 00183.
@TerryYiDa you could simply use difference graph.
Hello, could you explain more on it? What is 'difference graph'? Thanks!
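In case it helps later readers: a "difference graph" here presumably means a per-pixel absolute-difference image between the ground truth and the model output (or between two exposures after brightness scaling). On an aligned pair it looks like near-uniform noise; on a misaligned pair, bright "ghost" contours appear along object edges. A minimal NumPy sketch (the shifted toy image just simulates misalignment):

```python
import numpy as np


def difference_image(gt, out):
    """Per-pixel absolute difference of two same-shaped uint8 images.

    Misalignment shows up as bright ghost contours along object edges.
    """
    diff = np.abs(gt.astype(np.int16) - out.astype(np.int16))
    return diff.astype(np.uint8)


# Shifting an image by a few pixels mimics misalignment: the difference
# image lights up along the edge instead of staying near zero.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200                   # a vertical edge
shifted = np.roll(img, 2, axis=1)  # simulate a 2-pixel misalignment
aligned_diff = difference_image(img, img)        # all zeros
misaligned_diff = difference_image(img, shifted) # bright bands near the edge
```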