Learning-to-See-in-the-Dark
Why do the PSNR values differ so much when the Sony and Fuji RGB datasets are used as input?
It's hard to ignore that in Table 3 of your paper, the PSNR for the Sony RGB input is 17.4 while the other PSNR is 25.11. I can't understand why they differ so much when both are RGB images: if the inputs are RGB images, they must have gone through the same processing before being fed into the network. Would you also mind explaining how you convert your raw datasets to RGB images, and how to interpret this difference?
I used rawpy to process the raw images into sRGB images, and then used these images as input to train the network for the comparisons.
@Chen Chen, could you please provide the code showing how to correctly process the raw images to sRGB for the experiment?
Hi @KeqiWangSXuniversity,
This is from the author's code itself. Assuming `raw` is the RAW data and `rgb` is the processed RGB data you'd like to have, you can do:

```python
import rawpy

rawpath = ''  # Provide the path to the raw file here
raw = rawpy.imread(rawpath)
# Post-process using the camera white balance, at full resolution,
# without auto-brightening, outputting a 16-bit sRGB image.
rgb = raw.postprocess(use_camera_wb=True, half_size=False, no_auto_bright=True, output_bps=16)
```
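Note that with `output_bps=16` the result is a `uint16` array. If you then feed these images to a network, you'd typically normalize them to `[0, 1]` floats first. A minimal sketch of that step (this is my own addition, not from the author's code; the small array here is just a stand-in for the `raw.postprocess(...)` output):

```python
import numpy as np

# Stand-in for the uint16 sRGB array returned by raw.postprocess(...)
rgb16 = np.array([[[0, 32768, 65535]]], dtype=np.uint16)

# Normalize 16-bit values to [0, 1] float32 before feeding the network
rgb_float = rgb16.astype(np.float32) / 65535.0
```

After this, `rgb_float` has the same shape as the original image with values in `[0, 1]`.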