Learning-to-See-in-the-Dark
The comparison with HDRnet
Hi, thanks for sharing this work; the result is awesome! @cchen156, I found the comparison with HDRnet on the project page. Did you use the HDRnet pretrained model from @mgharbi's repo, or did you train the comparison model yourself?
I want to reproduce the comparison, but there seems to be no night-related pretrained model available for HDRnet. Would you be willing to share the pretrained comparison model? Thanks.
I trained the model on our data: I saved the raw data into 16-bit PNG files and trained the model with their code.
@cchen156 What exactly is the format of the 16-bit PNG files you saved? Is it RGB or YUV? I do the following:
1) Start from the DNG raw format and subtract the black level.
2) Apply the white-balance channel gains.
3) Demosaic to RGB.
4) Apply lens shading correction.
Then I run the pretrained HDRnet model on the resulting output, i.e. a 16-bit TIFF input. However, it produces strange colors. Would you please explain your pipeline in detail? Your kind help is very much appreciated.
What I did is:
- Subtract the black level.
- Pack the Bayer raw data into RGB channels; the green value is the average of the two green pixels in each 2×2 block.
- The input data have half the resolution due to the packing, so the ground truth is generated with half_size=True in rawpy post-processing.
- The data is saved as 16-bit PNG files. I did not include demosaicking for HDRnet, but the result is still not good enough. This is because HDRnet uses a guide image to upsample the coefficients, and that guide is very noisy in our case.
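The steps above can be sketched in numpy as follows. This is my own reconstruction, not the released code: the function name, and the default black/white levels, are placeholders you should replace with your camera's actual values; an RGGB Bayer pattern is assumed.

```python
import numpy as np

def pack_bayer_to_rgb(bayer, black_level=512, white_level=16383):
    """Pack an RGGB Bayer mosaic into a half-resolution RGB image:
    subtract the black level, normalize, then collapse each 2x2 block
    into one pixel, averaging the two green samples."""
    x = np.maximum(bayer.astype(np.float32) - black_level, 0)
    x = x / float(white_level - black_level)
    r  = x[0::2, 0::2]   # top-left of each 2x2 block
    g1 = x[0::2, 1::2]   # top-right green
    g2 = x[1::2, 0::2]   # bottom-left green
    b  = x[1::2, 1::2]   # bottom-right
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# To store the result as a 16-bit PNG, scale to uint16 first, e.g.:
#   png16 = (np.clip(rgb, 0, 1) * 65535).astype(np.uint16)
# The half-size ground truth can then come from rawpy, e.g.:
#   raw.postprocess(half_size=True, use_camera_wb=True, output_bps=16)
```

For a single 2×2 block [[R, G], [G, B]] this yields one RGB pixel whose G is the mean of the two green samples, which matches the packing described above.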
Hi @cchen156, thanks for the reply!
From your answer ("The data is saved in 16-bit PNG files"), the input images are RGB but custom-preprocessed, correct?
I retrained your model and the result is excellent; what I'd prefer is to train and test an HDRnet model on the same dataset for a comparison.
I tried with 16-bit YUV but failed. Would you mind sharing your preprocessing script?
Hi @cchen156, I have trained an HDRnet model (PSNR only 21 dB; it behaves well most of the time but sometimes lacks contrast), and I want to use the same set of collected DNG test images to compare both HDRnet (my self-trained model) and See-in-the-Dark (the pretrained model). However, I'm not familiar with the raw preprocessing for your model's input, and the Sony and Fuji pipelines are quite different, which puzzles me. Would you give a hand with preprocessing the DNGs to fit the See-in-the-Dark inference input, mainly the raw packing part? Is it any different from rawpy post-processing? Thanks~
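For what it's worth, here is a sketch of what I believe the Sony-side packing looks like, based on my reading of the released SID code; treat the details as assumptions. The black level 512 and white level 16383 correspond to the Sony a7S II, and the packed tensor is multiplied by the chosen exposure amplification ratio before being fed to the network. The Fuji pipeline differs because the X-Trans 6×6 pattern is packed into 9 channels instead.

```python
import numpy as np

def pack_raw_sony(bayer, black_level=512, white_level=16383, ratio=1.0):
    """Pack an RGGB Bayer mosaic into the 4-channel, half-resolution input
    the SID Sony network consumes (no demosaicking, no white balance).
    `ratio` is the exposure amplification factor chosen at inference time."""
    im = np.maximum(bayer.astype(np.float32) - black_level, 0)
    im = im / float(white_level - black_level)
    packed = np.stack([im[0::2, 0::2],   # R
                       im[0::2, 1::2],   # G1
                       im[1::2, 1::2],   # B
                       im[1::2, 0::2]],  # G2
                      axis=-1)
    return np.minimum(packed * ratio, 1.0)

# With rawpy you would obtain `bayer` roughly as:
#   raw = rawpy.imread('input.dng'); bayer = raw.raw_image_visible
# This differs from rawpy's postprocess(), which demosaics, white-balances,
# and color-corrects; the SID input skips all of those steps.
```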