Learning-to-See-in-the-Dark

The comparison with HDRnet

Open • butterl opened this issue 6 years ago • 5 comments

Hi, thanks for sharing this work; the results are awesome! @cchen156 I found the comparison with HDRnet on the project page. Did you use a pretrained HDRnet model from @mgharbi's repo, or did you train the comparison model yourself?

I want to reproduce the comparison, but there seems to be no night-related pretrained model available for HDRnet. Would you be willing to share the pretrained model used for the comparison? Thanks!

butterl avatar Jun 26 '18 02:06 butterl

I trained the model using our data. I saved the raw data into 16-bit PNG files and trained the model using their code.

cchen156 avatar Jun 26 '18 03:06 cchen156

@cchen156 What exactly is the format of the 16-bit PNG files you saved? Are they RGB or YUV? I do the following:

  1. Starting from the DNG raw format, subtract the black level.
  2. Apply the white balance channel gains.
  3. Demosaic to RGB.
  4. Apply lens shading correction.

Then I run the pretrained HDR+ model on the output produced above, i.e. a 16-bit TIFF input. However, it produces strange colors. (I believe my steps 1–3 are roughly what rawpy's post-processing does; a sketch is below.) Would you please explain your process in detail? Your kind help is very much appreciated.
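A rough rawpy equivalent of steps 1–3 (just a sketch; 'input.dng' is a placeholder path, and step 4, lens shading correction, is not applied):

```python
import cv2
import rawpy

# Roughly steps 1-3: black-level subtraction, camera white balance, demosaicking.
# Lens shading correction (step 4) is not applied by rawpy.
with rawpy.imread('input.dng') as raw:  # 'input.dng' is a placeholder path
    rgb16 = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)
cv2.imwrite('input_16bit.png', cv2.cvtColor(rgb16, cv2.COLOR_RGB2BGR))  # 16-bit PNG
```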

wzl2611 avatar Jun 26 '18 06:06 wzl2611

What I did is:

  1. Subtract the black level.
  2. Pack the Bayer raw data into RGB channels. The green channel is the average of the two green pixels in each 2×2 block.
  3. The input data has half the resolution due to the packing, so the ground truth is generated with half_size=True in rawpy post-processing.
  4. The data is saved in 16-bit PNG files. I did not include demosaicking for HDRnet, but the result is still not good enough. This is because HDRnet uses the guide image to upsample the coefficients, and that guide is very noisy in our case. (A sketch of this pipeline is below.)
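Roughly, in code (a minimal sketch rather than my exact script; it assumes an RGGB Bayer pattern and Sony levels, black level 512 and white level 16383, which you should instead read from rawpy's metadata in practice):

```python
import cv2
import numpy as np
import rawpy

def bayer_to_halfres_rgb(path, black=512, white=16383):
    """Steps 1-2: subtract the black level, pack 2x2 RGGB blocks into RGB."""
    with rawpy.imread(path) as raw:
        im = raw.raw_image_visible.astype(np.float32)
    im = np.maximum(im - black, 0) / (white - black)  # 1. subtract black level
    r = im[0::2, 0::2]                                # 2. one red per block,
    g = 0.5 * (im[0::2, 1::2] + im[1::2, 0::2])       #    average of the two greens,
    b = im[1::2, 1::2]                                #    one blue per block
    return np.stack([r, g, b], axis=2)                # half-resolution RGB

def halfres_ground_truth(path):
    """Step 3: ground truth at the same half resolution via rawpy."""
    with rawpy.imread(path) as raw:
        return raw.postprocess(use_camera_wb=True, half_size=True,
                               no_auto_bright=True, output_bps=16)

# Step 4: save both as 16-bit PNGs ('short.dng' / 'long.dng' are placeholder
# paths; cv2.imwrite expects BGR channel order).
inp = (np.clip(bayer_to_halfres_rgb('short.dng'), 0.0, 1.0) * 65535.0).astype(np.uint16)
cv2.imwrite('input_16bit.png', cv2.cvtColor(inp, cv2.COLOR_RGB2BGR))
gt = halfres_ground_truth('long.dng')
cv2.imwrite('gt_16bit.png', cv2.cvtColor(gt, cv2.COLOR_RGB2BGR))
```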

cchen156 avatar Jun 26 '18 17:06 cchen156

Hi @cchen156, thanks for the reply!
From your reply ("The data is saved in 16-bit PNG files"): the input images are RGB but custom-preprocessed?

I retrained your model and the result is perfect. What I'd prefer is to train and test an HDRnet model on the same dataset for a comparison.

I tried with 16-bit YUV but failed. Would you mind sharing your preprocessing script?

butterl avatar Jul 12 '18 08:07 butterl

Hi @cchen156, I have trained an HDRnet model (PSNR only 21 dB; it looks good most of the time but sometimes lacks contrast), and I want to use the same collected DNG test sets to compare HDRnet (my self-trained model) against See-in-the-Dark (the pretrained model). However, I'm not familiar with the raw preprocessing for your model's input, and the Sony and Fuji pipelines seem quite different, which puzzles me. Would you give a hand with preprocessing the DNGs to fit the See-in-the-Dark inference input, mainly the raw packing part? Is it any different from rawpy's postprocess? Thanks!
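For the Sony (Bayer) input, my current understanding of the packing is the sketch below: each 2×2 RGGB block is packed into four channels (R, G, B, G) after black-level subtraction, with no demosaicking. The black level 512 and white level 16383 are my assumptions for a 14-bit Sony sensor, and I believe the Fuji X-Trans pattern needs a different packing; please correct me if this differs from the actual model input:

```python
import numpy as np
import rawpy

def pack_bayer(path, black=512, white=16383):
    """Pack an RGGB Bayer raw into a 4-channel, half-resolution tensor.
    black/white are assumed values for a 14-bit Sony sensor."""
    with rawpy.imread(path) as raw:
        im = raw.raw_image_visible.astype(np.float32)
    im = np.maximum(im - black, 0) / (white - black)  # subtract black level, normalize
    return np.stack([im[0::2, 0::2],   # R
                     im[0::2, 1::2],   # G1
                     im[1::2, 1::2],   # B
                     im[1::2, 0::2]],  # G2
                    axis=2)
```

And I assume the packed input is also multiplied by the exposure amplification ratio (e.g. ×100 or ×300) before being fed to the network?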

butterl avatar Aug 28 '18 12:08 butterl