hdrnet_legacy
How to prepare tfrecords for training on HDR+?
Hi @mgharbi,
Thank you for the code and those pre-trained models. It's a great piece of work!
Recently, I've tried to reproduce the training of HDRNet on HDR+, but I could only reach a PSNR of ~21 dB. I think the problem is in my data preprocessing, since I didn't change any of the other code; I only used ImageFilesDataPipeline instead of HDRpDataPipeline.
For the data preprocessing, I converted the raw images to jpg using "dcraw -6 -w -g 1 1" and "convert". I noticed that you used prepared tfrecords for training. Would you mind sharing your code for preparing those records? Or could you tell me what might be wrong with my preprocessing?
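For context, this is roughly how I ran the conversion, wrapped in Python. The glob pattern and output format below are just an example of how one might keep the output 16-bit and linear (writing TIFFs with "-T" instead of converting to jpg); I'm not claiming this matches the authors' preprocessing.

```python
import glob
import subprocess

# Example only: decode each RAW file to a 16-bit *linear* TIFF with dcraw,
# instead of converting to an 8-bit gamma-encoded jpg.
#   -w       use the camera's white balance
#   -6       16-bit output
#   -g 1 1   linear "gamma" curve (no tone curve)
#   -T       write a TIFF next to the input instead of a PPM
# The "raw/*.dng" pattern is just a placeholder for wherever the RAW files live.
for raw_path in glob.glob("raw/*.dng"):
    subprocess.run(["dcraw", "-w", "-6", "-g", "1", "1", "-T", raw_path], check=True)
```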
Any feedback is highly appreciated.
I have the same problem as you. Did you solve it?
@wzl2611 Haven't yet. I'm trying to tune the hyper-parameters myself based on my own pre-processing.
@Awcrr I think you can refer to this blog: https://blog.csdn.net/csuzhaoqinghui/article/details/51377941. Could you leave your QQ number so we can communicate with each other?
@Awcrr The author said the pre-trained HDR+ model expects as input a specially formatted 16-bit linear input. In summary, starting from Bayer RAW:
1. Subtract the black level.
2. Apply the white-balance channel gains.
3. Demosaic to RGB.
4. Apply lens shading correction (aka vignetting correction).
I do not quite understand this, so what exactly is the format of the 16-bit linear input?
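As far as I understand it, those four steps amount to something like the sketch below (numpy/OpenCV, assuming an RGGB mosaic, a scalar black level, a precomputed lens-shading gain map, and OpenCV's bilinear demosaicing as a stand-in for whatever the HDR+ pipeline actually uses). This is only my reading of the quoted steps, not the authors' code.

```python
import numpy as np
import cv2  # used here only for a simple bilinear demosaic


def raw_to_linear16(bayer, black_level, wb_gains, shading_gain):
    """Rough sketch of the quoted steps, not the authors' actual pipeline.

    bayer        : HxW uint16 Bayer mosaic (RGGB layout assumed)
    black_level  : scalar black level (per-channel levels are also possible)
    wb_gains     : (r_gain, g_gain, b_gain) white-balance gains
    shading_gain : HxWx3 lens-shading gain map, 1.0 means no correction
    """
    x = bayer.astype(np.float32) - black_level            # 1. subtract black level
    x = np.clip(x, 0.0, None)

    # 2. white-balance gains, applied on the mosaic (RGGB layout assumed)
    x[0::2, 0::2] *= wb_gains[0]   # R
    x[0::2, 1::2] *= wb_gains[1]   # G (red rows)
    x[1::2, 0::2] *= wb_gains[1]   # G (blue rows)
    x[1::2, 1::2] *= wb_gains[2]   # B

    # 3. demosaic to RGB; the Bayer-code constant must match the sensor layout
    mosaic16 = np.clip(x, 0, 65535).astype(np.uint16)
    rgb = cv2.cvtColor(mosaic16, cv2.COLOR_BayerRG2RGB).astype(np.float32)

    # 4. lens shading (vignetting) correction
    rgb *= shading_gain

    # keep everything linear (no gamma / tone curve) and store as 16 bit
    return np.clip(rgb, 0, 65535).astype(np.uint16)
```

Even with something like this, the result would still need whatever scaling and packing HDRpDataPipeline expects when it reads the tfrecords, and that is exactly the part that isn't documented.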