XyChen
You can send the input image and the corresponding reconstructed image, along with the align ratio, to this e-mail: `[email protected]`. I will check it in my free time.
Yes. You can run experiments to validate it, or inspect PyTorch's implementation of the KL loss if you want.
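In case it helps, here is a minimal sketch of inspecting that behavior with PyTorch's `nn.KLDivLoss`; the tensor shapes and values are only illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# nn.KLDivLoss expects log-probabilities as input and probabilities
# as target; shapes here are illustrative.
kl_loss = nn.KLDivLoss(reduction='batchmean')

logits = torch.randn(8, 10)  # a batch of 8 predictions over 10 classes
target = torch.randn(8, 10)

log_p = F.log_softmax(logits, dim=1)  # input must be log-probabilities
q = F.softmax(target, dim=1)          # target must be probabilities

loss = kl_loss(log_p, q)
print(loss.item())
```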
@hahahaprince In a sense, the Tanh_L1 loss is designed for normalization.
@ittim4 In this repo, HDR values are not processed into [0, 1] in the data reading phase, but you can think of the **Tanh** function used in the **Tanh_L1** loss function as performing that normalization.
@ittim4 `GT_linear = GT_aligned ^ gamma`. A linear signal is not strictly related to the nit value for display-referred data.
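A one-line NumPy sketch of that relation; the exact `gamma` value is an assumption here, not something this thread specifies:

```python
import numpy as np

gamma = 2.24  # placeholder exponent; use the value the dataset specifies

def linearize(gt_aligned: np.ndarray) -> np.ndarray:
    """GT_linear = GT_aligned ^ gamma, applied element-wise."""
    return np.power(gt_aligned, gamma)
```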
@ittim4 Q1: The parameters of the tone mapping algorithm are the defaults provided by the organizer. You can also use the 100th percentile as the `norm_perc` value. Q2: The comments are also provided...
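For Q1, a hedged sketch of what percentile-based normalization followed by μ-law tone mapping typically looks like in NTIRE HDR challenge code; the `mu` value and function names are assumptions, not this repo's exact script. Note that `norm_perc=100` simply normalizes by the maximum value:

```python
import numpy as np

def mu_tonemap(hdr: np.ndarray, mu: float = 5000.0) -> np.ndarray:
    """Classic mu-law tone mapping curve."""
    return np.log(1.0 + mu * hdr) / np.log(1.0 + mu)

def percentile_norm_tonemap(hdr: np.ndarray, norm_perc: float = 99.0,
                            mu: float = 5000.0) -> np.ndarray:
    """Normalize by the norm_perc percentile, then tone-map.

    norm_perc=100 normalizes by the maximum, as mentioned above.
    """
    norm_value = np.percentile(hdr, norm_perc)
    return mu_tonemap(hdr / norm_value, mu)
```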
@UdonDa The range of `real_H` is [0, M] after it is normalized by the alignratio value. When the L1 loss is calculated, `fake_H` and `real_H` are processed by the `Tanh` function...
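A minimal sketch of the computation described above (the class and variable names are illustrative, not the repo's exact code): both tensors are squashed by `tanh` before the L1 distance is taken, so `real_H` does not need to be pre-scaled into [0, 1].

```python
import torch
import torch.nn as nn

class TanhL1Loss(nn.Module):
    """Sketch of Tanh_L1: squash both tensors with tanh, then take L1."""
    def forward(self, fake_H: torch.Tensor, real_H: torch.Tensor) -> torch.Tensor:
        return torch.mean(torch.abs(torch.tanh(fake_H) - torch.tanh(real_H)))

# real_H comes from a 16-bit image divided by its alignratio,
# so its range is [0, M] rather than [0, 1].
```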

@zhangqizky Please refer to this [file](https://github.com/chxy95/HDRTVNet/blob/main/video_links.txt) for the links. BTW, the HDR images are not in a wrong color gamut, but they need to be encoded into video format for correct...
@zhangqizky The 174 images were extracted from the original testing videos at one frame every two seconds; we then deleted the duplicated scenes.
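Purely as a sketch of that sampling rate (not the actual pipeline; OpenCV decodes frames to 8-bit, so real HDR extraction would need a different decoder), grabbing one frame every two seconds could look like this:

```python
import cv2

def extract_frames(video_path: str, out_pattern: str = 'frame_{:04d}.png',
                   interval_sec: float = 2.0) -> int:
    """Save one frame every `interval_sec` seconds; returns the count saved.

    Illustrative only: OpenCV decodes to 8-bit, so this captures the
    sampling logic, not the 10-bit HDR decoding the dataset needs.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, int(round(fps * interval_sec)))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```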