How do you get ground truth?

claroche-gpfw-zz opened this issue 4 years ago • 11 comments

Hi,

Thanks a lot for the code!

In the paper and on the project page, you compare the estimated kernels with the ground-truth kernels, but how do you get the ground truth?

Thanks!

claroche-gpfw-zz · Jul 30 '20 14:07

We created a synthetic dataset named DIV2KRK by downscaling HR images with random kernels. These random kernels are the GT kernels we compare against. For more info regarding DIV2KRK, see the paper.

sefibk · Jul 30 '20 18:07
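
For intuition, the degradation behind such a dataset can be sketched as follows. This is a minimal sketch using a random anisotropic Gaussian kernel; the kernel size and parameter ranges here are illustrative assumptions, not the exact DIV2KRK generation code:

```python
import numpy as np
from scipy.ndimage import convolve

def random_gaussian_kernel(size=13, rng=None):
    # Sample a random anisotropic Gaussian: two random axis lengths
    # plus a random rotation angle (ranges here are illustrative).
    rng = np.random.default_rng() if rng is None else rng
    lam1, lam2 = rng.uniform(0.6, 5.0, size=2)
    theta = rng.uniform(0.0, np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    cov_inv = np.linalg.inv(rot @ np.diag([lam1, lam2]) @ rot.T)
    ax = np.arange(size) - size // 2
    grid = np.stack(np.meshgrid(ax, ax), axis=-1)          # (size, size, 2)
    k = np.exp(-0.5 * np.einsum('...i,ij,...j->...', grid, cov_inv, grid))
    return k / k.sum()                                     # normalize to sum 1

def degrade(hr, kernel, scale=2):
    # LR = (HR convolved with the kernel), subsampled by `scale`.
    return convolve(hr, kernel, mode='reflect')[::scale, ::scale]
```

The kernel sampled here is what plays the role of the GT kernel: it is known to the dataset creator but hidden from the algorithm, which only ever sees the degraded LR image.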

Thanks for your quick answer.

You mention that you created a paired synthetic dataset by choosing random Gaussian kernels, and that each such kernel serves as the ground truth.

If you train KernelGAN by taking patches from the HR image as input, and patches (2 or 4 times smaller) from the synthetically degraded LR image as the real space of the discriminator, I agree that it will estimate the synthetic down-sampling kernel, so you can have ground truth. However, in the case of KernelGAN, you only use the HR images, taking patches of size n as the input space and patches of size n/sf as the output space. In this case, I do not understand how you can get ground truth.

Is it the first case you use, or am I missing something?

Thanks

claroche-gpfw-zz · Jul 31 '20 10:07

Sorry, but you are "missing something"... All we take are the LR images, which were generated from the HR images using random kernels. Both the HR image and the kernel are UNKNOWN to KernelGAN; it trains solely on the LR image and downscales the LR to LR/2. In doing so, it tries to recover the synthetic kernel that relates the original HR to the LR it got as input. After estimating that kernel, ZSSR performs SR to try to recover the original HR. Hope this clears things up.

sefibk · Jul 31 '20 10:07
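
The training setup described above can be sketched roughly as follows. This is a minimal illustration, not the actual KernelGAN code: the deep linear generator is collapsed into a single convolution with a candidate kernel, and the image and patch sizes are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def generator(lr, kernel, scale=2):
    # Stand-in for KernelGAN's deep linear generator: blur the LR image
    # with the current candidate kernel, then subsample by `scale`.
    return convolve(lr, kernel, mode='reflect')[::scale, ::scale]

def crop(img, size, rng):
    # Random square patch, used for both "real" and "fake" examples.
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    return img[y:y + size, x:x + size]

rng = np.random.default_rng(0)
lr = rng.random((128, 128))        # the single LR image KernelGAN trains on
kernel = np.ones((5, 5)) / 25.0    # placeholder candidate kernel

lr_half = generator(lr, kernel)    # LR downscaled to LR/2
real = crop(lr, 32, rng)           # "real": a patch of the LR image itself
fake = crop(lr_half, 32, rng)      # "fake": a patch of the generator output
# The discriminator is trained to distinguish `real` from `fake`; when it
# cannot, the candidate kernel reproduces the LR image's cross-scale patch
# statistics, i.e. the kernel that related the (unseen) HR to the LR.
```

Note that the HR image and the GT kernel never appear here; they are only used afterwards, by the dataset creator, to score the estimate.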

Yes, it is clear now. Thanks a lot!

claroche-gpfw-zz · Jul 31 '20 10:07

Hello sefibk. Thank you for your code. May I ask whether the GT kernels are generated by the https://github.com/assafshocher/BlindSR_dataset_generator code? Also, if I want to estimate kernels for the DPED dataset, can I directly feed the images into train.py without any preprocessing?

hcleung3325 · Jan 11 '21 06:01

Yes - the GT kernels are from that repo. I am not familiar with the DPED dataset - care to share a link?

sefibk · Jan 11 '21 07:01

Thanks. For the input images in train.py, do I need to use the GT kernel to blur the image first? Say I generate a kernel A.mat from image_A.png: should I only apply A.mat to blur image_A.png to get the LR, and then feed that to my SR algorithm?

hcleung3325 · Jan 11 '21 07:01

Sorry, I can't understand. What are you trying to do?

sefibk · Jan 11 '21 07:01

Thanks. I want to use a dataset to estimate kernels. Say I estimate a kernel A.mat from image_A.png with KernelGAN: should I only apply A.mat to blur image_A.png to get the LR, and then feed that to my SR algorithm?

hcleung3325 · Jan 11 '21 07:01

Yes, you can do that. KernelGAN estimates the kernel that relates the input image to its SR image. So given an image plus a kernel (estimated by KernelGAN), if you feed them to your SR algorithm, it should produce the real SR image.

sefibk · Jan 11 '21 08:01
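
The blur-then-feed pipeline discussed above might look like the sketch below. The file names come from the conversation; the `Kernel` key for the saved .mat file is an assumption and should be checked against KernelGAN's actual output:

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.io import loadmat  # for kernels saved by KernelGAN as .mat files

def make_lr(image, kernel, scale=2):
    # LR = (image convolved with the kernel), subsampled by `scale`.
    # The resulting (LR, image) pair can serve as training data for SR.
    return convolve(image, kernel, mode='reflect')[::scale, ::scale]

# Usage sketch (names from the discussion; the 'Kernel' key is an
# assumption -- inspect the .mat file to confirm):
#   kernel = loadmat('A.mat')['Kernel']
#   image_a = ...  # load image_A.png as a float array
#   lr_a = make_lr(image_a, kernel)
```

Since the estimated kernel is specific to the image it was estimated from, applying A.mat to image_A.png is the setting the answer above refers to.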

Thank you very much. Can I use the kernel A.mat (estimated by KernelGAN) to blur image_B.png? Also, compared to the GT-kernel method (https://github.com/assafshocher/BlindSR_dataset_generator), what are the advantages of KernelGAN?

hcleung3325 · Jan 11 '21 09:01