contextual_loss_pytorch
About Contextual Bilateral Loss
Is the implementation of Contextual Bilateral Loss based on "zoom-learn-zoom"? https://github.com/ceciliavision/zoom-learn-zoom
Yes, it is mentioned in the README. However, CoBi is still a work in progress (WIP).
Thank you for the reply. Can it be used directly now? I want to use it in my own code.
When I use the Contextual Loss, I easily run out of memory (OOM), even though my batch size is only 1. The input to my x4 SR model is 128×128. Can you give me some advice?
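(For context on why OOM is easy to hit even at batch size 1: contextual-style losses compare every feature vector in one image against every feature vector in the other, so the pairwise similarity matrix grows with the square of the number of spatial positions. A minimal back-of-the-envelope sketch, not taken from this repository; the function name and numbers are illustrative only:)

```python
def sim_matrix_bytes(h, w, dtype_bytes=4):
    """Hypothetical estimate: bytes needed for one float32 pairwise
    similarity matrix between two h*w feature maps. There are h*w
    feature vectors per map, so the matrix has (h*w)**2 entries."""
    n = h * w
    return n * n * dtype_bytes

# At full 128x128 resolution the matrix alone is 1 GiB per image pair...
full = sim_matrix_bytes(128, 128)   # 16384**2 * 4 bytes = 1 GiB
# ...while a 64x64 feature map (e.g. a deeper VGG layer) needs 16x less.
down = sim_matrix_bytes(64, 64)     # 4096**2 * 4 bytes = 64 MiB
print(full, down, full // down)
```

This is one reason such losses are usually computed on downsampled feature maps or random subsets of positions rather than on full-resolution features.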
This implementation is written around the basic mini-batching concept. In other words, all images or features in a mini-batch are allocated on the GPU at the same time, which causes OOM. The original implementation computes distances for each single image or feature and then aggregates them. To avoid OOM, I should follow the original strategy, at the cost of some code elegance.
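(The per-image strategy described above can be sketched as follows. This is a hypothetical numpy illustration of the memory-saving loop, not the repository's actual code; `cx_like_loss_single` is a stand-in for the full contextual-loss computation on one image's features:)

```python
import numpy as np

def cx_like_loss_single(x, y):
    """Distance-based loss for ONE image: x, y are (N, C) arrays of
    feature vectors. Builds a single (N, N) distance matrix, which is
    the only large allocation alive at any time."""
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # (N, N) sq. distances
    return d.min(axis=1).mean()  # placeholder aggregation

def cx_like_loss(batch_x, batch_y):
    """Per-image loop: instead of allocating B distance matrices at once
    (the batched approach that OOMs), compute one image at a time and
    average the results, as the original implementation does."""
    losses = [cx_like_loss_single(x, y) for x, y in zip(batch_x, batch_y)]
    return float(np.mean(losses))
```

Peak memory then scales with one distance matrix instead of the whole batch, at the cost of a Python-level loop over images.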
Oh, I missed that your batch size is 1. This seems to be a different problem. Could you share the code relevant to the loss computation?
When I train ESRGAN using only Contextual Bilateral Loss (without L1/perceptual/GAN losses), all inference results show water-ripple-like artifacts in smooth areas. Do you have any idea how these artifacts might arise?
@conson0214 Please open a new issue for it.