YannikYang1
Hello, in the author's `loss.py`, the return of `tv_loss` is `self.tv_loss_weight * 2 * (h_tv[:, :, :h_x - 1, :w_x - 1] + w_tv[:, :, :h_x - 1, :w_x - 1])`, which...
> Sorry, it has been so long that I forgot the details of this paper. I remember that the square of the gradient image gave better training results and avoided values less than zero.
> 1. I think it is a computational trick where `h_tv` and `w_tv` will keep the same shape.

Thank you so much for your reply, I'm full of gratitude, and...
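A quick PyTorch check of that shape argument (a throwaway sketch; the tensor sizes are arbitrary, and `h_x`/`w_x` follow the thread's naming):

```python
import torch

x = torch.randn(2, 3, 8, 8)  # B, C, H, W
h_x, w_x = x.size(2), x.size(3)

h_tv = torch.pow(x[:, :, 1:, :] - x[:, :, :h_x - 1, :], 2)  # (2, 3, 7, 8)
w_tv = torch.pow(x[:, :, :, 1:] - x[:, :, :, :w_x - 1], 2)  # (2, 3, 8, 7)

# Cropping both to [:h_x - 1, :w_x - 1] gives matching (2, 3, 7, 7) shapes,
# so they can be added elementwise in the return expression quoted above.
print(h_tv[:, :, :h_x - 1, :w_x - 1].shape)  # torch.Size([2, 3, 7, 7])
print(w_tv[:, :, :h_x - 1, :w_x - 1].shape)  # torch.Size([2, 3, 7, 7])
```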
> `tv_loss = self.mse_loss(self.tv_loss(out_images), (self.tv_loss(target_images) + self.tv_loss(target_ir)))`, where `self.tv_loss` computes the sum of the x- and y-axis image gradients. `tv_loss` is designed to keep the gradients of the fused output consistent with the combined gradients of the two targets.
> ```
> h_tv = torch.pow((x[:, :, 1:, :] - x[:, :, :h_x - 1, :]), 2)  # ---> h_tv (B×C×(H-1)×W)
> w_tv = torch.pow((x[:, :, :, 1:] - x[:, :, :, :w_x - 1]), 2)  # ---> w_tv (B×C×H×(W-1))
> ```
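Putting the fragments together, a self-contained sketch of the module as it appears from the lines quoted in this thread (the class name, the `tv_loss_weight` default, the dummy tensors in the usage, and the visible/infrared reading of `target_images`/`target_ir` are assumptions; the gradient lines and the return expression come from the thread itself):

```python
import torch
import torch.nn as nn


class TVLoss(nn.Module):
    """Per-pixel sum of squared x- and y-axis image gradients."""

    def __init__(self, tv_loss_weight=1.0):
        super().__init__()
        self.tv_loss_weight = tv_loss_weight

    def forward(self, x):
        h_x, w_x = x.size(2), x.size(3)
        # Squared vertical and horizontal gradients.
        h_tv = torch.pow(x[:, :, 1:, :] - x[:, :, :h_x - 1, :], 2)  # (B, C, H-1, W)
        w_tv = torch.pow(x[:, :, :, 1:] - x[:, :, :, :w_x - 1], 2)  # (B, C, H, W-1)
        # Crop both maps to (B, C, H-1, W-1) so they can be added elementwise.
        # Note this returns a gradient map, not a scalar, which is what lets
        # the MSE comparison below work.
        return self.tv_loss_weight * 2 * (
            h_tv[:, :, :h_x - 1, :w_x - 1] + w_tv[:, :, :h_x - 1, :w_x - 1]
        )


# Usage matching the loss quoted above: push the fused output's gradient map
# toward the sum of the two targets' gradient maps.
tv_loss = TVLoss()
mse_loss = nn.MSELoss()

out_images = torch.randn(2, 3, 64, 64)     # fused output (dummy)
target_images = torch.randn(2, 3, 64, 64)  # visible target (dummy)
target_ir = torch.randn(2, 3, 64, 64)      # infrared target (dummy)

loss = mse_loss(tv_loss(out_images),
                tv_loss(target_images) + tv_loss(target_ir))
```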