Wasserstein-AutoEncoders
backward MMD
Hello Schelotto
I have a short question: why

```python
total_loss = recon_loss - mmd_loss
total_loss.backward()
```

and not `+ mmd_loss`, since both terms should be minimized in the optimization step?
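For reference, here is a minimal sketch of the loss composition as I would have expected it (the names `mmd_rbf`, `lambda_reg`, and the dummy tensors are placeholders I made up for illustration, not the repo's code):

```python
import torch
import torch.nn.functional as F

def mmd_rbf(z_q, z_p, sigma=1.0):
    # Placeholder MMD^2 estimate with an RBF kernel (simple V-statistic version).
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))
    return kernel(z_q, z_q).mean() + kernel(z_p, z_p).mean() - 2 * kernel(z_q, z_p).mean()

# Dummy tensors standing in for the input batch, decoder output, encoded codes,
# and prior samples, just to make the snippet runnable.
x = torch.randn(8, 784)
x_recon = torch.randn(8, 784, requires_grad=True)
z_q = torch.randn(8, 2, requires_grad=True)
z_p = torch.randn(8, 2)
lambda_reg = 10.0

recon_loss = F.mse_loss(x_recon, x)
mmd_loss = mmd_rbf(z_q, z_p)
total_loss = recon_loss + lambda_reg * mmd_loss  # '+' here, rather than the '-' in the repo
total_loss.backward()
```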
A small remark about the RBF kernel: PyTorch was throwing me an error because of an in-place modification before backward. Maybe it is worth changing `res1 += torch.exp(-C * dists_y)` to `res1 = res1 + torch.exp(-C * dists_y)`?
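A minimal repro of what I saw (not the repo's exact kernel code, just the accumulation pattern):

```python
import torch

z = torch.randn(8, 2, requires_grad=True)
dists_y = torch.cdist(z, torch.randn(8, 2)) ** 2
C = 2.0

# torch.exp saves its output for the backward pass, so modifying res1 in place
# afterwards raises "a variable needed for gradient computation has been
# modified by an inplace operation" once backward() is called.
res1 = torch.exp(-C * dists_y)
# res1 += torch.exp(-C * dists_y)        # in-place version that errored for me
res1 = res1 + torch.exp(-C * dists_y)    # out-of-place version avoids the error

res1.sum().backward()
```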
Lastly, has the previously posted issue on the IMQ kernel been solved?
Thanks!
Adrien