PyTorch-GAN
PyTorch implementations of Generative Adversarial Networks.
We don't need `clip_value` for this version of WGAN
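For context on why weight clipping can be dropped: the WGAN-GP variant enforces the Lipschitz constraint with a gradient penalty on interpolated samples instead of clipping the critic's weights. A minimal sketch (the names `D` and `lambda_gp` are illustrative, not tied to the repo's code):

```python
import torch
import torch.nn as nn

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Interpolate between real and fake samples.
    alpha = torch.rand(real.size(0), 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_interp = D(interp)
    # Gradient of the critic's output w.r.t. the interpolated input.
    grads = torch.autograd.grad(
        outputs=d_interp, inputs=interp,
        grad_outputs=torch.ones_like(d_interp),
        create_graph=True)[0]
    # Penalize deviation of the gradient norm from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

D = nn.Linear(4, 1)  # toy critic
gp = gradient_penalty(D, torch.randn(8, 4), torch.randn(8, 4))
print(gp.item() >= 0.0)  # True: the penalty is a non-negative mean of squares
```

With this penalty added to the critic loss, the `clip_value` step becomes unnecessary.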
https://github.com/eriklindernoren/PyTorch-GAN/blob/a163b82beff3d01688d8315a3fd39080400e7c01/implementations/pix2pix/datasets.py#L26
```python
loss_c_1 = lambda_cont * criterion_recon(c_code_12, c_code_1.detach())
loss_c_2 = lambda_cont * criterion_recon(c_code_21, c_code_2.detach())
```
Could someone tell me why `detach()` is used here?
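One way to see what `detach()` changes: it turns the target code into a constant, so the reconstruction loss only sends gradients through the branch that produced the reconstruction, not through the encoder that produced the target. A toy sketch (the two `nn.Linear` "encoders" are stand-ins, not the repo's actual modules):

```python
import torch
import torch.nn as nn

enc1 = nn.Linear(4, 4)  # stand-in for the encoder producing the target code
enc2 = nn.Linear(4, 4)  # stand-in for the branch producing the reconstruction
criterion_recon = nn.L1Loss()

x = torch.randn(2, 4)
c_code_1 = enc1(x)    # target content code
c_code_12 = enc2(x)   # reconstructed content code

# With .detach(), the target is a constant: gradients flow only
# through c_code_12, so only enc2 is updated by this loss.
loss = criterion_recon(c_code_12, c_code_1.detach())
loss.backward()

print(enc2.weight.grad is not None)  # True: enc2 receives gradients
print(enc1.weight.grad is None)      # True: enc1 is untouched by this loss
```

Without the `detach()`, the loss could also be reduced by pushing the target code toward the reconstruction, which is usually not the intended training signal.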
https://github.com/eriklindernoren/PyTorch-GAN/blob/a163b82beff3d01688d8315a3fd39080400e7c01/implementations/acgan/acgan.py#L100 Later on, cross entropy is applied to this output; however, `nn.CrossEntropyLoss` expects raw logits, and this output has already been through a softmax. Maybe change it to NLL loss?
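To make the logits/NLL distinction concrete: `nn.CrossEntropyLoss` applies log-softmax internally, so it must be fed raw scores. If the network already ends in a softmax, the equivalent loss is `nn.NLLLoss` on the log of those probabilities. A small check of the equivalence:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 10)           # raw, unnormalized scores
targets = torch.randint(0, 10, (4,))

# Option A: raw logits + CrossEntropyLoss (log-softmax applied internally).
loss_ce = nn.CrossEntropyLoss()(logits, targets)

# Option B: if the model's head is a Softmax, take the log and use NLLLoss.
probs = torch.softmax(logits, dim=1)
loss_nll = nn.NLLLoss()(torch.log(probs), targets)

print(torch.allclose(loss_ce, loss_nll, atol=1e-6))  # True
```

Feeding softmax probabilities straight into `CrossEntropyLoss` effectively applies softmax twice, which flattens the distribution and weakens the gradient signal.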
@eriklindernoren Is there any reason why L1 loss is used here instead of MSE loss for the pixelwise term? And why does it have to be multiplied by 0.999?
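On the L1-vs-MSE part of the question, one common argument is gradient behavior: the MSE gradient scales with the error, so a few large-error pixels dominate the update, while the L1 gradient has constant magnitude per pixel, which is often associated with sharper, less blurry outputs. A toy illustration (values chosen only to show the contrast):

```python
import torch
import torch.nn as nn

pred = torch.zeros(3, requires_grad=True)
target = torch.tensor([0.1, 0.1, 3.0])  # one "outlier" pixel

# MSE: gradient grows linearly with the error, so the outlier dominates.
nn.MSELoss()(pred, target).backward()
mse_grad = pred.grad.clone()

# L1: gradient magnitude is constant, so every pixel pulls equally hard.
pred.grad = None
nn.L1Loss()(pred, target).backward()
l1_grad = pred.grad.clone()

print(mse_grad)  # the outlier entry is ~30x the others
print(l1_grad)   # every entry has magnitude 1/3
```

This doesn't answer the 0.999 factor, which looks like a weighting choice specific to the implementation.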
SRGAN: why use detach() when calculating content loss, given that the VGG model weights are already frozen?
The code snippet for the content loss is:
```python
# Content loss
gen_features = feature_extractor(gen_hr)
real_features = feature_extractor(imgs_hr)
loss_content = criterion_content(gen_features, real_features.detach())
```
I don't understand why you've used detach (and also...
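One observation that bears on this question: freezing the VGG weights alone does not stop autograd from building a graph if the *input* requires grad; it is the combination of frozen weights and grad-free real images that leaves `real_features` detached already. A small experiment (a tiny conv stack stands in for the VGG extractor):

```python
import torch
import torch.nn as nn

vgg = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())  # stand-in for the VGG extractor
for p in vgg.parameters():
    p.requires_grad = False  # frozen, as in the repo

imgs_hr = torch.randn(1, 3, 16, 16)  # real images carry no grad
real_features = vgg(imgs_hr)

# With every input and parameter grad-free, the output is already detached:
print(real_features.requires_grad)  # False
```

So in this setup the explicit `detach()` is a cheap safeguard: it guarantees the target branch never receives gradients even if the input or the extractor's grad settings change later.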
Hi! I don't quite understand why the dataset for discogan consists of paired images - the description claims that discogan can discover cross-domain identities in unpaired data. Maybe I'm misinterpreting...
It raises `RuntimeError: cannot join current thread` when the whole training process ends. Nothing was changed in the code.
```python
save_image(img_sample, "images/%s/%s.png" % (opt.dataset_name, batches_done), nrow=8, normalize=True)
```
gives an error, so I just removed the `normalize=True` part:
```python
save_image(img_sample, "images/%s/%s.png" % (opt.dataset_name, batches_done), nrow=8)
```
Alternatively specify a version of...
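A version-independent workaround, instead of dropping the normalization entirely, is to min-max scale the batch yourself before saving; this mirrors what `make_grid`'s `normalize=True` does by default (min-max over the whole tensor) without relying on `save_image` forwarding the kwarg. The batch below is a random stand-in for `img_sample`:

```python
import torch

img_sample = torch.randn(16, 3, 32, 32)  # stand-in for the real img_sample batch

# Min-max scale to [0, 1]; save_image can then be called without normalize=True.
lo, hi = img_sample.min(), img_sample.max()
img_norm = (img_sample - lo) / (hi - lo + 1e-8)

print(img_norm.min().item() >= 0.0 and img_norm.max().item() <= 1.0)  # True
```

Saving `img_norm` with a plain `save_image(img_norm, path, nrow=8)` then produces correctly scaled images on any torchvision version.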