Update tips for WGANs
Wasserstein GANs are supposed to fix some of the challenges of training GANs:
- No more vanishing gradients!
- No more mode collapse!
- Loss function is more meaningful.
Some of the tricks in here seem incompatible with, or no longer relevant for, WGANs. Does anyone have experience with WGANs and can comment on which tricks are still applicable and which ones are not? For reference, I've pasted a rough sketch of the WGAN training loop below.
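For context, my understanding of the basic WGAN recipe, compared to a standard GAN, is roughly the following (just a sketch, assuming PyTorch; the tiny models, random data, and loop length are placeholders, and the hyperparameters are the paper's defaults):

```python
# Rough sketch of the original WGAN training loop (Arjovsky et al.), assuming
# PyTorch. The tiny MLPs and the random "real" data are placeholders just to
# keep the example self-contained; hyperparameters are the paper's defaults.
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 64, 784, 32
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
critic = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))  # no sigmoid

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)  # RMSprop instead of Adam
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
n_critic, clip_value = 5, 0.01

for step in range(100):
    # Train the critic several times per generator step
    for _ in range(n_critic):
        real = torch.randn(batch_size, data_dim)            # stand-in for a real batch
        fake = generator(torch.randn(batch_size, latent_dim)).detach()
        loss_c = critic(fake).mean() - critic(real).mean()  # Wasserstein critic loss: no logs
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()
        for p in critic.parameters():                       # weight clipping enforces the
            p.data.clamp_(-clip_value, clip_value)          # Lipschitz constraint

    # Generator step: raise the critic's score on generated samples
    loss_g = -critic(generator(torch.randn(batch_size, latent_dim))).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Since the critic outputs an unbounded score rather than a probability, several of the loss- and label-related tricks look like they no longer make sense, hence the question.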
Thanks for a great repo!
Just found this related issue, for reference: https://github.com/igul222/improved_wgan_training/issues/44
+1
The tricks described in the improved version of the Wasserstein objective (WGAN-GP) work. I can't speak to the original WGAN because I went straight to the improved version, and some of the tricks here don't apply to it. For more information, take a look at the paper: https://arxiv.org/pdf/1704.00028.pdf
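To be concrete, the main change in the improved version is to drop weight clipping and instead add a gradient penalty to the critic loss. A minimal sketch of that term (assuming PyTorch; `critic`, `real`, and `fake` are placeholders from whatever training loop you already have, and `lambda_gp=10` is the paper's default) looks something like this:

```python
# Minimal sketch of the gradient penalty from the improved (WGAN-GP) objective,
# assuming PyTorch; `critic`, `real`, and `fake` are placeholders supplied by
# the surrounding training loop.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic's gradient norm at points interpolated between real and fake samples."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0].view(real.size(0), -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Critic loss in WGAN-GP: the Wasserstein term plus the penalty, no weight clipping
# loss_c = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
```

One side effect is that batch norm in the critic doesn't play well with the per-sample penalty, so the paper uses layer norm there instead, which is another place where the usual GAN tricks change.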
I agree that it would be nice to have an update on this.