progressive-gan-pytorch
SpectralNorm
First of all, thanks for sharing! Very interesting.
I see that you experimented with SpectralNorm as well. Can you share your insights on the effect it had on training?
Hello,
I remember that using Spectral Normalization with hinge loss behaved similarly to WGAN loss with gradient penalty. But I didn't test it extensively or measure scores like FID.
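For reference, a rough sketch of the two discriminator objectives I mean (not this repo's code; `d_real`/`d_fake` stand for raw discriminator outputs on real and generated batches):

```python
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # Hinge loss, commonly paired with Spectral Normalization (as in SAGAN).
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def d_wgan_loss(d_real, d_fake):
    # WGAN critic loss; the gradient penalty term is added separately.
    return d_fake.mean() - d_real.mean()
```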
Recently I tried Spectral Normalization with Self-Attention GAN. It stabilizes training and is fast, but it seems to penalize the discriminator quite a bit. So it was hard to use a 1:1 training schedule with Spectral Norm as in the Progressive GAN paper. (It seems to be quite dependent on the model structure, even when Spectral Norm is applied to both the generator and the discriminator.)
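In case it helps, this is roughly how I apply it, using PyTorch's built-in `torch.nn.utils.spectral_norm`; the recursive helper and the `netD` name are just illustrative, not this repo's actual code:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def add_spectral_norm(module):
    # Recursively wrap every conv/linear layer with spectral normalization.
    for name, child in module.named_children():
        if isinstance(child, (nn.Conv2d, nn.Linear)):
            setattr(module, name, spectral_norm(child))
        else:
            add_spectral_norm(child)
    return module

# e.g. netD = add_spectral_norm(netD)
```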
Anyway, I will experiment more with using Spectral Norm in Progressive GAN. I hope I can share more results soon.
Thank you!
Thanks for the detailed answer!
I was actually hoping that, as the paper claims, there would be no need for spectral normalization or gradient penalty, thanks to the use of pixel norm, LR "equalization", and the simplified minibatch statistics method.
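To make sure we mean the same things, here is a rough sketch of two of those tricks as I understand them from the paper; the class names and exact details are mine, not the reference code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelNorm(nn.Module):
    # Normalize each pixel's feature vector to unit average magnitude.
    def forward(self, x, eps=1e-8):
        return x / torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + eps)

class EqualizedConv2d(nn.Module):
    # "Equalized learning rate": init weights from N(0, 1) and apply the
    # per-layer He scaling constant at runtime instead of at initialization.
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.scale = (2.0 / (in_ch * kernel_size ** 2)) ** 0.5
        self.padding = padding

    def forward(self, x):
        return F.conv2d(x, self.weight * self.scale, self.bias, padding=self.padding)
```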
I think I'll try to reproduce their results as well, to get a good feeling of the topic.
Actually, they did use gradient penalty. You can also refer to their reference implementation.
Oh, I was not aware of that.
Note - I mostly looked at the original Theano implementation, as it was the one that generated the results they demonstrated, but I failed to spot the gradient penalty there:
https://github.com/tkarras/progressive_growing_of_gans/tree/original-theano-version
Seems like it is here: https://github.com/tkarras/progressive_growing_of_gans/blob/original-theano-version/train.py#L336
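For anyone reading along, this is roughly what that term does, rewritten as a short PyTorch sketch (the reference code is Theano; variable names here are my own):

```python
import torch

def gradient_penalty(discriminator, real, fake, weight=10.0):
    # Interpolate between real and generated samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    d_interp = discriminator(interp)
    # Gradient of the critic output w.r.t. the interpolated input.
    grads = torch.autograd.grad(outputs=d_interp.sum(), inputs=interp,
                                create_graph=True)[0]
    # Penalize deviation of the gradient norm from 1 (WGAN-GP).
    return weight * ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
```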
Yes, you are correct. Sorry, I misinterpreted the code - thanks! :)