Self-Attention-GAN-Tensorflow
Simple Tensorflow implementation of "Self-Attention Generative Adversarial Networks" (SAGAN)
Hello author! Thanks for your code! In this function: `def hw_flatten(x): return tf.reshape(x, shape=[x.shape[0], -1, x.shape[-1]])` my `x.shape[0]` is None; how can I change the code?
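One possible workaround (a minimal sketch, not necessarily the author's intended fix) is to read the batch size at run time with `tf.shape` instead of relying on the static shape, and keep only the channel count static:

```python
import tensorflow as tf

def hw_flatten(x):
    # When the static batch size is None, read it dynamically with tf.shape
    # and let -1 infer the flattened H*W dimension; only the channel count
    # is taken from the static shape.
    batch_size = tf.shape(x)[0]
    channels = x.get_shape().as_list()[-1]
    return tf.reshape(x, shape=[batch_size, -1, channels])
```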
why?
I found that the shape of the attention map in attention_2 is [bs, N1, N2], and N1 is not the same as N2. Is that OK?
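For context, as far as I can tell the key/value branches in the `google_attention` variant are max-pooled before the matmul, so the two spatial sizes of the attention map differ by design. A tiny shape check with made-up numbers (not the repo's code) illustrates this:

```python
import tensorflow as tf

# Illustrative: with a 32x32 feature map the query branch keeps
# N1 = 32*32 = 1024 positions, while the max-pooled key branch has
# N2 = 16*16 = 256, so the attention map is [bs, N1, N2].
g = tf.zeros([4, 1024, 8])   # query: [bs, N1, C // 8]
f = tf.zeros([4, 256, 8])    # key:   [bs, N2, C // 8] after max pooling
attn_map = tf.matmul(g, f, transpose_b=True)
print(attn_map.shape)        # (4, 1024, 256) -> N1 != N2 is expected
```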
What is the significance of the modules below, and why are they created separately (just to know)? 1) attention() 2) google_attention() Similarly, I cannot figure out why the two below...
I wonder whether you have tested your code on ImageNet? The ImageNet results shown in the README seem to be drawn directly from the paper, while the CelebA results are not. I'd like to...
Soft iteration, not hard: `self.iteration = len(self.data) // self.batch_size`, then use it to control the repetition (`self.epoch = 200K`)
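A rough sketch of what this suggestion seems to mean (the `Trainer` class and attribute names here are illustrative, not the repo's actual code): derive the per-epoch iteration count from the dataset size instead of hard-coding a total iteration number, then loop over epochs.

```python
class Trainer(object):
    def __init__(self, data, batch_size, epoch):
        self.data = data
        self.batch_size = batch_size
        self.epoch = epoch
        # "soft" iteration count, computed from the data instead of hard-coded
        self.iteration = len(self.data) // self.batch_size

    def train(self):
        for epoch in range(self.epoch):
            for idx in range(self.iteration):
                batch = self.data[idx * self.batch_size:(idx + 1) * self.batch_size]
                # ... run one generator/discriminator update on `batch` ...
                pass
```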
Are you planning to add conditional batch normalization and a projection discriminator, similar to cGANs with Projection Discriminator (https://arxiv.org/abs/1802.05637)?
Hi, great work! I'm wondering whether applying BN with scale=True makes the network no longer a Lipschitz-1 function (which should be the goal of SN?). In the paper of...
Thank you for sharing! If I use a VGG + deconvolution structure, which layers should I add attention to? Do all 16 layers of VGG need to be added as...
Both the paper and the code apply the SN constraint to w for 2D convolutions; how should w be handled for 3D convolutions?
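For what it's worth, the common TF 1.x implementation of spectral normalization (including, as far as I can tell, the one here) first flattens the kernel into a 2-D matrix of shape [-1, c_out] before the power iteration, so a 5-D 3D-conv kernel can be normalized by the same routine. A hedged sketch, with names of my own choosing:

```python
import tensorflow as tf

def spectral_norm(w, iteration=1):
    # Flatten every axis except the output-channel axis into one matrix and
    # run power iteration on it. This works for a 2D kernel [k, k, c_in, c_out]
    # and equally for a 3D kernel [k, k, k, c_in, c_out], since only the last
    # axis is kept separate.
    w_shape = w.shape.as_list()
    w_mat = tf.reshape(w, [-1, w_shape[-1]])

    u = tf.get_variable("u", [1, w_shape[-1]],
                        initializer=tf.random_normal_initializer(),
                        trainable=False)

    u_hat = u
    v_hat = None
    for _ in range(iteration):
        v_hat = tf.nn.l2_normalize(tf.matmul(u_hat, tf.transpose(w_mat)))
        u_hat = tf.nn.l2_normalize(tf.matmul(v_hat, w_mat))

    u_hat = tf.stop_gradient(u_hat)
    v_hat = tf.stop_gradient(v_hat)

    # Largest singular value estimate of the flattened kernel
    sigma = tf.matmul(tf.matmul(v_hat, w_mat), tf.transpose(u_hat))

    with tf.control_dependencies([u.assign(u_hat)]):
        w_norm = tf.reshape(w_mat / sigma, w_shape)
    return w_norm
```

The normalized kernel is then used the same way as in 2D, e.g. `tf.nn.conv3d(x, spectral_norm(w), strides=[1, 1, 1, 1, 1], padding='SAME')`.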