
self-attention-gan issues (17 results)

Hello, is the GAN trained with a fixed learning rate? The discriminator LR is 0.0004 and the generator LR is 0.0001. Are these learning rates decayed? If so, where may I find the...
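For context, the SAGAN paper uses the two time-scale update rule (TTUR): Adam with beta1 = 0, beta2 = 0.9, a fixed 1e-4 learning rate for the generator, and a fixed 4e-4 rate for the discriminator, with no decay schedule described. A minimal sketch of that setup (in PyTorch, with placeholder `G` and `D` modules standing in for the real models):

```python
import torch

# Placeholder modules standing in for the actual generator/discriminator.
G = torch.nn.Linear(128, 128)
D = torch.nn.Linear(128, 1)

# TTUR as described in the SAGAN paper: fixed learning rates, no decay.
g_opt = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
d_opt = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
```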

All generator and discriminator variants implemented here are built from either `block()` or `block_no_sn()` modules, both of which internally include a residual connection `x_0 + x` by default. However, in...
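A minimal sketch of the residual pattern being described (my own simplification, not the repo's exact `block()` code, which additionally handles spectral normalization and up/downsampling):

```python
import tensorflow as tf

def residual_block(x, channels):
    # Two 3x3 convolutions plus the `x_0 + x` shortcut mentioned above.
    x_0 = x
    x = tf.keras.layers.Conv2D(channels, 3, padding='same', activation='relu')(x)
    x = tf.keras.layers.Conv2D(channels, 3, padding='same')(x)
    return x_0 + x
```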

```python
import tensorflow as tf

def Nonlocalblock(x):
    # Input is NHWC; read off the static shape.
    batch_size, height, width, in_channels = x.get_shape().as_list()
    print("height", height)
    print("width", width)
    print("in_channels", in_channels)
    # print("out_channels", out_channels)
    print("shape", x.get_shape())
    # 1x1 convolution, then an element-wise gate against the input.
    g1 = tf.keras.layers.Conv2D(in_channels, 1, strides=(1, 1), padding='same')(x)
    g1 = tf.math.multiply(g1, x)
    print("g1", g1.shape)
    g = ...
```
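For comparison, a complete SAGAN-style non-local block computes softmax attention over query/key/value projections and adds a learned residual scale. The sketch below is my own reconstruction (names such as `self_attention` and `key_channels` are mine, not this repo's):

```python
import tensorflow as tf

def self_attention(x, key_channels):
    # x is NHWC; key_channels is typically in_channels // 8.
    _, h, w, c = x.get_shape().as_list()
    f = tf.keras.layers.Conv2D(key_channels, 1)(x)            # query
    g = tf.keras.layers.Conv2D(key_channels, 1)(x)            # key
    v = tf.keras.layers.Conv2D(c, 1)(x)                       # value
    f = tf.reshape(f, [-1, h * w, key_channels])
    g = tf.reshape(g, [-1, h * w, key_channels])
    v = tf.reshape(v, [-1, h * w, c])
    attn = tf.nn.softmax(tf.matmul(f, g, transpose_b=True))   # (N, HW, HW)
    o = tf.reshape(tf.matmul(attn, v), [-1, h, w, c])
    gamma = tf.Variable(0.0, name='gamma')                    # learned scale, init 0
    return gamma * o + x
```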

When I try to run the line `energy = torch.bmm(proj_query, proj_key)`, the program runs into a RuntimeError: CUDA out of memory. My graphics card has 12 GB of memory and I am...
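The OOM is not surprising: `energy` has shape (N, H*W, H*W), so its memory footprint grows quadratically in H*W (the fourth power of the spatial side length). A quick estimate, assuming float32 and a hypothetical batch size of 8:

```python
def energy_bytes(batch, h, w):
    # (N, H*W, H*W) float32 tensor produced by torch.bmm(proj_query, proj_key).
    hw = h * w
    return batch * hw * hw * 4

print(energy_bytes(8, 64, 64) / 2**30)    # ~0.5 GiB for 64x64 feature maps
print(energy_bytes(8, 128, 128) / 2**30)  # ~8 GiB for 128x128 feature maps
```

Common workarounds are reducing the batch size or applying the attention layer only at coarser feature maps, as the paper does.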

Hello, will the use of Conditional Batch Normalization in the generator cancel out Spectral Normalization?
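One way to see why the two need not conflict: spectral normalization constrains layer weights, while conditional batch norm modulates activations with class-conditional gain and bias, so they operate on different quantities. A minimal PyTorch sketch of conditional BN (my own illustration, not this repo's code):

```python
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # One (gamma, beta) pair per class.
        self.embed = nn.Embedding(num_classes, num_features * 2)

    def forward(self, x, y):
        gamma, beta = self.embed(y).chunk(2, dim=1)
        out = self.bn(x)
        return gamma[:, :, None, None] * out + beta[:, :, None, None]
```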

I cannot find any code in this package describing "self-attention". Instead, I find mostly DCGAN code, which puzzles me. It's so...

Has anybody written code to visualize attention maps for a query point, as shown in the paper?
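A rough recipe, assuming you can extract the softmax attention tensor of shape (H*W, H*W) for one image: take the row belonging to the query pixel and reshape it back to (H, W). A hypothetical matplotlib sketch (random data stands in for a real attention tensor):

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_attention(attn, h, w, query_yx):
    # attn: (H*W, H*W) softmax attention; query_yx: (row, col) of the query point.
    q_idx = query_yx[0] * w + query_yx[1]
    amap = attn[q_idx].reshape(h, w)   # where this query point attends to
    plt.imshow(amap, cmap='viridis')
    plt.scatter([query_yx[1]], [query_yx[0]], c='red', marker='x')
    plt.show()

attn = np.random.dirichlet(np.ones(32 * 32), size=32 * 32)
plot_attention(attn, 32, 32, (10, 20))
```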

Hello and thank you for the repository! Training the model on the ImageNet dataset will take a lot of time. Could someone upload a pre-trained model?

Automatic Mixed Precision for TensorFlow has recently been introduced: https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540 This PR adds automatic mixed precision to Self-Attention GAN training. We've tested the speed/convergence impact on V100/T4. We have seen...
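For readers who want to try this independently of the PR: in TensorFlow 1.14+, automatic mixed precision can be enabled by wrapping the optimizer with the graph-rewrite helper (or, in NVIDIA's NGC containers, by setting `TF_ENABLE_AUTO_MIXED_PRECISION=1`). A minimal sketch, with a placeholder optimizer:

```python
import tensorflow as tf

opt = tf.train.AdamOptimizer(learning_rate=1e-4)
# Rewrites the graph to run eligible ops in float16 and adds automatic loss scaling.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```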