keras-adversarial
I am writing a book on Keras and wanted to mention your work on GANs.
In particular, I will use your examples for CIFAR-10 and MNIST. I hope that this is fine with you. Cheers //A
@agulli I am reading your book! Thank you for writing it. I am reviewing the convolutional GAN example for MNIST from Chapter 4.
I wondered why, in `example_gan_convolutional.py`, some of the `Conv2D` layers include `activation='relu'` but are immediately followed by a `LeakyReLU` layer. By definition, won't the `relu` activation applied to the output of the `Conv2D` mean that everything coming out of that layer is greater than or equal to zero, so that the leaky part of `LeakyReLU` can never have any effect?
Here is a linked example of what I mean: https://github.com/bstriner/keras-adversarial/blob/master/examples/example_gan_convolutional.py#L51
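To make the point concrete, here is a toy numeric sketch (plain NumPy, not code from the repo) of what I believe happens when the two activations are composed:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.2):
    # Keras-style LeakyReLU: identity for x >= 0, alpha * x otherwise
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
after_relu = relu(x)                 # [0. , 0. , 0. , 1.5]
after_both = leaky_relu(after_relu)  # identical: inputs are already >= 0

# The negative (leaky) branch is never exercised
assert np.array_equal(after_relu, after_both)
```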
Maybe I am missing something about the way these two activations interact? Any pointers to help me understand this would be great!
Thanks!
@agulli @spearsem Good catch! I'll blame it on an inconsistent merge or something. You should only have one or the other, not both in a row. Since `relu` followed by `LeakyReLU` behaves exactly like a single `relu`, no one noticed until now. I'll update the examples.
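In case it helps anyone landing here, a minimal sketch of the corrected pattern (Keras 2 functional API; the input shape, filter count, kernel size, and alpha below are illustrative, not an exact patch of the example):

```python
from keras.layers import Input, Conv2D, LeakyReLU
from keras.models import Model

# Illustrative MNIST-like input (channels last)
x = Input(shape=(28, 28, 1))

# Redundant pattern from the example: relu zeroes negatives before
# LeakyReLU ever sees them, so the leaky slope is dead code:
#   h = Conv2D(64, (5, 5), padding='same', activation='relu')(x)
#   h = LeakyReLU(0.2)(h)

# Fixed: keep the Conv2D output linear and use LeakyReLU as the
# sole activation
h = Conv2D(64, (5, 5), padding='same')(x)
h = LeakyReLU(0.2)(h)

model = Model(x, h)
```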
OK