Colorizing-with-GANs
A question on the dropout noise
I am a little bit confused about the noise-application method. Does it mean that we apply the dropout layer in each component block after the LeakyReLU, while the dropout probabilities of different blocks are randomly selected? I am also not sure what `kernel[2]` means.
```python
if kernel[2] > 0:
    keep_prob = 1.0 - kernel[2] if self.training else 1.0
    output = tf.nn.dropout(output, keep_prob=keep_prob, name='dropout_' + name, seed=seed)
```
@huangchaoxing The original GAN formulation requires a noise vector as input to the network; the network then learns a transformation from the noise distribution (e.g. uniform) to the data distribution. In a CGAN (conditional GAN), since part of the data is already available as input (the grayscale image in our case), the input noise is not very effective. To prevent the network from becoming completely deterministic, some authors suggest using dropout as a form of noise during training.
In our early experiments we used dropout for that purpose; however, we later found it not very effective. In the code, `kernel[2]` refers to the third element of the kernel options used to define the network, as you can see here:
https://github.com/ImagingLab/Colorizing-with-GANs/blob/9abfa2ba5924a90b7cf756352abc00d50364df32/src/models.py#L328-L335
Right now the dropout value is set to zero for every kernel option, meaning dropout is not applied anywhere in the network.
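To make the mechanism concrete, here is a minimal, hypothetical sketch of the same logic using NumPy instead of TensorFlow. The kernel-option tuples and the `apply_dropout` helper are illustrative names (not from the repo); only the `kernel[2] > 0` / `keep_prob` logic mirrors the snippet above.

```python
import numpy as np

# Illustrative kernel options: (filters, strides, dropout_rate).
# With dropout_rate == 0.0 (the repo's current setting), dropout is skipped.
kernel_options = [
    (64, 2, 0.0),   # dropout disabled
    (128, 2, 0.5),  # dropout rate 0.5 -> keep_prob 0.5 during training
]

def apply_dropout(activations, kernel, training, rng):
    """Apply inverted dropout when kernel[2] > 0, mirroring tf.nn.dropout."""
    if kernel[2] > 0:
        keep_prob = 1.0 - kernel[2] if training else 1.0
        mask = rng.random(activations.shape) < keep_prob
        # Inverted dropout: scale the kept units so the expected
        # activation is unchanged between training and inference.
        return activations * mask / keep_prob
    return activations

rng = np.random.default_rng(0)
x = np.ones((4, 4))

# kernel[2] == 0: activations pass through untouched.
assert np.array_equal(apply_dropout(x, kernel_options[0], True, rng), x)
# At inference time keep_prob is forced to 1.0, so nothing is dropped.
assert np.array_equal(apply_dropout(x, kernel_options[1], False, rng), x)
```

Because the dropout mask changes every forward pass, it injects stochasticity even though the grayscale input is fixed, which is exactly the role the noise vector plays in the original GAN formulation.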