Testing
Hi, just wondered if there is a reason that you still use the same batch normalisation layers when testing? Surely they should now have their is_training flag set to false?
I second that. Actually the train=True flag in batch_norm already exists, but isn't used.
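For reference, here is roughly the kind of wrapper being discussed (a sketch assuming TF 1.x and tf.contrib.layers.batch_norm, not the exact code in this repo). The train argument is the switch in question, but every call site appears to leave it at the default True:

```python
import tensorflow as tf  # TF 1.x

class batch_norm(object):
    """Rough sketch of a batch norm wrapper -- not the exact repo code."""
    def __init__(self, epsilon=1e-5, momentum=0.9, name="batch_norm"):
        self.epsilon = epsilon
        self.momentum = momentum
        self.name = name

    def __call__(self, x, train=True):
        # is_training=True  -> normalize with the current batch's statistics
        # is_training=False -> normalize with the accumulated moving averages
        return tf.contrib.layers.batch_norm(
            x,
            decay=self.momentum,
            epsilon=self.epsilon,
            scale=True,
            is_training=train,
            updates_collections=None,
            scope=self.name)
```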
On the other hand, the batch_norm documentation says "One can set updates_collections=None to force the updates in place, but that can have a speed penalty, especially in distributed settings." [1] Since the train flag is never set to False anywhere, it seems like this code is always in training mode (batch statistics, with the moving averages still being updated), even at test time.
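To unpack what that note in the docs refers to (again a sketch, assuming TF 1.x): with the default updates_collections=tf.GraphKeys.UPDATE_OPS, the moving mean/variance updates are only added to a collection and you have to run them yourself as part of the training step; passing updates_collections=None instead folds the updates into the layer's forward pass, which is the convenient-but-slower option the documentation warns about.

```python
import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, [None, 64, 64, 3])
is_training = tf.placeholder(tf.bool, [])

# Default updates_collections: the moving-average updates are NOT run automatically.
h = tf.contrib.layers.batch_norm(x, is_training=is_training, scale=True, scope="bn")
loss = tf.reduce_mean(tf.square(h))

# They land in the UPDATE_OPS collection and must be tied to the train op by hand.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(2e-4).minimize(loss)

# With updates_collections=None the layer would instead update the moving
# averages in place on every forward pass -- no UPDATE_OPS bookkeeping, but
# the extra dependency is what the docs call a potential speed penalty.
```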
So in the sampler, would it be correct to use the same batch norm layers but with train=False? How would this be achieved? How can you change the flag of the layers after training?
There seems to be a very long conversation on this here: https://github.com/tensorflow/tensorflow/issues/1122#issuecomment-232535426 It's all a work in progress, but apparently you can use tf.cond with a boolean placeholder that indicates whether you are currently training or testing.
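Along those lines, a minimal sketch of that idea (names are illustrative, assuming TF 1.x and tf.contrib.layers.batch_norm, which accepts a boolean tensor for is_training and does the tf.cond internally): build the graph once, then feed the flag at run time.

```python
import numpy as np
import tensorflow as tf  # TF 1.x

is_training = tf.placeholder(tf.bool, name="is_training")
x = tf.placeholder(tf.float32, [None, 64, 64, 3])

# One graph; which statistics get used is decided by the fed boolean.
h = tf.contrib.layers.batch_norm(x, is_training=is_training,
                                 updates_collections=None, scale=True,
                                 scope="g_bn0")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(4, 64, 64, 3).astype(np.float32)
    train_out = sess.run(h, {x: batch, is_training: True})   # batch statistics
    test_out = sess.run(h, {x: batch, is_training: False})   # moving averages
```

So the sampler could reuse exactly the same layers and simply feed is_training: False.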
For stylization this isn't as clear-cut. Since, IIRC, pix2pix only ever operates on a batch of 1, using batch norm in training mode the way they have is equivalent to Instance Normalization.
https://arxiv.org/abs/1607.08022
In the paper they say they use batch norm at test time just like at train time.
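A quick numerical check of that batch-of-1 equivalence (pure NumPy, just for illustration): normalizing over (batch, height, width) with a single sample gives the same result as normalizing each sample over (height, width) on its own, i.e. instance normalization.

```python
import numpy as np

x = np.random.rand(1, 256, 256, 3).astype(np.float32)  # a batch of 1, NHWC
eps = 1e-5

# Batch norm in training mode: statistics over batch + spatial dims, per channel.
bn_mean = x.mean(axis=(0, 1, 2), keepdims=True)
bn_var = x.var(axis=(0, 1, 2), keepdims=True)
bn_out = (x - bn_mean) / np.sqrt(bn_var + eps)

# Instance norm: statistics over spatial dims only, per sample and per channel.
in_mean = x.mean(axis=(1, 2), keepdims=True)
in_var = x.var(axis=(1, 2), keepdims=True)
in_out = (x - in_mean) / np.sqrt(in_var + eps)

print(np.allclose(bn_out, in_out))  # True when the batch size is 1
```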