vaegan
Batch Norm at test mode
Hi, Thanks for the nice code.
It seems that the batch norm implementation does not distinguish between train mode and test mode. As a result, at test time the output depends on the batch size and on which examples happen to be in each batch.
Thank you for pointing that out! Just as you said, it is a bug, so I will fix it.
I'm also wondering how much this bug affects the results. At test time, the mean and variance should come from statistics accumulated over the training set, not from the test batch. Is there any correct theano-based batch normalization implementation available?
In my experience, it does not affect the results much provided you test with a reasonably large batch size, e.g. 100. However, to achieve the best performance this needs to be fixed. This implementation of batch norm looks correct: https://github.com/Newmu/dcgan_code/blob/ee12b2d15a3856794b8dae77d1eb263c67c36e47/lib/ops.py
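To illustrate the train/test distinction being discussed, here is a minimal NumPy sketch (not the repo's theano code; all names are illustrative). In train mode the layer normalizes with the current batch's statistics and updates exponential running averages; in test mode it normalizes with those running averages, so the output of one example no longer depends on the rest of the batch:

```python
import numpy as np

class BatchNorm:
    """Illustrative batch norm with separate train/test behavior."""

    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(dim)          # learnable scale
        self.beta = np.zeros(dim)          # learnable shift
        self.running_mean = np.zeros(dim)  # accumulated over training
        self.running_var = np.ones(dim)
        self.momentum = momentum
        self.eps = eps

    def forward(self, x, train=True):
        if train:
            # Normalize with the current batch's statistics and
            # update the running averages for use at test time.
            mean = x.mean(axis=0)
            var = x.var(axis=0)
            self.running_mean = (self.momentum * self.running_mean
                                 + (1 - self.momentum) * mean)
            self.running_var = (self.momentum * self.running_var
                                + (1 - self.momentum) * var)
        else:
            # Test mode: use the accumulated statistics, so the result
            # is independent of batch size and batch composition.
            mean, var = self.running_mean, self.running_var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta
```

With this fix, normalizing a single example at test time gives the same output as normalizing it inside any batch, which is exactly the property the buggy version lacks.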
Will you fix this issue soon?