
Should I set the is_training flag for the BN layer?

Open falrom opened this issue 6 years ago • 5 comments

During testing I got different outputs for the same input image. Looking at nets/pix2pix.py, it seems the batch_norm calls never set the is_training flag, and the optimizer is defined without a dependency on the UPDATE_OPS collection, so the moving mean and variance of tf.contrib.layers.batch_norm are never properly updated. Is it because the network can't learn good moving statistics during training that the BN layers are effectively left in the is_training=True state? That would explain why the output changes when the batch size changes.
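The effect described above can be reproduced with a tiny numpy sketch (hypothetical numbers, not the repository's code): when BN normalizes with the current batch's statistics (the is_training=True behavior), the same image gets a different output depending on its batch-mates, whereas fixed moving statistics give a deterministic output.

```python
import numpy as np

def batch_norm_train(x, eps=1e-5):
    # is_training=True behavior: normalize with the current batch's statistics
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def batch_norm_infer(x, moving_mean, moving_var, eps=1e-5):
    # is_training=False behavior: normalize with fixed moving statistics
    return (x - moving_mean) / np.sqrt(moving_var + eps)

img = np.array([0.0, 1.0, 2.0, 3.0])        # one "image" (4 features for brevity)
mate_a = np.array([1.0, 1.0, 1.0, 1.0])     # two different batch-mates
mate_b = np.array([10.0, 10.0, 10.0, 10.0])

out_a = batch_norm_train(np.stack([img, mate_a]))[0]
out_b = batch_norm_train(np.stack([img, mate_b]))[0]
print(np.allclose(out_a, out_b))   # False: same image, different batch-mates

mm, mv = np.full(4, 2.0), np.full(4, 4.0)   # some fixed moving statistics
fixed_a = batch_norm_infer(np.stack([img, mate_a]), mm, mv)[0]
fixed_b = batch_norm_infer(np.stack([img, mate_b]), mm, mv)[0]
print(np.allclose(fixed_a, fixed_b))  # True: output depends only on the image
```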

Looking forward to your reply! Thank you.

falrom avatar Dec 20 '18 12:12 falrom

I don't know exactly what your problem is, but you SHOULDN'T use batch normalization in the decoder part of the generator. Just use it in the encoder part.

hashJoe avatar Dec 20 '18 14:12 hashJoe

I know what you mean. I did not change their network structure, but I don't think they used the TensorFlow BN layer API correctly.

falrom avatar Dec 20 '18 14:12 falrom

@falrom I also noticed that the author uses BN incorrectly. I've been thinking for a long time about how to fix it without success. Do you have any ideas?

IPNUISTlegal avatar Jan 08 '19 04:01 IPNUISTlegal

@IPNUISTlegal If you really want to change it, the only option is to rewrite the BN usage the correct way. If you just need results for a paper, I don't think leaving it as-is is a big problem. I didn't modify their code. In my tests, feeding images in one at a time (batch size = 1) gives the best results (closest to the figures in their paper), and with batch size 1 the output is stable. Feeding in a batch of multiple images makes the result for the same image differ from batch to batch.
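A minimal numpy sketch of the rewrite discussed here (an illustrative toy class, not the repository's code): track moving statistics during training and use them, not batch statistics, at test time. In tf.contrib.layers.batch_norm this corresponds to passing is_training correctly and making the train op depend on the tf.GraphKeys.UPDATE_OPS collection so the moving averages are actually updated.

```python
import numpy as np

class SimpleBatchNorm:
    """Toy BN: batch statistics in training, moving statistics at test time."""

    def __init__(self, num_features, momentum=0.99, eps=1e-5):
        self.moving_mean = np.zeros(num_features)
        self.moving_var = np.ones(num_features)
        self.momentum = momentum
        self.eps = eps

    def __call__(self, x, is_training):
        if is_training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # These updates are what the UPDATE_OPS dependency runs in TF.
            self.moving_mean = self.momentum * self.moving_mean + (1 - self.momentum) * mean
            self.moving_var = self.momentum * self.moving_var + (1 - self.momentum) * var
        else:
            mean, var = self.moving_mean, self.moving_var
        return (x - mean) / np.sqrt(var + self.eps)

bn = SimpleBatchNorm(4)
rng = np.random.default_rng(0)
for _ in range(100):
    bn(rng.normal(size=(8, 4)), is_training=True)   # "training" updates moving stats

img = rng.normal(size=(1, 4))
out1 = bn(img, is_training=False)                                       # batch of 1
out2 = bn(np.concatenate([img, rng.normal(size=(7, 4))]), is_training=False)[:1]
print(np.allclose(out1, out2))  # True: test output no longer depends on batch-mates
```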

falrom avatar Jan 08 '19 05:01 falrom

OK, thanks. Since this is for a research paper, I'll still try to use BN the correct way.

IPNUISTlegal avatar Jan 08 '19 07:01 IPNUISTlegal