IntrinsicImage
Why `model:training()` in test_*.lua?
https://github.com/fqnchina/IntrinsicImage/blob/48fd652e317511157c06648597da121f8d4031a5/evaluation/test_IIW.lua#L30
https://github.com/fqnchina/IntrinsicImage/blob/48fd652e317511157c06648597da121f8d4031a5/evaluation/test_MIT.lua#L25
https://github.com/fqnchina/IntrinsicImage/blob/48fd652e317511157c06648597da121f8d4031a5/evaluation/test_MPI.lua#L25
Maybe it's because the network contains batch normalization layers, which use the global (running) mean and variance when the mode is set to `evaluate()` during the test phase. By setting the mode to `training()`, each BN layer instead normalizes every input's feature maps with the current features' per-channel mean and variance. Since intrinsic decomposition is a pixel-wise prediction task, unlike high-level tasks such as classification, using the global mean and variance in the BN layers may degrade the results.
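To make the difference concrete, here is a minimal sketch (plain Python, no Torch; all numbers are made up for illustration) of the two normalization behaviours: `training()` normalizes with the current input's own statistics, while `evaluate()` normalizes with stored global statistics accumulated during training.

```python
def batchnorm(x, mean, var, eps=1e-5):
    """Normalize a list of activations with the given mean/variance."""
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

def stats(x):
    """Mean and (biased) variance of a list of activations."""
    m = sum(x) / len(x)
    v = sum((v - m) ** 2 for v in x) / len(x)
    return m, v

# One channel of feature-map activations for a single test image.
features = [0.2, 0.4, 0.6, 0.8]

# "training()" behaviour: normalize with this input's own statistics,
# so the output is zero-mean / unit-variance for this image.
m_cur, v_cur = stats(features)
out_train = batchnorm(features, m_cur, v_cur)

# "evaluate()" behaviour: normalize with global running statistics
# (hypothetical values here); any mismatch between the test image's
# statistics and the global ones shifts and rescales the output.
m_glob, v_glob = 0.5, 0.1
out_eval = batchnorm(features, m_glob, v_glob)

print(out_train)
print(out_eval)
```

For a pixel-wise regression task, that per-image shift and rescale in `evaluate()` mode directly perturbs every output pixel, which is the degradation the answer above refers to.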