LearningToCompare_FSL
Why no model.eval() in test code?
Thanks for your code! But one thing confuses me: why didn't you use feature_encoder.eval() and relation_network.eval() in your test code? It actually has an impact on the results.
Yes, I tried it and, interestingly, I got a better result when not using eval()
@floodsung You need to use eval(), otherwise you won't fix the BatchNorm statistics. See this
@ehsanmok Agreed, but when I tested it, test accuracy dropped by 10%.
@floodsung If you do not use eval() in the test phase, you determine the BatchNorm statistics by seeing other query images. This is cheating to some extent, because each query image should only see the support images in its episode. Determining the BatchNorm statistics from a big batch of query images turns the few-shot learning task into a many-shot learning task.
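A minimal sketch (not from the repo) of the leakage described above: in train mode, BatchNorm normalizes each sample with the *current batch's* statistics, so every query image is influenced by the other queries in the batch; in eval mode it uses the stored running statistics, so each image is processed independently.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4)          # hypothetical layer, default momentum
batch = torch.randn(16, 4)       # pretend these are 16 query embeddings

bn.train()
out_batch = bn(batch)            # normalized with this batch's mean/var

bn.eval()
out_single = bn(batch[:1])       # normalized with running statistics only

# In train mode the output for sample 0 depends on the other 15 queries,
# so the two results differ -- that dependence is the "cheating".
print(torch.allclose(out_batch[:1], out_single))
```

Whether this helps or hurts accuracy then depends on how representative the running statistics are, which is exactly where the momentum issue discussed below comes in.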
@IbsenChan But why does the degradation happen when using eval()? Theoretically, Relation Net should outperform ProtoNet, i.e. accuracy above 65% (training ProtoNet with 5-way 5-shot), given the paper's novel idea of learning a metric. But with eval() I get no higher than 60%.
Do you have any ideas about the degradation? Is it the problem of implementation or the idea itself?
@tlittletime @Bigwode @IbsenChan @itongworld I haven't tested the code yet, but I believe that is because the authors wrongly set momentum=1 for all BatchNorm layers, which makes the BN layers always save only the current batch's statistics and discard all previous ones. This might be why not using eval() yields better results.
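This is easy to verify in isolation. PyTorch updates running stats as running = (1 - momentum) * running + momentum * batch_stat, so with momentum=1 the running mean after each forward pass equals the latest batch's mean exactly, with no history retained. A small demo (layer and data are made up for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(3, momentum=1)   # the setting used in the repo

bn.train()
x1 = torch.randn(8, 3, 4, 4)
bn(x1)                               # running stats now reflect x1 only
x2 = torch.randn(8, 3, 4, 4) * 5 + 10  # a very different distribution
bn(x2)                               # ...and are now overwritten by x2

# The stored running mean is exactly the per-channel mean of x2;
# everything learned from x1 has been discarded.
batch_mean = x2.mean(dim=(0, 2, 3))
print(torch.allclose(bn.running_mean, batch_mean, atol=1e-5))
```

So in eval() mode the network normalizes test episodes with statistics from whichever training batch happened to come last, which plausibly explains the accuracy drop; the default momentum=0.1 would keep an exponential moving average over many batches instead.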