
Can you release detailed configuration?

csyanbin opened this issue on Feb 13, 2018 · 10 comments

Hi Jake, Prototypical Networks is really nice work.

I have run this code to reproduce the results in the NIPS paper. However, my results differ slightly from the reported ones.

NIPS2017 paper:

| 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|
| 98.8 | 99.7 | 96.0 | 98.9 |

Reproduced results:

| 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|
| 98.4 | 99.6 | 94.9 | 98.6 |

I ran this code several times and got similar results each time. Could you release your hyperparameter settings? Or is there any technical trick that might affect performance?

Here are the commands I used for the 20-way 1-shot setting:

python scripts/train/few_shot/run_train.py --data.shot 1 --data.test_shot 1 --data.test_way 20 --data.cuda --log.exp_dir=results/20way1shot 
python scripts/predict/few_shot/run_eval.py --data.test_shot 1 --data.test_way 20 --model.model_path=results/20way1shot/best_model.t7 

Thanks.

csyanbin commented on Feb 13, 2018

Hi, I got the same results as @csyanbin with Python 3.5, CUDA 8, and PyTorch 0.3.

bertinetto commented on Feb 19, 2018

@bertinetto @jakesnell
I also reproduced the experiments in TensorFlow with the same settings and parameters and still got similar results. I wonder if I am missing some implementation or parameter details?

csyanbin commented on Feb 19, 2018

Hello, I just want to add that I implemented the same algorithm in a slightly different way (here), and I got the same results as @csyanbin too (except for 20-way 1-shot, where I obtained 95.1%).

Edit / side note: I just read this paper: https://arxiv.org/pdf/1711.04043v3.pdf, and it seems they report different accuracies for ProtoNet too (97.4 | 99.3 | 95.4 | 98.8) (page 8).
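For reference, a minimal sketch of the per-episode computation the paper describes (class-mean prototypes and a softmax over negative squared Euclidean distances). Variable names and shapes are illustrative only, not taken from any of the repos mentioned here:

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support, query, n_way, n_shot, n_query):
    """support: (n_way * n_shot, d) embeddings, grouped by class.
    query:   (n_way * n_query, d) embeddings, grouped by class."""
    d = support.size(-1)
    # Class prototypes: mean of the support embeddings of each class.
    prototypes = support.view(n_way, n_shot, d).mean(dim=1)      # (n_way, d)
    # Squared Euclidean distance from every query point to every prototype.
    dists = torch.cdist(query, prototypes, p=2) ** 2             # (n_way * n_query, n_way)
    log_p_y = F.log_softmax(-dists, dim=1)
    # Ground-truth labels follow the grouped-by-class layout of `query`.
    target = torch.arange(n_way).repeat_interleave(n_query)
    loss = F.nll_loss(log_p_y, target)
    acc = (log_p_y.argmax(dim=1) == target).float().mean()
    return loss, acc
```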

dnlcrl commented on Feb 22, 2018

@dnlcrl I think the results above (97.4 | 99.3 | 95.4 | 98.8) cite the original ICLR workshop version of the prototypical networks paper (https://openreview.net/references/pdf?id=BJ-3bnVmg), which uses a different setting from this repo. The ICLR version always trains with 1-shot episodes, whereas this repo matches the training shot to the test shot.

This is also noted in Appendix A of https://arxiv.org/pdf/1703.05175.pdf, in rows 1 and 2 of Table 4.
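To make the distinction concrete, the two training setups differ roughly like this (dict keys and exact values are illustrative, not this repo's actual defaults):

```python
# ICLR workshop version: training episodes are always 1-shot,
# regardless of the shot used at test time.
iclr_episode = dict(n_way=60, n_support=1, n_query=5)

# NIPS 2017 version (this repo): the training shot matches the test shot,
# e.g. 5-shot training episodes for the 5-shot benchmarks.
nips_episode_5shot = dict(n_way=60, n_support=5, n_query=5)
```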

csyanbin commented on Feb 23, 2018

@csyanbin Oh I got it, thank you, I must have missed it.

dnlcrl commented on Feb 24, 2018

Hi @csyanbin, I think the reported results come from training on both the training and validation sets. run_trainval.py runs this full training for the number of epochs selected on the validation set during the first training run, with all other hyperparameters kept the same.

The results I get are closer to the reported ones but still different:

| 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|
| 98.5 | 99.6 | 95.3 | 98.7 |
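In other words, the procedure is roughly the following; the helper callables here are placeholders, not the repo's actual functions:

```python
# Hypothetical sketch of the two-stage "trainval" procedure described above.
# The helper callables are placeholders, not this repo's actual API.

def reproduce_reported_numbers(train_split, val_split, test_split,
                               train_with_validation, train_for_epochs,
                               evaluate, hparams):
    # 1) Train on the train split, monitoring accuracy on the val split,
    #    and record the epoch at which validation accuracy peaked.
    best_epoch = train_with_validation(train_split, val_split, hparams)

    # 2) Retrain from scratch on train + val for exactly that many epochs,
    #    keeping every other hyperparameter the same (what run_trainval.py does).
    model = train_for_epochs(train_split + val_split, best_epoch, hparams)

    # 3) Evaluate the final model on the held-out test classes.
    return evaluate(model, test_split)
```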

yannlif commented on Apr 4, 2018

Hi @yannlif, I think the validation set should not be combined into the training set if the comparison is to be fair. Also, in this paper the authors said: "We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only."

Although the performance is slightly better, I do not think this is a fair comparison.

csyanbin commented on Apr 4, 2018

@csyanbin

> We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only.

This is for the mini-ImageNet experiments. In the Omniglot part:

> We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test.

This corresponds to the trainval split. The train split has only 1028 unique characters.
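For concreteness, the class counts implied above (character counts from this thread; the 90-degree rotation augmentation is from the paper):

```python
# Each Omniglot character is augmented with rotations in {0, 90, 180, 270}
# degrees, and each rotated character counts as its own class.
ROTATIONS = 4

trainval_characters = 1200   # train + val characters, as in the paper
train_characters = 1028      # characters in this repo's train split alone

print(trainval_characters * ROTATIONS)  # 4800 classes when training on trainval
print(train_characters * ROTATIONS)     # 4112 classes when training on train only
```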

yannlif commented on Apr 4, 2018

@yannlif I got it. I think you are right about Omniglot. Thanks for the information; I will try again.

csyanbin commented on Apr 4, 2018

Hi guys, does anybody know how many training epochs and episodes per epoch were used to reproduce the paper's results (e.g., for the miniImageNet dataset)?

debasmitdas commented on Apr 25, 2018