CloserLookFewShot
Accuracies of baseline and baseline++
Hello, I ran the baseline and baseline++ code on CUB, but my results are lower than the reported ones:

python train.py --dataset CUB --model ResNet10 --method baseline++ --train_aug
python save_features.py --dataset CUB --model ResNet10 --method baseline++ --train_aug
python test.py --dataset CUB --model ResNet10 --method baseline++ --train_aug

The results are around 80%, considerably lower than the reported 85%. Is there anything wrong? Thanks!
Hello, sorry it took me a while to figure out what I had done. The reported result actually used validation for baseline/baseline++. I used a validation method based on the DB index (see p.s.), but it only worked for the CUB dataset, not for mini-ImageNet, so I did not include this validation strategy in the paper. However, it was my mistake to report the validated result there.
To reproduce the result, please use the updated code and:
- In method/baselinetrain.py:22, set self.DBval = True
- In backbone.py, set self.scale_factor = 4
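For context, the scale_factor mentioned above multiplies the cosine-similarity logits of the baseline++ classifier before the softmax, which sharpens the output distribution. A minimal numpy sketch of that idea (the function and variable names here are mine for illustration, not the repo's actual code):

```python
import numpy as np

def cosine_logits(features, weights, scale_factor=4.0):
    """Scaled cosine-similarity logits, as used by cosine classifiers.

    features: (n, d) embeddings; weights: (c, d) per-class weight vectors.
    Both are L2-normalized so their dot product is a cosine similarity
    in [-1, 1]; scale_factor rescales the logits before the softmax.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale_factor * f @ w.T
```

With this formulation, a larger scale_factor makes the softmax more peaked, which is why changing it from its default affects the reproduced accuracy.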
My reproduced result is 84.40% +- 0.55%; it may be slightly higher or lower due to randomness.
P.S. For the details of this validation method, please see method/baselinetrain.py, lines 76~97.
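For readers unfamiliar with the DB (Davies-Bouldin) index used for that validation: it scores a labeled feature set by comparing within-class scatter to between-class centroid distances, with lower values meaning tighter, better-separated classes. A rough numpy sketch of the standard definition (this is my own illustration, not the code in method/baselinetrain.py):

```python
import numpy as np

def db_index(features, labels):
    """Davies-Bouldin index: lower means tighter, better-separated classes."""
    classes = np.unique(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    # Average distance of each class's samples to its own centroid.
    scatter = np.array([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    k = len(classes)
    ratios = np.zeros(k)
    for i in range(k):
        others = [j for j in range(k) if j != i]
        dists = np.linalg.norm(centroids[others] - centroids[i], axis=1)
        # Worst-case (scatter_i + scatter_j) / centroid-distance ratio.
        ratios[i] = np.max((scatter[i] + scatter[others]) / dists)
    return ratios.mean()
```

Using a score like this on validation features lets one pick a training epoch without episodic few-shot evaluation, which is presumably why it was tried as a model-selection criterion.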
Dear author,
Do the two modifications also apply to the Conv4 backbone? I got 74.92% for baseline++ on CUB with the Conv4 backbone, which is quite far from the 79.34% reported in the paper. Meanwhile, my baseline++ result on CUB with the ResNet10 backbone is 83.78%, which suggests my modifications are correct.
Sincere thanks!