few-shot-object-detection
Experiments on VOC split 1
In my experiment, the results on VOC split 1 are much lower than the paper reports. For example, the mAP50 on the novel set is:

| shot | 1 | 5 |
| --- | --- | --- |
| paper | 39.8 | 55.7 |
| my result | 34.1 | 49.1 |
However, the results on splits 2 & 3 are similar to those reported in the paper. I used the default params and training script. What do I need to do to reproduce your results?
I also get the same results as @GuangxingHan. For TFA w/ fc, I only get 34.86 while the paper reports 36.8. I used the official codebase without any modification, and I downloaded the pretrained model that the author provides in the model zoo.
@GuangxingHan Have you tried the model the author released? With it I get split 1 5-shot results similar to the reported ones, ~55% mAP on the novel classes. However, if I train the base model myself, it degrades to ~52%. Do you have the same problem? Thanks.
Yes, it is the same for me. The officially released model has better accuracy, about 55% mAP on the novel classes, but if I train it myself, the result is only about 50%.
I finally got mAP almost the same as the reported numbers. I found that the batch size and learning rate have to be chosen carefully. If you have fewer GPUs than the original setting, you have to tune the hyperparameters accordingly.
@mandal4 I indeed ran the code with 8 GPUs, as in the author's default setting. Could you provide your hyperparams?
> I finally got mAP almost the same as the reported numbers. I found that the batch size and learning rate have to be chosen carefully. If you have fewer GPUs than the original setting, you have to tune the hyperparameters accordingly.
Did you get the same performance as the paper? Could you share how you tuned the hyperparams with fewer GPUs than the original setting? Thanks very much!
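For anyone in the same situation, the usual starting point when training with fewer GPUs is the linear scaling rule: shrink the learning rate and stretch the schedule in proportion to the reduced total batch size. The sketch below only illustrates that arithmetic; the reference values (8 GPUs, `SOLVER.IMS_PER_BATCH = 16`, `SOLVER.BASE_LR = 0.02`, and the iteration schedule) are assumptions for illustration, not the settings @mandal4 actually used, so substitute the numbers from the config file you are training with.

```python
# Linear scaling rule sketch for adapting a default 8-GPU detectron2-style
# schedule to fewer GPUs. All reference values are assumptions for
# illustration; read the real ones from your config file.

def scale_schedule(num_gpus,
                   ref_gpus=8,
                   ref_ims_per_batch=16,   # SOLVER.IMS_PER_BATCH (assumed default)
                   ref_base_lr=0.02,       # SOLVER.BASE_LR (assumed default)
                   ref_max_iter=20000,     # SOLVER.MAX_ITER (assumed default)
                   ref_steps=(16000,)):    # SOLVER.STEPS (assumed default)
    """Scale batch size, learning rate, and schedule for `num_gpus` GPUs."""
    factor = ref_gpus / num_gpus                              # e.g. 4.0 when going from 8 to 2 GPUs
    ims_per_batch = ref_ims_per_batch * num_gpus // ref_gpus  # keep images-per-GPU fixed
    base_lr = ref_base_lr / factor                            # smaller batch -> proportionally smaller LR
    max_iter = int(ref_max_iter * factor)                     # more iterations to cover the same epochs
    steps = tuple(int(s * factor) for s in ref_steps)
    return {"SOLVER.IMS_PER_BATCH": ims_per_batch,
            "SOLVER.BASE_LR": base_lr,
            "SOLVER.MAX_ITER": max_iter,
            "SOLVER.STEPS": steps}

if __name__ == "__main__":
    # Example: training on 2 GPUs instead of the default 8.
    print(scale_schedule(num_gpus=2))
    # {'SOLVER.IMS_PER_BATCH': 4, 'SOLVER.BASE_LR': 0.005,
    #  'SOLVER.MAX_ITER': 80000, 'SOLVER.STEPS': (64000,)}
```

The idea is simply to keep the learning-rate-to-batch-size ratio and the total number of images seen during training the same as in the 8-GPU reference schedule; it does not guarantee exactly matching the released model.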