few-shot detection performance on COCO 2014
Hi,
Interesting work.
Following the training and evaluation pipeline of your paper, I have reproduced the fully-supervised object detection results on MS COCO 2017 (Table 1). However, my few-shot results are significantly worse than those in your paper on MS-COCO 2014 (30-shot, seed 0, Table 3). Specifically, starting from ImageNet pretraining, I first train the model on the base classes (60 classes, 99k labeled images) and then fine-tune it on the few-shot labeled images (80 classes, 30 instances per class). The results on the 5000 validation images (all classes, base classes, and novel classes) are reported below. All hyperparameters are the same as in the fully-supervised training.
| evaluation type | AP | AP75 |
|---|---|---|
| All classes | 29.9 | 32.7 |
| Base classes | 33.0 | 35.9 |
| Novel classes | 22.0 | 23.9 |
| Novel classes (Table 3 in your paper) | 30.0 | 33.7 |
As the table shows, the reproduced results are significantly worse than those in your paper on the novel classes (by about 10 AP). Would you mind sharing the results for base classes and all classes with your model, or the hyperparameters used for few-shot training (for example, epochs, learning rate, and so on)?
Best, Bin-Bin ([email protected])
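For reference, this is how I aggregate the per-split numbers above. It assumes the standard few-shot COCO split (as in TFA, which I believe this paper follows), where the 20 PASCAL VOC categories are the novel classes and the remaining 60 COCO categories are the base classes; `split_ap` is my own helper, not part of the DETReg codebase, and it takes the per-class APs produced by `COCOeval`:

```python
# Per-split AP aggregation, assuming the standard few-shot COCO split where
# the 20 PASCAL VOC categories (COCO naming) are the novel classes.
NOVEL_CLASSES = {
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "boat", "bird", "cat", "dog", "horse", "sheep", "cow", "bottle",
    "chair", "couch", "potted plant", "dining table", "tv",
}

def split_ap(per_class_ap):
    """per_class_ap: dict mapping COCO class name -> AP for that class
    (e.g. extracted from pycocotools' COCOeval precision array).
    Returns (all_ap, base_ap, novel_ap), each the mean over its split."""
    novel = [ap for name, ap in per_class_ap.items() if name in NOVEL_CLASSES]
    base = [ap for name, ap in per_class_ap.items() if name not in NOVEL_CLASSES]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(novel + base), mean(base), mean(novel)
```

For example, with `per_class_ap = {"person": 0.5, "toaster": 0.3}`, `split_ap` returns `(0.4, 0.3, 0.5)`. If our splits differ here, that alone could explain part of the gap.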
Same question here: I am also about 10 AP lower. Looking forward to the correct configuration being shared. Thanks.
Did you get an answer yet? I have noticed that in few-shot training, different subsets of images can lead to significantly different results, although I did not observe this in this particular experiment. Perhaps a bad image subset caused your poor result. @gaobb @generalvisionyyt
Hello, apologies for dropping the ball here. Unfortunately, you are correct: we found an issue in our evaluation pipeline. I'm now uploading a new version with a fix and will also make the few-shot checkpoint available. These are the results we are seeing:
| method | Novel AP (10-shot) | Novel AP (30-shot) | Novel AP75 (10-shot) | Novel AP75 (30-shot) |
|---|---|---|---|---|
| Supervised | 12.4 | 20.4 | 13.3 | 21.8 |
| DETReg | 13.7 | 22.6 | 15.1 | 24.3 |
How did you conduct the fine-tuning experiment? The authors did not provide the few-shot object detection fine-tuning command.