
Experimental results do not meet the benchmarks reported in the paper

gladdduck opened this issue 1 year ago · 4 comments

Thank you for your contribution to this amazing work. When I trained with the provided code, the base-training mAP was 74.1, but the novel AP after 1-shot fine-tuning was only 30, which does not meet the benchmarks reported in the paper. I did not modify any configuration except changing warmup_iters to 500. Has anyone encountered this situation? Thank you for any responses or solutions.

gladdduck avatar Jan 07 '24 02:01 gladdduck

When I use the provided pretrained weights, the novel AP reaches 58.6. Is there a problem with my pretraining process? Which configurations need to be modified? I trained on a single machine with a single GPU (Tesla V100), targeting only split 1.

gladdduck avatar Jan 07 '24 02:01 gladdduck

UPDATE: Following #8, I set lr=0.0025 during pretraining and the novel AP reached 52.0, but that is still far from the reported benchmarks.

gladdduck avatar Jan 07 '24 07:01 gladdduck
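The lr=0.0025 value above is consistent with the linear scaling rule commonly used in mmdetection-style configs: the learning rate is scaled in proportion to the total batch size. A minimal sketch, assuming the repo's default is lr=0.02 for 8 GPUs with 2 images per GPU (these defaults are assumptions; check the actual config):

```python
def scale_lr(base_lr, base_gpus, base_samples_per_gpu, gpus, samples_per_gpu):
    """Linear scaling rule: lr scales with total batch size."""
    base_batch = base_gpus * base_samples_per_gpu
    batch = gpus * samples_per_gpu
    return base_lr * batch / base_batch

# Assumed default: lr=0.02 for 8 GPUs x 2 images each (total batch 16).
# Moving to 1 GPU x 2 images (total batch 2) gives:
scale_lr(0.02, 8, 2, 1, 2)  # -> 0.0025
```

If your single-GPU run also changes samples_per_gpu, the learning rate should be rescaled accordingly.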

Hi @gladdduck, I think there are two possible reasons: (a) one-shot results are sensitive to random seeds and other training factors; (b) single-GPU training can behave differently from 8-GPU training because of the model's BatchNorm layers.

csuhan avatar Jan 10 '24 12:01 csuhan

> UPDATE: Following #8, I set lr=0.0025 during pretraining and the novel AP reached 52.0, but that is still far from the reported benchmarks.

Hello, I have the same problem. Have you solved it?

dogdog258 avatar Apr 16 '24 08:04 dogdog258