VFA
The experimental results did not meet the benchmarks reported in the paper
Thank you for your contribution to this amazing work. When I used the provided code for training, the mAP of base training was 74.1, and the novel AP of 1-shot fine-tuning was only 30, which does not meet the benchmarks reported in the paper. I did not modify any configurations other than changing warmup_iters to 500. Has anyone encountered this situation? Thank you for any responses or solutions.
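For reference, the warmup change in an MMDetection-style schedule config would look roughly like the sketch below; the decay steps and warmup ratio are placeholders, not the repo's exact values:

```python
# Hypothetical excerpt of the training schedule config (MMDetection style).
# Only warmup_iters was changed from the repository default to 500.
lr_config = dict(
    policy='step',         # step LR decay schedule
    warmup='linear',       # linear warmup from warmup_ratio * lr
    warmup_iters=500,      # the only modified field
    warmup_ratio=0.001,    # assumed starting factor
    step=[12000, 16000])   # assumed decay steps; use the repo's values
```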
When I use the provided pretraining weights, the novel AP reaches 58.6. Is there a problem with my pretraining process? Which configurations need to be modified? I trained on a single machine with a single GPU (Tesla V100), targeting only split1.
UPDATE: Following #8, I set lr=0.0025 during pretraining and the novel AP reached 52.0, but this is still far from the benchmarks.
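For context, lr=0.0025 matches the linear scaling rule: if the repo's reference schedule assumes 8 GPUs with a base lr of 0.02, a single GPU with the same per-GPU batch size should scale the lr down by 8. A minimal sketch, assuming an MMDetection-style optimizer config and a 0.02 reference lr:

```python
# Linear scaling rule: lr scales with total batch size.
# Assumed reference: 8 GPUs x 2 imgs/GPU at lr=0.02;
# on 1 GPU with the same per-GPU batch size, divide by 8.
optimizer = dict(type='SGD', lr=0.02 / 8,  # = 0.0025
                 momentum=0.9, weight_decay=0.0001)
```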
Hi @gladdduck, I think there are two reasons: (a) one-shot results are more sensitive to random seeds and other training factors; (b) single-GPU training may differ from 8-GPU training because of the model's BatchNorm layers.
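If the default config uses SyncBN (which cannot synchronize across GPUs in a single-GPU run), one workaround is to fall back to plain BN and fix the seed to reduce run-to-run variance. A sketch, assuming an MMDetection-style norm_cfg; the exact field location depends on the VFA config:

```python
# Sketch, assuming an MMDetection-style config: single-GPU training
# has no cross-GPU statistics, so use plain BN instead of SyncBN
# (results may still differ slightly from 8-GPU SyncBN training).
norm_cfg = dict(type='BN', requires_grad=True)  # instead of type='SyncBN'

# Fixing the seed also helps with the sensitivity of 1-shot results,
# e.g. via MMDetection's train.py flags: --seed 42 --deterministic
```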
Hello, I have the same problem. Have you solved it?