ReLA
Can I use referring_swin_base.yaml to reproduce RefCOCO, RefCOCO+, RefCOCOg results in the paper?
Yes you can.
@ntuLC I tried doing this and got the following results:

| Method | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-U | RefCOCOg test-U | RefCOCOg val-G |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ReLA (paper) | 73.82 | 76.48 | 70.18 | 66.04 | 71.02 | 57.65 | 65.00 | 65.97 | 62.70 |
| ReLA (reproduction) | 73.29 | 76.06 | 69.56 | 64.65 | 70.27 | 56.70 | 63.57 | 65.06 | 60.76 |
which are quite a bit lower than the results published in the paper.
Just curious, from your experience, how stable are the results? Do they have large variance?
by only modifying DATASETS.TRAIN and DATASETS.TEST
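For anyone asking below how the config was changed: a minimal sketch of the override, assuming Detectron2-style configs as used in this repo. The dataset names here (`refcoco_unc_train`, `refcoco_unc_val`, etc.) are placeholders; check the names actually registered in this codebase's dataset registration code before using them.

```yaml
# Hypothetical referring_swin_base_refcoco.yaml — only the DATASETS block
# differs from the provided referring_swin_base.yaml; all other keys are
# inherited unchanged via _BASE_.
_BASE_: referring_swin_base.yaml
DATASETS:
  # Replace the gRefCOCO splits with the RefCOCO ones.
  # NOTE: split names below are assumptions — use whatever names the repo
  # registers for RefCOCO / RefCOCO+ / RefCOCOg.
  TRAIN: ("refcoco_unc_train",)
  TEST: ("refcoco_unc_val", "refcoco_unc_testA", "refcoco_unc_testB")
```

The same two keys can alternatively be overridden on the command line (`DATASETS.TRAIN '("refcoco_unc_train",)' ...`) without editing the YAML.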
Hi there, I also want to reproduce results on the three datasets. I would appreciate it if you could tell me in detail how to modify the YAML.
Hello, do you know how to reproduce the three datasets now? I don't know either. @Yeemkt
Hello, sorry to bother you. I have just modified DATASETS.TRAIN and DATASETS.TEST in referring_R50.yaml, but the results are rather poor. Could you please tell me what changes you made? @yxchng @ntuLC @Yeemkt
Hi,
- Are you using the provided R50 checkpoint for evaluating RefCOCO+? The provided checkpoints are trained on gRefCOCO only, so they are only applicable for evaluation on gRefCOCO.
- The RES results reported in the paper are based on Swin-Base, so they are not comparable with results based on ResNet backbones.