OpenIBL
[ECCV-2020 (spotlight)] Self-supervising Fine-grained Region Similarities for Large-scale Image Localization. 🌏 PyTorch open-source toolbox for image-based localization (place recognition).
Hi Dr. Ge, thanks for sharing this nice work! I have successfully reproduced the reported results on Pittsburgh250k. However, on Tokyo247, I got slightly worse results than reported. Here is...
Thank you for your work. If I want to change the distance threshold, what should I modify in the code?
Hello, I tried your pretrained model with the cnnimageretrieval-pytorch test script, and I got mAP: 67.90 for Oxford (73.9 in your paper) and mAP: 76.64 for Paris (82.5 on your...
Thank you for open-sourcing the code and the detailed documentation. I want to know whether you evaluated the model on the RParis and ROxford datasets. I have tried to evaluate it...
Thank you for releasing the code. When reproducing the SARE results, I can match the numbers in your paper with the dot-product-based code, but not...
Hello, thanks for your amazing work! I want to visualize the feature map like Fig. 5 in the paper, but I don't know how to do it. Could you help me please?...
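For context, a minimal sketch of one common way to pull out and plot a backbone feature map in PyTorch, using a forward hook. This is not the authors' visualization code; the torchvision VGG16 backbone, the layer index, the image size, and the file names are all assumptions for illustration, and OpenIBL's own model and layer names may differ.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import matplotlib.pyplot as plt

# Generic backbone used only for illustration (assumption, not OpenIBL's model).
model = models.vgg16(pretrained=True).eval()
features = {}

def hook(_module, _inp, out):
    # Store the activation of the hooked layer during the forward pass.
    features["conv5"] = out.detach()

# Hook the last conv layer of the VGG16 feature extractor (assumed layer index).
model.features[28].register_forward_hook(hook)

transform = T.Compose([
    T.Resize((480, 640)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = transform(Image.open("query.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    model(img)

# Collapse the channel dimension into a single heat map and save it.
fmap = features["conv5"][0].mean(dim=0)
plt.imshow(fmap.cpu().numpy(), cmap="jet")
plt.axis("off")
plt.savefig("feature_map.png", bbox_inches="tight")
```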
Hello, thanks a lot for this valuable work. I tried your extract.py for image retrieval during visual localization, and it works great. But I want to train your SFRS on my own...
Nice work! Could you please upload the Pitts and Tokyo datasets to Google Drive? I cannot find anywhere else to get them.
I just ran the code without any changes and found the initial recall scores are as follows: Recall Scores: top-1 1.2%, top-5 4.6%, top-10 8.7%. Then I continued the training...
Thanks for your work! I saw you are using T.Normalize(mean=[0.48501960784313836, 0.4579568627450961, 0.4076039215686255], std=[0.00392156862745098, 0.00392156862745098, 0.00392156862745098]) in get_transformer_train and get_transformer_test, which differs from the commonly used T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),...
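As a side note on those constants, here is a small numerical check (my own sketch, not code from the repository) of what they amount to: with std = 1/255, T.Normalize maps a [0, 1] tensor to 255 * x minus a per-channel pixel mean of roughly [123.68, 116.779, 103.939]. Whether this is intentional, for example to match weights trained with a 0-255, mean-subtraction-only preprocessing, is my assumption and not something confirmed here.

```python
import torch
import torchvision.transforms as T

mean = [0.48501960784313836, 0.4579568627450961, 0.4076039215686255]
std = [0.00392156862745098] * 3   # == 1 / 255

norm = T.Normalize(mean=mean, std=std)
x = torch.rand(3, 4, 4)           # fake image tensor in [0, 1], as produced by ToTensor()

out = norm(x)
# Equivalent operation: rescale to [0, 255] and subtract a per-channel mean pixel.
ref = x * 255 - torch.tensor([123.68, 116.779, 103.939]).view(3, 1, 1)

print(torch.allclose(out, ref, atol=1e-3))   # True: same operation, different notation
print([m * 255 for m in mean])               # ~[123.68, 116.779, 103.939]
```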