image-matching-toolbox
Test LoFTR on InLoc
Hi, I used image-matching-toolbox to evaluate LoFTR on InLoc with this config:

```yaml
default: &default
  class: 'LoFTR'
  ckpt: 'pretrained/loftr/outdoor_ds.ckpt'
  match_threshold: 0.2
  imsize: 1024
  no_match_upscale: False

example:
  <<: *default
  match_threshold: 0.5
  imsize: -1

hpatch:
  <<: *default
  imsize: 480
  no_match_upscale: True

inloc:
  <<: *default
  pairs: 'pairs-query-netvlad40-temporal.txt'
  rthres: 48
  skip_matches: 20
```
and then got the result below:

Hi @xunfeng2zkj ,
Could you detail what your issue is?
Can I cite this result and this repo in my paper?
@xunfeng2zkj ,
Do you mean for LoFTR? I think the original paper did not use this repo, so I would rather stick to the paper numbers when comparing to LoFTR on InLoc.
Regarding citation, if you use it to run your own methods, yes you can cite this repo.
On the other hand, I remember I obtained better results for LoFTR using this repo: (48.5 / 73.2 / 83.8 | 55.0 / 75.6 / 83.2).
Can you try with the config I used:

```yaml
inloc:
  <<: *default
  match_threshold: 0.5
  npts: 4096
  imsize: 1024
  pairs: 'pairs-query-netvlad40-temporal.txt'
  rthres: 48
  skip_matches: 20
```
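For reference, the `<<: *default` merge key means each dataset section inherits every default key and then overrides or extends it. A minimal pure-Python sketch of how the `inloc` section above resolves (plain dicts stand in for the parsed YAML):

```python
# Defaults shared by all dataset sections, as in the config above.
default = {
    'class': 'LoFTR',
    'ckpt': 'pretrained/loftr/outdoor_ds.ckpt',
    'match_threshold': 0.2,
    'imsize': 1024,
    'no_match_upscale': False,
}

# `<<: *default` plus explicit keys behaves like a dict merge with overrides.
inloc = {**default,
         'match_threshold': 0.5,   # overrides the default 0.2
         'npts': 4096,
         'pairs': 'pairs-query-netvlad40-temporal.txt',
         'rthres': 48,
         'skip_matches': 20}

print(inloc['match_threshold'])  # 0.5 (overridden)
print(inloc['ckpt'])             # inherited from default
```

So the suggested `inloc` run uses the outdoor checkpoint and image size from `default`, with only the matching and RANSAC parameters changed.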
Hi, I have also tested LoFTR but only got this result.

This is my setting:

Maybe I need to use the indoor weights?
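If the indoor weights turn out to matter, switching should presumably be just a checkpoint swap in the same config. The path below is a guess based on LoFTR's released checkpoint naming, so adjust it to wherever your checkpoint actually lives:

```yaml
inloc:
  <<: *default
  ckpt: 'pretrained/loftr/indoor_ds.ckpt'  # hypothetical path to the indoor checkpoint
  pairs: 'pairs-query-netvlad40-temporal.txt'
  rthres: 48
  skip_matches: 20
```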
Hi @ewrfcas ,
As far as I remember, their outdoor weights generally worked for me. Have you tried to reproduce LoFTR on HPatches first (https://github.com/GrumpyZhou/image-matching-toolbox/issues/20#issuecomment-1082398225)? Just in case there are some setup issues with LoFTR. I will probably try this again on InLoc once I find some time. For now, you can try tuning parameters like ransac_thres or match_threshold a bit and see how they change the performance.
Thanks for your reply. I have tested HPatches and got AUCs of 64.58 (3px), 74.77 (5px), and 84.32 (10px). Besides, I read the LoFTR paper and found that they used LoFTR-OT, while I used LoFTR-DS. Would that cause the performance gap?
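For anyone comparing these HPatches numbers: the AUC at a pixel threshold is typically the area under the cumulative recall-vs-error curve, normalised by the threshold. A stdlib-only sketch of that computation, assuming `errors` are per-pair mean corner errors in pixels (this may differ in detail from the repo's exact implementation):

```python
def error_auc(errors, thresholds=(3, 5, 10)):
    """Area under the recall-vs-error curve up to each pixel threshold."""
    errs = sorted(errors)
    n = len(errs)
    aucs = []
    for t in thresholds:
        # Build the cumulative curve, truncated at t and extended to it.
        e, r = [0.0], [0.0]
        for i, err in enumerate(errs):
            if err > t:
                break
            e.append(err)
            r.append((i + 1) / n)
        e.append(t)
        r.append(r[-1])
        # Trapezoidal integration, normalised so a perfect matcher scores 1.0.
        area = sum((e[i + 1] - e[i]) * (r[i + 1] + r[i]) / 2
                   for i in range(len(e) - 1))
        aucs.append(area / t)
    return aucs

print(error_auc([0.0, 0.0], (3,)))  # [1.0]: every pair under threshold
```

Multiplying the returned values by 100 gives percentages on the same scale as the numbers quoted above.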
Hi, I have tried another setting, confidence threshold = 0.2, and achieved:
This result is better than the previous one with a 0.5 threshold:
Any comments?
I remember that I removed the "try...except..." code for debugging. Would that have any influence on the final results?
Hi @ewrfcas,
For HPatches, this looks good. According to https://github.com/zju3dv/LoFTR/issues/65, they are using a finetuned version of LoFTR-OT. To be honest, I don't remember whether I have tried LoFTR-OT. Also, I have recently pulled the latest version of LoFTR, so something may have been updated since the last time I ran LoFTR myself using immatch. I cannot comment much on this without investigating it myself, and I am not sure how long it will take before I get back to you on this.
On the other hand, I cannot guarantee that the way I evaluated LoFTR, e.g., how I do the quantization, is the same as what the LoFTR authors did. If you want to compare to LoFTR, I would recommend using their released numbers, but you can still use this repo to evaluate your own methods.
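To make the quantization point concrete: dense matchers output sub-pixel coordinates, and snapping them to a pixel grid (and deduplicating) before pose estimation can shift results. The sketch below is hypothetical, i.e. `quantize_matches` illustrates one possible scheme, not necessarily what this repo or the LoFTR authors actually do:

```python
def quantize_matches(matches, grid=1.0):
    """Snap (x1, y1, x2, y2) matches to a pixel grid and drop duplicates."""
    seen, out = set(), []
    for x1, y1, x2, y2 in matches:
        q = (round(x1 / grid) * grid, round(y1 / grid) * grid,
             round(x2 / grid) * grid, round(y2 / grid) * grid)
        if q not in seen:      # keep only the first match per grid cell pair
            seen.add(q)
            out.append(q)
    return out
```

Two evaluations that differ only in whether (or how) they apply such a step can report different pose numbers, which is why comparing against the authors' released numbers is the safer option.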
Anyhow, thanks for your response. I will try LoFTR-OT and report the results here.
Hey guys, I'm using the default settings of this repo to test LoFTR on InLoc, but the speed is really slow: tqdm says 6 hours are needed. Is that normal? Thanks! BTW, I'm running on a single A100, and LoFTR runs fast on MegaDepth with its own repo.
Also, the evaluation doc has some misleading information: in the data tree, the 'InLoc' folder should have a subfolder named 'database', not 'dataset', according to the 'pairs....txt'. cc @ewrfcas @GrumpyZhou @xunfeng2zkj
There are ~14k InLoc inference pairs. You can time the image-pair matching alone and check whether that is normal.
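One way to isolate matcher time from I/O and pose-solving overhead, as suggested above, is to time the match call alone over a small sample and extrapolate to the full pair list. Here `match_fn` is a placeholder for whatever matching call you use, and 14000 stands in for the ~14k pairs mentioned above:

```python
import time

def profile_matching(pairs, match_fn, total_pairs=14000):
    """Time only the matcher call over a sample and project the full run."""
    t0 = time.perf_counter()
    for im1, im2 in pairs:
        match_fn(im1, im2)
    per_pair = (time.perf_counter() - t0) / len(pairs)
    projected_hours = per_pair * total_pairs / 3600
    return per_pair, projected_hours
```

If the projected time is far below 6 hours, the bottleneck is likely outside the matcher (e.g. image loading, resizing, or RANSAC), not LoFTR itself.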
Thanks for pointing out the eval doc mistake. I will update it.