
Provide additional details on HPatches evaluation

Parskatt opened this issue 3 years ago • 8 comments

Hi, I'm interested in exactly how you calculated the AUC for homographies on HPatches.

Related: #105

Parskatt • Jan 12 '22 13:01

Hi, I've checked the related issue. We use the AUC implementation from SuperGlue. Here's the code in our repo: https://github.com/zju3dv/LoFTR/blob/94e98b695be18acb43d5d3250f52226a8e36f839/src/utils/metrics.py#L151-L154

And as stated in the paper, the error term is the L2 distance between the image corners projected by the ground-truth and by the estimated homography: error = l2norm(homography_projection(corners, H_gt), homography_projection(corners, H_estimate)). To compute H_estimate, we use cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC, 3.)
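Putting the pieces together, here is a minimal sketch of that evaluation (the helper names, the mean over the four corners, and the default thresholds are my assumptions; the AUC integration follows the SuperGlue-style code linked above):

```python
import cv2
import numpy as np

def corner_error(H_gt, H_est, w, h):
    # Project the four image corners with both homographies and take the
    # mean L2 distance (averaging over the corners is an assumption here).
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]],
                       dtype=np.float64).reshape(-1, 1, 2)
    proj_gt = cv2.perspectiveTransform(corners, H_gt)
    proj_est = cv2.perspectiveTransform(corners, H_est)
    return np.linalg.norm(proj_gt - proj_est, axis=-1).mean()

def error_auc(errors, thresholds=(3, 5, 10)):
    # SuperGlue-style AUC: area under the recall-vs-error curve,
    # integrated up to each threshold and normalized by it.
    errors = np.sort(np.append(errors, 0.0))
    recall = np.linspace(0, 1, len(errors))
    aucs = []
    for thr in thresholds:
        last = np.searchsorted(errors, thr)
        x = np.append(errors[:last], thr)
        y = np.append(recall[:last], recall[last - 1])
        aucs.append(np.trapz(y, x) / thr)
    return aucs

# Homography estimation from predicted matches, as described above:
# H_est, _ = cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC, 3.)
```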

zehongs • Jan 18 '22 02:01

Thanks for the response.

Looking at your code, the thresholds are always set to [5, 10, 20], but the paper reports [3, 5, 10]. Are [5, 10, 20] the thresholds you actually used?

If you have the time, could you help reproduce the results in this repo?

https://github.com/GrumpyZhou/image-matching-toolbox/issues/20

Parskatt • Jan 18 '22 06:01

The results there were obtained with OpenCV RANSAC at a 2 px threshold, evaluated at error thresholds of [3, 5, 10] px:

| Method | AUC @ 3px | AUC @ 5px | AUC @ 10px |
| --- | --- | --- | --- |
| SuperPoint | 0.37 | 0.51 | 0.68 |
| SuperPoint+SuperGlue | 0.39 | 0.53 | 0.71 |
| CAPS (w. SuperPoint) | 0.33 | 0.49 | 0.67 |
| LoFTR (all matches) | 0.48 | 0.60 | 0.74 |

Parskatt • Jan 18 '22 06:01

The results seem to align with yours, but all methods score lower, so there seems to be some discrepancy.

Parskatt • Jan 18 '22 06:01

From your paper: [screenshot of the evaluation protocol paragraph, which states that images are resized so that their shorter dimensions equal 480]

Should this be interpreted to mean that you also scale the homography?

Parskatt • Jan 18 '22 07:01

Yes, this follows Sec. 7.3 of the SuperPoint paper. By scaling the images and then comparing the corner AUCs, the evaluation results are consistent across all image pairs; otherwise, the evaluation can easily deteriorate with large image inputs. You can also replace the pixel-level threshold with a normalized pixel-level threshold.
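Concretely, if the two images are resized by per-axis factors, the ground-truth homography has to be adjusted to operate in the resized coordinates. A minimal sketch (the function and argument names are illustrative, not from the repo):

```python
import numpy as np

def scale_homography(H, sx1, sy1, sx2, sy2):
    # H maps original-img1 coords to original-img2 coords; the result maps
    # resized-img1 coords to resized-img2 coords.
    S1 = np.diag([sx1, sy1, 1.0])  # original img1 -> resized img1
    S2 = np.diag([sx2, sy2, 1.0])  # original img2 -> resized img2
    return S2 @ H @ np.linalg.inv(S1)
```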

zehongs • Jan 19 '22 02:01

Hi @zehongs, I'm also trying to reproduce the results on HPatches. In your last comment above, did you mean that you scaled the ground-truth homography to evaluate on the scaled images? Or did you do the following: scale the images and estimate the matches, then rescale the matches back to the original resolution and evaluate against the original ground-truth homography?
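For clarity, the second option would look roughly like this (a sketch; the helper and scale-factor names are hypothetical):

```python
import numpy as np

def matches_to_original(pts_resized, sx, sy):
    # Map keypoints found on a resized image back to original-image
    # coordinates, given the per-axis resize factors sx, sy.
    return pts_resized / np.array([sx, sy])
```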

TruongKhang • Mar 03 '22 02:03

@zehongs We have been able to reproduce the numbers in https://github.com/GrumpyZhou/image-matching-toolbox/issues/20 ; if you could comment on or clarify the points below, it would be very helpful:

  • Is it correct that the corner errors are computed on the downsized images?
  • When you write "shorter dimensions equal to 480", do you mean that, as in SuperPoint, all images are rescaled to 480x640 (or 640x480), or do you keep the aspect ratio so that the dimensions are 480x?? or ??x480?
  • Do you use any special hyperparameter settings for the HPatches experiments? For instance, I found that increasing MATCH_COARSE.THR to 0.5 yields better results than the default 0.2 (see the sketch after this list). I also noticed this comment: https://github.com/zju3dv/LoFTR/blob/5d6c83428ab57987e5c4d42374b86e8b1f9cb520/test.py#L49-L51
  • What settings did you use for cv2.findHomography? EDIT: I see now that you specified this further up in the thread.
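For reference, the threshold override I used looks roughly like this (a sketch assuming the repo's yacs-based config; the import path and key path are taken from src/config/default.py and may differ):

```python
from src.config.default import get_cfg_defaults  # path assumed from the repo layout

config = get_cfg_defaults()
# Raise the coarse-matching confidence threshold from the default 0.2:
config.merge_from_list(['LOFTR.MATCH_COARSE.THR', 0.5])
```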

Thank you!

georg-bn • Mar 30 '22 08:03