
Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021, T-PAMI 2022

80 LoFTR issues

```data['mask0']``` and ```data['mask1']```, which indicate the padded regions, are coarse-level, as computed in ```megadepth.py```, yet those masks are used to mask out original-resolution matching points [here](https://github.com/zju3dv/LoFTR/blob/df7ca80f917334b94cfbe32cc2901e09a80e70a8/src/loftr/utils/supervision.py#L56). Even in the case where...
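The relationship between the two resolutions can be sketched as follows. This is a minimal illustration (the helper name and sizes are hypothetical, not from the LoFTR codebase), assuming the default coarse stride of 8: a full-resolution validity mask over the padded image is reduced to the coarse grid by strided slicing, which is why a coarse-level mask is enough to track padded borders.

```python
import numpy as np

def coarse_mask(h, w, pad_h, pad_w, stride=8):
    """Hypothetical helper: build a full-resolution mask over the padded
    image (True = valid pixel, False = padding) and downsample it to the
    coarse grid by strided slicing."""
    mask = np.zeros((pad_h, pad_w), dtype=bool)
    mask[:h, :w] = True              # valid (un-padded) region
    return mask[::stride, ::stride]  # coarse-level mask

# A 480x600 image padded to 480x640: the coarse grid is 60x80,
# and columns at or beyond 600 / 8 = 75 correspond to padding.
m = coarse_mask(h=480, w=600, pad_h=480, pad_w=640)
assert m.shape == (60, 80)
assert m[:, :75].all() and not m[:, 75:].any()
```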

Does the size of the input image have to be an integer multiple of 64?

Thank you very much for your excellent work. I recently reproduced the training results on 4 RTX 3090 GPUs for 30 epochs following the README. The batch size per GPU is...

Hi authors, thanks for your great work! I saw in the FAQ that, since the D2-Net pre-processed data is no longer available, you suggest using MegaDepth directly. May I ask how...

Is the purpose of the masks to handle padding at the image border? https://github.com/zju3dv/LoFTR/blob/b4ee7eb0359d0062e794c99f73e27639d7c7ac9f/src/loftr/loftr.py#L35 I have a scenario where I want to detect feature matches in an image except...

At line 245 of coarse_matching.py, the index order looks reversed to me. Why is it ```mkpts0_c = torch.stack([i_ids % data['hw0_c'][1], i_ids // data['hw0_c'][1]], dim=1) * scale0``` rather than ```mkpts0_c = torch.stack([i_ids // data['hw0_c'][1], i_ids % data['hw0_c'][1]], dim=1) * scale0```?
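The order is most likely intentional rather than reversed: for a row-major flattened grid of width ```W```, index ```i``` maps to column ```x = i % W``` and row ```y = i // W```, and keypoints are conventionally stored as ```(x, y)```, x first. A minimal sketch (grid size hypothetical):

```python
# Why `i_ids % W` comes first: a row-major flattened index i decomposes
# into column x = i % W and row y = i // W. Stacking [x, y] therefore
# yields (x, y) coordinates, the usual keypoint convention (x before y).
W = 5                    # hypothetical coarse grid width
i = 7                    # flattened index: row 1, column 2
x, y = i % W, i // W
assert (x, y) == (2, 1)  # column 2, row 1 -> keypoint (x=2, y=1)
```

Swapping the two terms would produce ```(y, x)``` pairs, which would then be scaled and plotted with the axes transposed.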

Hi, I see that the source code reads the .h5 files in the ```depth0``` directory of the MegaDepth dataset when generating the ground truth. When I parse ```['depth']``` from those .h5 files with your code, it returns all zeros. How can I fix this?

My goal is to detect key points on two images, and I am wondering if your model can be restricted to a specific area.

Has anyone tried to get this running with TensorRT or C++? It would be good to know the rough FPS for this. Cheers,