Johan Edstedt
Because if you train dense matchers on homographies, they only do well on homographies.
Not sure what you mean, could you clarify?
1. It's different depending on the encoder and decoder; the settings should be in the train experiment. Grad clip is 0.01, I think. Basically you can set the grad clip thr...
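For reference, a minimal sketch of how a grad clip threshold is typically set in PyTorch (the model and optimizer here are placeholders, and 0.01 is just the value mentioned above):

```python
import torch

# Toy model and optimizer standing in for the real training setup.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
# Clip the global gradient norm to the threshold before the optimizer step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.01)
optimizer.step()
optimizer.zero_grad()
```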
It was trained for 4 days with 4 A100 GPUs. You can also avoid issues by using bfloat16 instead of float16.
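For example, switching autocast to bfloat16 looks roughly like this (standard PyTorch; bfloat16 keeps float32's exponent range, so it avoids the overflow/underflow issues float16 can hit and needs no GradScaler; it requires hardware with bf16 support, e.g. A100):

```python
import torch

model = torch.nn.Linear(64, 64).cuda()
x = torch.randn(8, 64, device="cuda")

# bfloat16 autocast: no loss scaling needed, unlike float16.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).float().mean()
loss.backward()
```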
I think the original weights are just loaded and then we overwrite them with our checkpoint? If not, please let me know; I didn't spend that much time verifying things.
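As a sketch of that load order (the model and file name here are stand-ins, not the repo's actual code):

```python
import torch
import torchvision

# Build the model with its original pretrained weights...
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# ...then overwrite them with the released checkpoint. A saved copy of the
# state_dict stands in for the real checkpoint file here.
torch.save(model.state_dict(), "checkpoint.pth")
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state)  # checkpoint values replace the pretrained ones
```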
I suggest you visualize the warp and confidence before the sampling, try some different resolutions, and see what works best.
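Something like this (assumed RoMa-style shapes: `warp` of shape (H, W, 4) with coordinates for both images and `certainty` of shape (H, W); adapt to whatever your model actually returns):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_warp_and_certainty(warp, certainty):
    # Plot the x/y warp channels pointing into image B plus the confidence map.
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(warp[..., 2], cmap="viridis"); axes[0].set_title("warp x")
    axes[1].imshow(warp[..., 3], cmap="viridis"); axes[1].set_title("warp y")
    axes[2].imshow(certainty, cmap="gray"); axes[2].set_title("confidence")
    for ax in axes:
        ax.axis("off")
    plt.tight_layout()
    plt.show()

# Dummy data just to demonstrate the call; use the model's actual outputs.
show_warp_and_certainty(np.random.rand(56, 56, 4), np.random.rand(56, 56))
```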
Hi, could you be a bit more precise? The refiners use a coarse-to-fine approach, which is common in matching tasks.
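If it helps, here's a toy illustration of the general coarse-to-fine idea (not the repo's actual refiner API; the conv layers stand in for learned refiners):

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_refine(coarse_warp, refiners):
    # Start from a warp estimated at the coarsest resolution...
    warp = coarse_warp  # (B, 2, H, W)
    for refiner in refiners:
        # ...upsample it, then let each refiner predict a residual correction.
        warp = F.interpolate(warp, scale_factor=2, mode="bilinear",
                             align_corners=False)
        warp = warp + refiner(warp)
    return warp

refiners = [torch.nn.Conv2d(2, 2, 3, padding=1) for _ in range(3)]
coarse = torch.zeros(1, 2, 20, 20)          # e.g. 1/8 resolution of 160x160
print(coarse_to_fine_refine(coarse, refiners).shape)  # -> (1, 2, 160, 160)
```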
Typically you get worse performance that way; you can use more channels at lower resolution instead. If you use a single network, that's difficult.
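The back-of-envelope reason: a 3x3 conv costs roughly H * W * C_in * C_out * 9 FLOPs, so halving the resolution frees a 4x budget you can spend on channels. Illustrative numbers (a 560x560 input is assumed):

```python
def conv_flops(h, w, c_in, c_out, k=3):
    # Rough multiply-accumulate count for a kxk convolution.
    return h * w * c_in * c_out * k * k

# 512 channels at 1/8 resolution costs the same as 128 channels at 1/2:
print(conv_flops(70, 70, 512, 512))     # 11,560,550,400
print(conv_flops(280, 280, 128, 128))   # 11,560,550,400
```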
Not impossible, but I'm not sure what the benefit would be.
Hi, we use a subset of the scenes that should match the ones used by LoFTR. See megadepth.py for specifics. Table 5 is on megadepth1500, which you can get from...