Johan Edstedt
Hi! This is the prior we use from SfM. It basically forces the network to detect the SfM points, but allows additional points.
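One way to sketch the idea of an SfM prior that rewards detections at SfM track locations without penalizing extra detections (a minimal illustration, not the actual DeDoDe objective; the function and argument names are made up):

```python
import numpy as np

def sfm_prior_loss(score_map, sfm_coords):
    """Encourage high detector scores at SfM point locations.

    score_map:  (H, W) array of detector scores in (0, 1].
    sfm_coords: (N, 2) integer array of (row, col) SfM track locations.

    Only the scores *at* the SfM points enter the loss, so the network
    is free to fire on additional points elsewhere in the image.
    """
    eps = 1e-8
    scores_at_sfm = score_map[sfm_coords[:, 0], sfm_coords[:, 1]]
    return float(-np.log(scores_at_sfm + eps).mean())
```

A map that scores the SfM points highly gets a lower loss than one that misses them, while the rest of the map is unconstrained.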
I think the implementation is correct. The MNN may return different numbers of matches per batch, so we need the batch ids to identify which pair each match comes from. In practice...
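A minimal sketch of why the batch ids are needed: mutual-nearest-neighbour matching yields a variable number of matches per pair, so the per-pair results are tagged with a batch id before concatenation (illustrative NumPy code, function names are hypothetical):

```python
import numpy as np

def mnn_match(desc_a, desc_b):
    """Mutual nearest neighbours between two descriptor sets (one row per point)."""
    sim = desc_a @ desc_b.T                 # similarity matrix (A x B)
    nn_ab = sim.argmax(axis=1)              # best B index for each A
    nn_ba = sim.argmax(axis=0)              # best A index for each B
    ia = np.arange(len(desc_a))
    mutual = nn_ba[nn_ab] == ia             # A -> B -> back to the same A
    return np.stack([ia[mutual], nn_ab[mutual]], axis=1)

def batched_mnn(descs_a, descs_b):
    """Match each pair in the batch; counts differ per pair, so tag matches
    with the batch id so downstream code knows which pair they belong to."""
    matches, batch_ids = [], []
    for b, (da, db) in enumerate(zip(descs_a, descs_b)):
        m = mnn_match(da, db)
        matches.append(m)
        batch_ids.append(np.full(len(m), b))
    return np.concatenate(matches), np.concatenate(batch_ids)
```

Concatenating the ragged per-pair match lists into one flat array plus a batch-id array is a common way to avoid padding when match counts vary.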
I suggest you use the augmentation of SiLK with our architecture. I have tried making the objective unsupervised but haven't had success so far.
Ouch. Good catch. Fix lgtm. Btw @edgarriba, are you using 0 or 0.5 as the top left?
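For context on the 0-vs-0.5 question: the two conventions differ in where the top-left pixel center sits when mapping pixel indices to normalized coordinates (a small illustration under assumed conventions; not tied to any specific library):

```python
import numpy as np

def pix_to_normalized(ix, size, convention="corner"):
    """Map an integer pixel index to [-1, 1].

    'corner': pixel centers at 0, 1, ..., size-1 (top-left center at 0).
    'center': pixel centers at 0.5, 1.5, ...    (top-left center at 0.5).
    """
    ix = np.asarray(ix, dtype=float)
    if convention == "corner":
        return 2.0 * ix / (size - 1) - 1.0
    return 2.0 * (ix + 0.5) / size - 1.0
```

Mixing the two conventions between detector output and sampling code introduces a systematic half-pixel offset, which is exactly the kind of bug this question is probing for.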
@ducha-aiki could you approve this PR?
There will probably not be; it was just a side project I worked on. Feel free to train one yourself, it's pretty cheap (any standard GPU will do fine).
We have a confidence threshold for the matches based on the ds scores. If you remove that you'll get more outliers. Otherwise yes, I see no major issue.
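The thresholding step amounts to something like the following (an illustrative sketch, not the repo's actual code; the threshold value is a placeholder):

```python
import numpy as np

def filter_matches(matches, scores, conf_thresh=0.1):
    """Keep only matches whose confidence score clears the threshold.

    matches: (N, 2) array of index pairs.
    scores:  (N,) array of per-match confidence scores.

    Lowering or removing conf_thresh keeps more matches but lets
    more low-confidence outliers through.
    """
    keep = scores >= conf_thresh
    return matches[keep], scores[keep]
```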
That's great, generalizing across resolutions is definitely something I would want. As to your question, we set the resolution for the global/coarse matching to be a bit above the training resolution...
Hi, sorry for the late response. 1. I think RoMa generally converges slower than LoFTR, yes. However, there is a marginal difference between e.g. 4M samples (with the updated schedule) and 8M. 2....
Iirc it matters quite a lot how you sample the points; I'll see if I can find my old eval pipeline.