Johan Edstedt
And yes, this training takes quite a long time. As we report in the paper, it takes about 4 days on 4 A100s. This is currently one of the downsides...
Ok, good to hear that the code seems to work :D I didn't eval on mega1500 during training, so I'm not completely sure what the eval metrics look like at that stage. Here...
So the backbone was pretrained on your dataset and then frozen, like in RoMa? We have an experiment regarding the performance of different frozen backbone features; perhaps you could try,...
I think it's rather due to the implementation of the KDE, which is naive (https://github.com/Parskatt/RoMa/blob/main/roma/utils/kde.py). Getting 20k matches from the KDE requires starting from 80k and then resampling down to 20k.
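For context, the reason this is heavy is that a naive Gaussian KDE scores every candidate against every other candidate. A rough sketch of that pattern (illustrative only, not the exact code in kde.py; `naive_kde` and its defaults are made up here):

```python
import torch

def naive_kde(x: torch.Tensor, std: float = 0.1) -> torch.Tensor:
    """Gaussian KDE score per sample over the rows of x, shape (N, D).

    Materialises the full (N, N) pairwise-distance matrix, so memory
    grows quadratically with N: 80k candidates is on the order of
    6.4e9 entries before resampling down to 20k.
    """
    dists = torch.cdist(x, x)                        # (N, N) all-pairs distances
    scores = torch.exp(-dists ** 2 / (2 * std ** 2)) # Gaussian kernel weights
    return scores.sum(dim=-1)                        # unnormalised density per sample
```

Chunking the distance computation (or doing it in half precision) trades speed for a much smaller peak allocation.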
https://github.com/Parskatt/RoMa/pull/22
Which batch sizes? During training or testing?
Aha. There might be a bug in the match method when not using image paths. I think the best approach is to simply use the model forward for now. Make...
@paolovic Hiya! Did you forget to wrap the forward pass in inference_mode/no_grad? For reference, during training a batch of 8 at res 560 fills up about 40 GB, so it would make...
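For reference, a minimal sketch of running inference without keeping autograd state around, assuming a generic PyTorch model (the model and batch below are placeholders, not the RoMa API):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)        # placeholder; substitute your actual model
batch = torch.randn(8, 10)       # placeholder input

model.eval()
with torch.inference_mode():     # torch.no_grad() also works on older PyTorch
    out = model(batch)           # no autograd graph is kept, so activations are freed
```

Without this wrapper every intermediate activation is retained for backprop, which is where most of the extra memory goes.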
Yeah, GitHub bugged out for me and showed my comment as duplicated; I removed one and both disappeared...
In general, yes, a lower batch size reduces results. I would not go below 8. It's difficult to give advice, but you can decrease the resolution for lower memory use. You can also reduce mem...
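If memory is the only constraint, one generic option (standard PyTorch, not something from the RoMa training code; all names below are placeholders) is gradient accumulation, which keeps the effective batch size at 8 while each forward/backward only sees a smaller micro-batch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(64, 64)                                   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 2                                             # 2 micro-batches of 4 ~ effective batch 8

data = [(torch.randn(4, 64), torch.randn(4, 64)) for _ in range(4)]  # dummy micro-batches

optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = F.mse_loss(model(x), y) / accum_steps            # scale so accumulated grads average correctly
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()                                    # update once per effective batch
        optimizer.zero_grad()
```

Note this is only approximately equivalent: losses with cross-sample terms inside a batch (as matching losses often have) don't decompose exactly over micro-batches.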