LoFTR
Reproducing the result with 4 (or 8) GPUs
First of all, I sincerely thank the author/implementer for the excellent work. Unfortunately, using a machine with 32 (or 64) GPUs is not very realistic for many researchers, in terms of cost and accessibility.
I understand that the authors haven't run any experiments with fewer GPUs (https://github.com/zju3dv/LoFTR/issues/46). Has anyone attempted to reproduce the results of the paper with 4 or 8 GPUs?
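One thing worth checking when moving from 32/64 GPUs to 4: the effective batch size shrinks, and a common heuristic (the linear scaling rule) is to scale the learning rate proportionally. This is just a sketch of that heuristic; the numbers below (base LR, canonical batch size) are hypothetical placeholders, not values from the LoFTR config:

```python
def scaled_lr(base_lr, base_batch_size, num_gpus, batch_per_gpu):
    """Linear scaling rule: LR proportional to the total batch size."""
    total_batch_size = num_gpus * batch_per_gpu
    return base_lr * total_batch_size / base_batch_size

# Hypothetical example: a config tuned for 64 GPUs x 1 sample/GPU,
# re-run on 4 GPUs x 1 sample/GPU:
print(scaled_lr(8e-3, 64, 4, 1))  # 5e-4
```

Whether this heuristic holds for LoFTR's training schedule is exactly the kind of thing a 4-GPU reproduction would reveal.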
In my case (ScanNet, 4 GPUs), after 12 epochs of training over a week, the performance is as follows: AUC@5: 18.89, AUC@10: 35.86, AUC@20: 51.96.
And it seems there is still room for improvement: the loss is still decreasing, even after a week!
Any reproduction results or comments would be deeply appreciated.
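For anyone comparing numbers: AUC@t figures like the ones above are typically computed by integrating the recall curve of per-pair pose errors up to each threshold, as in SuperGlue/LoFTR-style evaluations. A minimal sketch (the `pose_auc` name and its inputs are illustrative, not a LoFTR API):

```python
import numpy as np

def pose_auc(errors, thresholds=(5, 10, 20)):
    """Area under the recall-vs-error curve, normalized per threshold.

    `errors`: pose errors in degrees, one per image pair.
    Returns one AUC value in [0, 1] per threshold.
    """
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    # Prepend the origin so the curve starts at (0, 0).
    errors = np.r_[0.0, errors]
    recall = np.r_[0.0, recall]
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        r = np.r_[recall[:last], recall[last - 1]]
        e = np.r_[errors[:last], t]
        # Trapezoidal integration, normalized by the threshold.
        aucs.append(np.sum((e[1:] - e[:-1]) * (r[1:] + r[:-1]) / 2) / t)
    return aucs
```

So two reproductions are only comparable if they use the same error definition (e.g. max of rotation and translation angular error) and the same test pairs.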
Hi, I'm really glad to see a researcher working on the same thing. I am training with 4 GPUs and a batch size of 4, but unfortunately on only 100 ScanNet scenes, using the indices given by the authors. I am still training.
I also wonder how I can get the test indices. Currently I am evaluating the model with the validation indices, since I couldn't find the test indices.
I may share my results after 12 epochs. One difference is that I am training with a pretrained backbone and freezing it, i.e. no gradient updates for the backbone.
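For reference, freezing a pretrained backbone in PyTorch usually means disabling gradients for its parameters and (if it contains BatchNorm) keeping it in eval mode. A minimal sketch with a toy stand-in model; the `Matcher`/`freeze_backbone` names are hypothetical, not from the LoFTR codebase:

```python
import torch
import torch.nn as nn

class Matcher(nn.Module):
    """Toy stand-in: a model with a `backbone` and a trainable head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(1, 8, 3)
        self.head = nn.Linear(8, 2)

def freeze_backbone(model):
    for p in model.backbone.parameters():
        p.requires_grad = False
    model.backbone.eval()  # also fixes BatchNorm running statistics

model = Matcher()
freeze_backbone(model)

# Only pass trainable parameters to the optimizer:
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

Note that with PyTorch Lightning (which LoFTR uses for training) calling `.train()` on the whole model re-enables BatchNorm updates each epoch, so the freezing logic may need to be re-applied there.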
Hi @sangrockEG Have you made any progress on this issue?