SuperGluePretrainedNetwork
About homography pretraining for outdoor scenes
Hi, thank you for your great work!
I'm trying to train SuperGlue+SuperPoint on the MegaDepth dataset. As described in the paper, the weights for outdoor scenes are initialized from the homography model due to the limited number of scenes. I'm wondering how much this homography pretraining affects the final results. Is it possible to obtain similar results if I train the model from scratch on MegaDepth or on a larger outdoor dataset with more scenes?
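For context, by homography pretraining I mean generating supervised pairs by warping a single image with a random homography, so ground-truth correspondences are known exactly. A minimal sketch of such pair generation (my own illustration; the function name and parameters are not from this repo):

```python
import numpy as np
import cv2

def make_homography_pair(image, num_pts=512, jitter=0.15, rng=np.random):
    """Warp `image` with a random homography H; any point x in the source
    maps to H(x) in the warp, giving ground-truth matches for free."""
    h, w = image.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Perturb each corner by up to `jitter` times the image size.
    offsets = rng.uniform(-jitter, jitter, (4, 2)) * [w, h]
    H = cv2.getPerspectiveTransform(corners, np.float32(corners + offsets))
    warped = cv2.warpPerspective(image, H, (w, h))
    # Ground-truth matches: sample points and push them through H.
    pts = rng.uniform(0, [w, h], (num_pts, 2)).astype(np.float32)
    pts_warped = cv2.perspectiveTransform(pts[None], H)[0]
    return warped, H, pts, pts_warped
```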
After further investigation, it turns out that the homography pretraining may not be necessary if more scenes are used (with a split similar to DISK), the model is trained for longer with a slower learning rate decay, and positives and negatives are more carefully balanced. As such, I have recently obtained good results training from scratch on MegaDepth. I will be able to release more details later on.
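To make "slower learning rate decay" concrete, one way to realize it is a per-iteration exponential schedule with a gamma very close to 1 (the values below are placeholders, not the released recipe):

```python
import torch

model = torch.nn.Linear(256, 256)  # stand-in for the SuperGlue network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# gamma close to 1 keeps the rate near 1e-4 for many iterations;
# a smaller gamma would decay much faster.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999996)

for step in range(10_000):  # placeholder loop; replace with real batches
    optimizer.zero_grad()
    loss = model(torch.randn(8, 256)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay applied once per iteration, not per epoch
```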
That's great! Looking forward to seeing more details.
@zenmedou Hello, I also recently reproduced the training of SuperPoint+SuperGlue on MegaDepth but ran into some problems. Would it be convenient for you to share an email address or QQ? I would like to consult with you. Thank you very much!
I am very interested in training from scratch on MegaDepth. While trying this, I ran into the following problems: 1) with 'batch_size = 8', training from scratch does not converge and ends with loss ≈ 0.5, whereas when I initialize from your released pretrained models ('superglue_indoor'/'superglue_outdoor') the training loss converges normally to ≈ 0.1 (higher batch sizes give similar results); 2) with 'batch_size = 1', training from scratch converges well and ends with loss ≈ 0.1. Because batch statistics are degenerate with such a small mini-batch, training mode ('superglue.train()') has to be used at test time as well. In this case, is batchnorm equivalent to instance norm? What is the reason for the above behavior? Is it related to the learning rate, or is special data balancing required?
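A quick self-contained check of the batchnorm-vs-instancenorm point: with a batch of one, BatchNorm1d in training mode normalizes over the keypoint dimension per channel, exactly like InstanceNorm1d (affine parameters are omitted here so the two are directly comparable):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 32, 500)  # (batch=1, channels, num_keypoints)

bn = nn.BatchNorm1d(32, affine=False, track_running_stats=False).train()
inorm = nn.InstanceNorm1d(32, affine=False)

# With a single sample, batch statistics == per-instance statistics.
print(torch.allclose(bn(x), inorm(x), atol=1e-6))  # True
```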
Could you tell me which MegaDepth scenes you use for the training/evaluation split? And what are the training/evaluation losses when the network converges? Thanks a lot!