LightGlue
Performance advice
Although the adaptive stuff is very cool for per-image-pair evaluation, I have found that batching pairs together in groups of 32 gives an order-of-magnitude speed-up. So if you do 3D reconstruction, just write a small batching script and enjoy the speed-up.
The adaptive mechanisms indeed don't yet support batching well. We could, however, exit conservatively once all pairs in the batch are ready, and prune to the largest number of keypoints retained across the batch.
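For reference, a minimal sketch of what a batched LightGlue call could look like once every image has been padded to the same number of keypoints. It assumes the `depth_confidence` / `width_confidence` constructor flags (setting them to -1 disables the adaptive exit and pruning) and uses random features only to illustrate the expected shapes:

```python
import torch
from lightglue import LightGlue

device = "cuda" if torch.cuda.is_available() else "cpu"

# Disable adaptive depth (early exit) and width (point pruning): they assume
# batch size 1, so every pair in the batch should run the full network.
matcher = LightGlue(features="superpoint",
                    depth_confidence=-1.0,
                    width_confidence=-1.0).eval().to(device)

B, K, D = 32, 2048, 256  # pairs per batch, keypoints per image, descriptor dim

def dummy_feats():
    # Stand-in for real SuperPoint output, already padded to K keypoints per image.
    return {
        "keypoints": torch.rand(B, K, 2, device=device)
                     * torch.tensor([640.0, 480.0], device=device),
        "descriptors": torch.nn.functional.normalize(
            torch.randn(B, K, D, device=device), dim=-1),
        "image_size": torch.tensor([[640.0, 480.0]], device=device).expand(B, 2),
    }

with torch.inference_mode():
    matches = matcher({"image0": dummy_feats(), "image1": dummy_feats()})
# The outputs (e.g. matches0) then carry a leading batch dimension of B.
```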
@ducha-aiki Sorry to ask this, but would it be possible to share a snippet of the batching code? If not, could you give some intuition on how one would go about it? I am using SuperPoint + LightGlue, and these are the things I have tried so far:
- SuperPoint batching (failed): tried updating `ImagePreprocessor` and `Extractor` in `lightglue/utils.py`, but towards the end, `torch.stack` inside SuperPoint fails because the keypoints are of different sizes.
- Ran SuperPoint individually in a for loop over the batch and tried creating a dictionary `{image0: dict, image1: dict}`, but failed again because the keypoints are of different sizes.
@udit7395 your version 2 is correct:
> Ran SuperPoint individually in a for loop over the batch and tried creating a dictionary
What you should also do is reduce the detection threshold to zero and reduce the NMS radius. Finally, consider padding with random keypoints and descriptors for those images that still have fewer keypoints than needed.
@ducha-aiki any chance you could share an example of how to run SuperPoint in batches to extract keypoints?
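Not an official example, but here is a rough sketch of how the recipe above could look. It assumes the public `SuperPoint.extract()` wrapper with the `max_num_keypoints`, `detection_threshold`, and `nms_radius` config names and the `keypoints` / `descriptors` / `image_size` keys in the returned dict; `pad_features` and `extract_batch` are just illustrative helpers:

```python
import torch
from lightglue import SuperPoint

device = "cuda" if torch.cuda.is_available() else "cpu"
NUM_KPTS = 2048

# Threshold at zero and a tight NMS so (almost) every image yields NUM_KPTS points;
# config names follow the SuperPoint wrapper shipped with LightGlue.
extractor = SuperPoint(max_num_keypoints=NUM_KPTS,
                       detection_threshold=0.0,
                       nms_radius=3).eval().to(device)

def pad_features(feats: dict, num_kpts: int) -> dict:
    """Pad keypoints/descriptors with random entries so every image has num_kpts."""
    kpts, desc = feats["keypoints"], feats["descriptors"]  # (1, K, 2), (1, K, D)
    missing = num_kpts - kpts.shape[1]
    if missing > 0:
        w, h = feats["image_size"][0]  # assumed (width, height) of the processed image
        rand_kpts = torch.rand(1, missing, 2, device=kpts.device) * torch.stack([w, h])
        rand_desc = torch.nn.functional.normalize(
            torch.randn(1, missing, desc.shape[-1], device=desc.device), dim=-1)
        feats = {**feats,
                 "keypoints": torch.cat([kpts, rand_kpts], 1),
                 "descriptors": torch.cat([desc, rand_desc], 1)}
    return feats

@torch.inference_mode()
def extract_batch(images: list) -> dict:
    # Per-image extraction (keypoint counts differ), then pad and stack into a batch.
    feats = [pad_features(extractor.extract(im.to(device)), NUM_KPTS) for im in images]
    return {k: torch.cat([f[k] for f in feats], 0)
            for k in ("keypoints", "descriptors", "image_size")}

# Dummy example: 8 image pairs; the resulting feats0/feats1 can then be fed to a
# batched LightGlue call with the adaptive mechanisms disabled, as in the snippet above.
images0 = [torch.rand(3, 480, 640) for _ in range(8)]
images1 = [torch.rand(3, 480, 640) for _ in range(8)]
feats0, feats1 = extract_batch(images0), extract_batch(images1)
```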