How do you manage models in distributed Dask?
Hi guys, I really love using Dask as the backbone for this problem, but I have a question:
If you use a GPU-enabled model for both feature extraction and feature matching, how will the Dask workers manage the GPU memory required for these tasks?
For example, with SuperPoint (https://github.com/borglab/gtsfm/blob/master/gtsfm/frontend/detector_descriptor/superpoint.py), it looks like this class will be initialized on every worker. So do you run the risk of running out of GPU memory if your extractor and matcher GPU models are quite large?
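To make the question concrete, here is a minimal, self-contained sketch of the pattern I'm asking about (the `DummyGpuDetector` class and the image names are made up, not GTSFM's actual API): the detector object is built once on the client, but each delayed task carries it in the task graph, so it gets deserialized and its model loaded on whichever worker runs the task, potentially on every worker's GPU at once.

```python
import dask
from dask.distributed import Client


class DummyGpuDetector:
    """Stand-in for a GPU detector/descriptor like SuperPoint (hypothetical)."""

    def __init__(self):
        # In a real class this would load network weights onto the GPU,
        # e.g. torch.load(...) followed by .cuda().
        self.model = "weights that would live in GPU memory"

    def detect_and_describe(self, image_path):
        # Placeholder for running inference on one image.
        return f"features for {image_path}"


if __name__ == "__main__":
    client = Client(n_workers=2, threads_per_worker=1)

    detector = DummyGpuDetector()  # constructed once on the client...

    # ...but each delayed task ships `detector` to whichever worker runs it,
    # so the model ends up resident on each worker that executes a task.
    tasks = [
        dask.delayed(detector.detect_and_describe)(img)
        for img in ["img0.jpg", "img1.jpg", "img2.jpg"]
    ]
    print(dask.compute(*tasks))

    client.close()
```

Is the intended answer something like Dask worker resources (e.g. starting workers with `--resources "GPU=1"` and annotating the GPU-bound tasks so only a limited number run concurrently), or does GTSFM handle model placement differently?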
Thanks again for the exciting project!