Question about parallelizing over multiple GPUs

Open · brianysoong opened this issue 2 years ago · 3 comments

Hello!

I don't have much experience using PyTorch, and I was wondering whether Tangram could easily be modified to parallelize over multiple GPUs. I am trying to map onto a spatial dataset that is quite large (~500k cells) and am running into this error:

RuntimeError: CUDA out of memory. 
Tried to allocate 52.38 GiB (GPU 0; 39.59 GiB total capacity; 
860.74 MiB already allocated; 
37.90 GiB free; 
882.00 MiB reserved in total by PyTorch) 
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
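From the traceback, it looks like the allocator hint it mentions can be set as an environment variable before the first CUDA allocation; the value below is just an illustrative guess, not a recommendation:

import os
# Allocator hint suggested in the traceback; must be set before the first
# CUDA allocation happens. 128 (MiB) is only an illustrative value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
import torch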

The GPUs I am using have 40 GB of memory each, so this error makes sense to me. Is there a way to split the computation across two GPUs in PyTorch? I also understand that using mode = "cluster" can reduce the resources required, but I was curious about this issue nonetheless.
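To be concrete, here is a toy PyTorch sketch of the kind of splitting I mean (this is not Tangram code; the device names and matrix sizes are placeholders), where row-chunks of a large matrix are computed on separate GPUs and reassembled on the CPU:

import torch

def chunked_matmul(a, b, devices=("cuda:0", "cuda:1")):
    """Compute a @ b with row-chunks of `a` sent to different GPUs."""
    outs = []
    for chunk, dev in zip(torch.chunk(a, len(devices), dim=0), devices):
        # Move one chunk of a and a copy of b to this GPU, multiply,
        # then bring the partial result back to host memory.
        outs.append((chunk.to(dev) @ b.to(dev)).cpu())
    return torch.cat(outs, dim=0)  # reassemble the full result on the CPU

a = torch.randn(500_000, 2_000)   # e.g. ~500k cells x genes
b = torch.randn(2_000, 1_000)
result = chunked_matmul(a, b)     # requires two visible CUDA devices

As written this processes the chunks one after another, which only addresses memory, not speed; doing it inside a training loop would also need the gradients aggregated across devices, which I assume is the harder part for Tangram's optimizer.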

Thank you!

brianysoong · Jul 28 '22 16:07

I have the same question: how can this be deployed on multiple GPUs?

caiquanyou · Jul 25 '23 03:07

Same! It would be awesome to have an option to parallelize the calculation over multiple GPUs.

HeesooSong · Aug 03 '23 08:08