multi-temporal-crop-classification-baseline

Error using 4x GPU

Open · robmarkcole opened this issue 1 year ago · 2 comments

Setting gpu_devices=[0, 1, 2, 3] and calling compiled_model.fit, I receive:

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

Debugging
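For reference, this RuntimeError is what torch.nn.DataParallel raises when the wrapped module is still on the CPU at forward time: replicas are built from device_ids[0], so the parameters and buffers must already live on that GPU before the first forward pass. A minimal plain-PyTorch sketch of the fix (the model and shapes below are placeholders, not the repo's compiled_model):

```python
import torch
import torch.nn as nn

# Placeholder model; stands in for the repo's network.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())

# nn.DataParallel replicates the module from device_ids[0], so its parameters
# and buffers must already be on that device before calling forward/fit.
model = model.to("cuda:0")
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])

x = torch.randn(8, 3, 64, 64, device="cuda:0")  # inputs are scattered across the GPUs
out = model(x)
```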

robmarkcole · Jan 08 '24 16:01

Hi Robin,

It seems that you have fixed the issue regarding training with multiple GPUs. However, please note that the current implementation wraps the model in DataParallel. The problem is that when you use multiple GPUs, the running averages of the BatchNorm layers need to be synchronised across those GPUs, and the current implementation has no mechanism to handle that. For the moment it would be safer to use only a single GPU.

One solution is to implement synchronized Batch Normalization that works with DataParallel. A better-optimized choice is to wrap the model in DistributedDataParallel, which is more advanced but also more complex to set up: it requires a DistributedSampler so that each GPU sees a unique subset of the dataset, and careful configuration of the rank and world_size. We plan to include this capability in the next version of the repo, but for now please be cautious about possible performance degradation when using multiple GPUs.

samKhallaghi · Jan 22 '24 19:01
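For reference, a rough sketch of the DistributedDataParallel route described in the comment above, assuming one process per GPU launched with torch.multiprocessing.spawn; the model, dataset, and hyperparameters are placeholders rather than this repo's code:

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def train(rank: int, world_size: int) -> None:
    # One process per GPU; rank and world_size are supplied by the launcher.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Placeholder model; convert BatchNorm layers so running statistics are
    # synchronised across processes instead of drifting per GPU.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU()).cuda(rank)
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = DDP(model, device_ids=[rank])

    # DistributedSampler gives each process a disjoint shard of the dataset.
    dataset = TensorDataset(torch.randn(256, 3, 64, 64), torch.randint(0, 16, (256,)))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(rank), y.cuda(rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x).mean(dim=(2, 3)), y)  # toy head for the toy model
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(train, args=(world_size,), nprocs=world_size)
```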

For context, I typically use pytorch-lightning to handle the training loop, which takes care of the low-level details of synchronisation. For the time being, I am making do with a single GPU with this implementation.

robmarkcole · Jan 24 '24 15:01
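For comparison, a minimal sketch of the pytorch-lightning setup mentioned above; the LightningModule below is a placeholder, not this repo's model, while the Trainer flags shown are the standard Lightning way to get DDP, per-process data sharding, and synchronized BatchNorm without writing the loop by hand:

```python
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F


class LitSegmenter(pl.LightningModule):
    """Placeholder LightningModule; the real model and loss live in the repo."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 2, 1),
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


# The Trainer launches one process per device, injects a DistributedSampler,
# and syncs gradients; sync_batchnorm=True converts BN layers to SyncBatchNorm.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", sync_batchnorm=True)
# trainer.fit(LitSegmenter(), train_dataloaders=...)  # dataloader omitted here
```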