Error using 4x GPU
When setting `gpu_devices=[0, 1, 2, 3]` and calling `compiled_model.fit`, I receive:

```
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
```
Debugging
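For anyone hitting the same error: the usual cause is wrapping a model in `DataParallel` while its parameters are still on the CPU. A minimal sketch of the fix, using a hypothetical `nn.Linear` stand-in for the real model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repo's compiled model.
model = nn.Linear(8, 2)

# DataParallel expects the module's parameters and buffers to already be
# on device_ids[0]; wrapping a CPU-resident model is what triggers
# "module must have its parameters and buffers on device cuda:0 ...".
if torch.cuda.is_available():
    model = model.to("cuda:0")  # move BEFORE wrapping
    # in the issue's setting device_ids would be [0, 1, 2, 3]
    model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
```

The key point is the order of operations: move the module to `cuda:0` first, then wrap it.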
Hi Robin, it seems you have fixed the issue with training on multiple GPUs. However, please note that the current implementation uses `DataParallel` to wrap the model. The problem is that with multiple GPUs you need to synchronise the running statistics of the BatchNorm layers across those GPUs, and the current implementation has no mechanism to handle that. For the moment it would be safer to use only a single GPU.

One solution is to implement synchronized Batch Normalization that works with `DataParallel`. A better-optimised choice is to wrap your model in `DistributedDataParallel`, which is more advanced but also more complex to set up: it requires a `DistributedSampler` to make sure each GPU sees a unique subset of the dataset, and careful configuration of the rank and world size.

We plan to include this capability in the next version of the repo, but for now please be cautious of possible performance degradation while using multiple GPUs.
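To make the suggestion concrete, here is a rough sketch of the `DistributedDataParallel` setup described above. The model and dataset are placeholders, and the hard-coded `rank`/`world_size` would normally come from the launcher (e.g. `torchrun`); this is an illustration of the pieces, not the repo's actual implementation:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Hypothetical model with a BatchNorm layer, standing in for the real network.
model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Linear(8, 2))

# Replace every BatchNorm with SyncBatchNorm so running statistics are
# synchronised across processes (this only takes effect once the model
# runs under DistributedDataParallel).
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

dataset = TensorDataset(torch.randn(32, 8))

# Each rank gets a unique shard of the dataset; rank and world_size are
# hard-coded here for illustration only.
world_size, rank = 4, 0
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

# Under a real multi-process launch one would additionally do:
#   torch.distributed.init_process_group("nccl", rank=rank, world_size=world_size)
#   model = nn.parallel.DistributedDataParallel(model.to(rank), device_ids=[rank])
# and call sampler.set_epoch(epoch) at the start of each epoch.
```

The sampler is what guarantees the four processes together cover the dataset exactly once per epoch, with no overlap between ranks.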
For context, I typically use pytorch-lightning to handle the training loop, which takes care of the low-level synchronisation details. For the time being, I am making do with a single GPU with this implementation.