CompactBilinearPooling-Pytorch
Multi-GPU error: RuntimeError: arguments are located on different GPUs
When training on multiple GPUs, it outputs:
RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:236
How can I fix it?
Currently I don't have a multi-GPU environment, sorry about that. You are welcome to open a PR if you have a solution.
Just in case it helps, try reducing the batch/tensor size.
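For what it's worth, this class of error usually appears under `nn.DataParallel` when a tensor created in `__init__` (e.g. with an explicit `.cuda()` call) stays on GPU 0 while the input batch is scattered to other GPUs, so a matmul inside `forward()` mixes devices. Below is a minimal sketch of the usual workaround: register fixed tensors as buffers so `DataParallel` replicates them to each device, and/or move them to the input's device in `forward()`. The `DeviceSafeSketch` module and its projection tensor are hypothetical stand-ins for illustration, not the repo's actual `CompactBilinearPooling` internals.

```python
import torch
import torch.nn as nn


class DeviceSafeSketch(nn.Module):
    """Illustrative module holding a fixed random projection matrix."""

    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        # register_buffer (instead of a plain attribute created with .cuda())
        # lets nn.DataParallel copy the tensor onto every replica's device.
        self.register_buffer("projection", torch.randn(input_dim, output_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Defensive fallback: make sure the buffer is on the input's device.
        proj = self.projection.to(x.device)
        return x @ proj


if __name__ == "__main__":
    model = DeviceSafeSketch(128, 64)
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model.cuda())
        x = torch.randn(32, 128).cuda()
    else:
        x = torch.randn(32, 128)
    print(model(x).shape)  # torch.Size([32, 64])
```

If the library creates its count-sketch tensors on a fixed GPU, applying the same pattern there might resolve the error, but I haven't been able to verify this without a multi-GPU setup.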
Hello, have you solved this problem? Are you able to train on multiple GPUs? Thanks.
I encountered the same problem during multi-GPU training.