RoIAlign.pytorch
Hi, does your RoI Align layer support multi-GPU?
Hi, does your RoI Align layer support multi-GPU? I get strange results when using your RoI Align layer. For example, if I use 2 GPUs, the batch size of the output tensor is half what it should be.
Can you check?
This may be caused by a wrong box_ind. When you use 2 GPUs, your input boxes are split into two tensors, but the box_ind values in the second tensor are not remapped automatically, so they still refer to positions in the full batch.
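To illustrate what goes wrong (a hypothetical batch, not code from this repo): box_ind names which image in the batch each box belongs to, and nn.DataParallel scatters tensors along dim 0 without rewriting those indices.

```python
import torch

# Hypothetical batch: 4 feature maps, one box per image, box_ind indexing
# into the full batch of 4.
features = torch.randn(4, 256, 32, 32)
box_ind = torch.tensor([0, 1, 2, 3], dtype=torch.int32)

# nn.DataParallel scatters dim 0 across GPUs; with 2 GPUs each replica sees:
feat_gpu0, feat_gpu1 = features.chunk(2, dim=0)  # (2, 256, 32, 32) each
ind_gpu0, ind_gpu1 = box_ind.chunk(2, dim=0)     # [0, 1] and [2, 3]

# The second replica's local batch only has indices 0 and 1, but its box_ind
# still says [2, 3], so the layer reads from the wrong (or missing) images.
print(ind_gpu1)  # tensor([2, 3], dtype=torch.int32)
```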
Then how can I make box_ind correct?
You can make the values in box_ind correspond to each GPU's local batch when you generate the RoIs.
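A minimal sketch of one way to do that, assuming nn.DataParallel with an even split across GPUs and the same number of boxes per image so the scattered boxes stay aligned with their images; the RoIWrapper name and the modulo remapping are illustrative assumptions, not this repo's API:

```python
import torch.nn as nn

# Sketch only: wrap the repo's RoIAlign layer so each DataParallel replica
# remaps box_ind to its own local batch.
class RoIWrapper(nn.Module):
    def __init__(self, roi_align):
        super().__init__()
        self.roi_align = roi_align  # RoIAlign layer from this repo

    def forward(self, features, boxes, box_ind):
        # Inside a replica, box_ind still holds full-batch indices; fold
        # them back into this replica's batch of size n. Assumes the
        # scatter split the batch evenly.
        n = features.size(0)
        box_ind = box_ind % n
        return self.roi_align(features, boxes, box_ind)
```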
@longcw it didn't work for me:
boxes_index = boxes_index.cuda(input.data.get_device())
It still fails, saying the tensors are on different GPUs. It only works when using the GPU with index 0, so it seems that somewhere the GPU is hardcoded, for some reason I don't understand. I've been digging through the code but unfortunately I couldn't find it...
@paucarre Hi, did you make it work with multiple GPUs?
@stomachacheGE @paucarre I found this line works for me:
roi = roi.to(input.device)
The idea is that both the feature map and the RoIs should be on the same device.
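For reference, a minimal sketch of that fix, assuming the layer is called as roi_align(features, boxes, box_ind); the helper name is hypothetical:

```python
# Hypothetical helper (not this repo's API): keep the RoIs and their batch
# indices on the same device as the scattered feature tensor, so nothing
# stays pinned to GPU 0 under DataParallel.
def roi_align_on_feature_device(roi_align, features, boxes, box_ind):
    boxes = boxes.to(features.device)
    box_ind = box_ind.to(features.device)
    return roi_align(features, boxes, box_ind)
```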