cascade-rcnn
Questions about multi-GPU training and batch size > 1
I haven't run Caffe with multiple GPUs before, so I'm a bit confused: why can the batch size be set larger than 1 in Cascade R-CNN when the input images are not resized to exactly the same size? As far as I know, all images within a single batch should have the same size.
Each GPU runs independently during the forward and backward passes, but the weight gradients are aggregated before the update. So the image sizes can differ across GPUs.
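In other words, what gets aggregated are the gradient tensors, whose shapes depend only on the model weights, not on the input image size. Here is a minimal NumPy sketch of that data-parallel scheme, not the repository's actual solver code; the toy model, loss, and variable names are made up purely for illustration:

```python
import numpy as np

def forward_backward(weights, image):
    """Toy per-"GPU" step: a scalar weight applied to the image with an
    L2 loss against a zero target. Returns (loss, dL/dw); the gradient
    has the same shape as `weights` regardless of the image size."""
    pred = weights * image                  # works for any H x W
    loss = 0.5 * np.sum(pred ** 2)
    grad = np.sum(image * pred)             # scalar gradient, independent of image size
    return loss, grad

weights = np.array(0.1)
lr = 1e-4

# Two "GPUs", each with batch size 1 and a differently sized input image.
images = [np.random.rand(600, 800), np.random.rand(480, 640)]

# Forward/backward runs independently on each "GPU" ...
grads = [forward_backward(weights, im)[1] for im in images]

# ... and only the gradients are averaged before the weight update,
# so the per-GPU image sizes never have to match.
weights = weights - lr * np.mean(grads)
print(weights)
```

So the effective batch size equals the number of GPUs, with each GPU holding one image of arbitrary size.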
Does this setup support batch training (batch size > 1) on a single GPU?