DDP
About GPU memory and batch size
I encountered a confusing problem when I tried to train the depth model.
When I ran bash tools/dist_train.sh configs/ddp_kitti/ddp_swinb_22k_w7_kitti_bs2x8_scale01.py 1
with samples_per_gpu=2, training used the amount of GPU memory shown below.
However, after I increased samples_per_gpu to 4, the occupied memory actually decreased.
Why would that happen?
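For reference, one thing worth checking is whether the numbers come from nvidia-smi, which reports memory reserved by PyTorch's caching allocator rather than memory tensors actually occupy; the two can move independently. Below is a minimal sketch (assuming PyTorch is installed; report_gpu_memory is a hypothetical helper, not part of this repo) of how to print both figures:

```python
# Hedged sketch: PyTorch's caching allocator can hold more memory than
# tensors currently use, so nvidia-smi's number may not track batch size.
# report_gpu_memory() is a made-up helper name for illustration only.

def report_gpu_memory() -> str:
    """Return peak allocated vs. reserved CUDA memory, if PyTorch and a GPU exist."""
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    if not torch.cuda.is_available():
        return "no CUDA device available"
    # max_memory_allocated: peak bytes actually occupied by tensors
    # max_memory_reserved: peak bytes held by the caching allocator (what nvidia-smi sees)
    allocated = torch.cuda.max_memory_allocated() / 2**20
    reserved = torch.cuda.max_memory_reserved() / 2**20
    return f"allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB"

print(report_gpu_memory())
```

Calling this after a few training iterations with each samples_per_gpu setting would show whether the real allocation changes as expected or only the reserved figure differs.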