
Abnormal loss trend when training with a smaller batch size

mxcheeto opened this issue · 1 comment

When reproducing the experiment with maptr_tiny_r50_24e.py using 2 samples per GPU × 4 GPUs (2080 Ti), the resulting loss curves look abnormal, especially the loss_dir group: [screenshots of the loss curves]

However, the run with 2 samples per GPU × 8 GPUs (2080 Ti) went through fine, and the losses followed the expected trend. Has anyone met the same problem? What might be the cause?
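One thing worth checking: halving the number of GPUs halves the effective batch size (2 samples/GPU × 4 GPUs = 8 vs. 2 × 8 = 16), while the learning rate in the config is typically tuned for the batch size used in the paper's setup. Below is a minimal sketch of the linear scaling rule (Goyal et al., 2017), assuming the base LR was tuned for the 8-GPU configuration; the `base_lr` value here is a placeholder, take the real one from the optimizer dict in maptr_tiny_r50_24e.py:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale the learning rate linearly with the total batch size."""
    return base_lr * new_batch / base_batch

# Placeholder value; use the lr from the optimizer dict in maptr_tiny_r50_24e.py.
base_lr = 6e-4
# Config assumed tuned for 8 GPUs x 2 samples = 16; this run uses 4 GPUs x 2 = 8.
print(scale_lr(base_lr, base_batch=16, new_batch=8))  # -> 3e-4
```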

mxcheeto, Apr 01 '23

Where can I set the batch size?

123dbl, Aug 21 '23
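For reference, in MMDetection3D-style configs such as maptr_tiny_r50_24e.py, the per-GPU batch size is normally set via `samples_per_gpu` in the `data` dict. A sketch of the relevant fragment, with field names assumed from the standard mmdet3d config layout (verify against the actual file):

```python
# Sketch of an mmdet3d-style config fragment (e.g. maptr_tiny_r50_24e.py);
# dataset entries omitted for brevity.
data = dict(
    samples_per_gpu=2,   # per-GPU batch size
    workers_per_gpu=4,   # dataloader worker processes per GPU
    # train=dict(...), val=dict(...), test=dict(...),
)
```

The effective (total) batch size is then `samples_per_gpu` multiplied by the number of GPUs passed to the distributed launch script.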