MapTR
Abnormal loss tendency when training with a smaller batch size
When reproducing the experiment with maptr_tiny_r50_24e.py using 2 samples per GPU x 4 GPUs (2080 Ti), the resulting losses look abnormal, especially the loss_dir group:
However, the same reproduction with 2 samples per GPU x 8 GPUs (2080 Ti) went smoothly overall, and the losses follow the expected trend. Has anyone run into the same problem? What might be the cause?
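For clarity, the two runs differ in their effective (global) batch size, since that is the product of the per-GPU batch size and the number of GPUs. A minimal sketch of the arithmetic, assuming 2 samples per GPU in both runs as stated above:

```python
# Sketch of the effective batch sizes for the two runs (assumed values from the post).
samples_per_gpu = 2

effective_bs_4gpu = samples_per_gpu * 4   # run with the abnormal loss_dir curves -> 8
effective_bs_8gpu = samples_per_gpu * 8   # run with the expected loss tendency  -> 16

print(effective_bs_4gpu, effective_bs_8gpu)  # 8 16
```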
Where can I set the batch size?
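In MMDetection3D-style configs such as maptr_tiny_r50_24e.py, the per-GPU batch size is normally controlled by `samples_per_gpu` inside the `data` dict. A minimal sketch of the relevant fragment (the exact values and surrounding keys in the repo config may differ):

```python
# Fragment of an mmdet3d-style config (sketch; the real config also defines
# train/val/test dataset dicts and pipelines).
data = dict(
    samples_per_gpu=2,   # batch size per GPU
    workers_per_gpu=4,   # dataloader worker processes per GPU
)
```

The global batch size then follows from `samples_per_gpu` times the number of GPUs passed to the distributed launch script.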