YOLOP
change batch size
After reviewing the source code and modifying the segmentation parts, training raised the following error:
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 10.89 GiB total capacity; 9.69 GiB already allocated; 25.56 MiB free; 9.73 GiB reserved in total by PyTorch)
I changed the batch size from 24 to 8, which solved the issue.
But what will be the effect of doing this?
Modify _C.TEST.BATCH_SIZE_PER_GPU in ./lib/config/default.py.
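For reference, here is a minimal sketch of what the relevant entries in ./lib/config/default.py might look like, assuming the yacs-style CfgNode config that file uses; the surrounding keys and the exact default values are assumptions based on this thread, not a copy of the repo's file.

```python
# Minimal sketch of the batch-size settings in ./lib/config/default.py.
# Assumes YOLOP's yacs CfgNode layout; only the keys discussed in this
# thread are shown, and the values are illustrative.
from yacs.config import CfgNode as CN

_C = CN()

_C.TRAIN = CN()
# Per-GPU training batch size; lowering it (e.g. 24 -> 8) reduces GPU
# memory use and avoids the CUDA out-of-memory error quoted above.
_C.TRAIN.BATCH_SIZE_PER_GPU = 8

_C.TEST = CN()
# Per-GPU batch size used during evaluation/testing.
_C.TEST.BATCH_SIZE_PER_GPU = 8
```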