yolov9
GPU memory usage is unstable: with a batch size of 16 on the smallest model, memory usage exceeds 24GB.
Could you please explain why setting the batch size to 16 for the smallest models (s and t) causes a memory overflow? At the start of training, usage shows as below 10GB, but within the same epoch it can climb to 20GB or even more. Because the usage is so unstable, I am forced to keep the memory budget to 8GB or even less.