ScaledYOLOv4
OMP_NUM_THREADS error.
I get an error while trying to use distributed training. I have 4 GPUs (Tesla T4), and the error shows up when using the p7 model. I tried switching to a single GPU and the same error occurs, but training works with the csp model on one GPU.
Error log:
**Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.**
This is not an error message; every DDP training run prints this default message.
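If you want to tune it rather than keep the default of 1, you can pin the variable before torch is imported. A minimal sketch (the value 4 is only an illustration; a common rule of thumb is physical cores per node divided by processes per node):

```python
import os

# Pin OpenMP threads per DDP process before torch is imported.
# "4" is a hypothetical value, not a recommendation for every machine.
os.environ.setdefault("OMP_NUM_THREADS", "4")

import torch

# Mirror the setting for torch's intra-op thread pool.
torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))
print(f"intra-op threads: {torch.get_num_threads()}")
```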
But my training process exits after showing this.
Do you have a solution for this? It usually happens when I use more than one GPU with the p7 model. @WongKinYiu
I also ran into this problem. @saikrishnadas did you find a solution?
My training isn't even killed. It just freezes before it begins.
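For anyone else hitting this freeze, one way to check whether the hang is in DDP/NCCL initialization itself, rather than in the p7 model, is a minimal smoke test. This is only a sketch; the script name `ddp_check.py` and the 4-process launch via `torchrun --nproc_per_node 4 ddp_check.py` are assumptions (on older PyTorch, `python -m torch.distributed.launch --use_env` behaves similarly):

```python
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets LOCAL_RANK for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # If the freeze is in NCCL communication, this all_reduce hangs too.
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)
    print(f"rank {dist.get_rank()}: all_reduce ok, got {t.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If this test passes but train.py still freezes, the problem is more likely in the model or dataloader setup than in the communication backend.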