mmagic
UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default
When I use multiple GPUs to train BasicVSR, this warning is reported and training gets stuck. How can I solve it?
Command: ./tools/dist_train.sh ./configs/restorers/basicvsr/basicvsr_reds4.py 2
UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
This is not actually an error, just a notification. If your CPU resources are abundant, you can modify the corresponding options in the config file, such as:
opencv_num_threads = 4
omp_num_threads = 4
mkl_num_threads = 4
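For context, these options usually sit at the top level of the training config; below is a minimal sketch, assuming an mmediting/mmagic-style Python config file, where the exact key names and whether they are honoured depend on your version, so treat the values as illustrative:

# Excerpt from a training config such as basicvsr_reds4.py (illustrative values).
# Raising these above the default of 1 can speed up data loading when the CPU
# has spare cores; tune them to your machine.
opencv_num_threads = 4  # threads OpenCV may use inside the data pipeline
omp_num_threads = 4     # intended to set OMP_NUM_THREADS for each worker process
mkl_num_threads = 4     # intended to set MKL_NUM_THREADS for each worker process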
Closing due to inactivity, please reopen if there are any further problems.
I had the same problem. Have you fixed it? How did you solve it? @wangruohui @Ashore-lz
I waited a long time at this warning. Then training ran, but another error was reported as below:
How can I solve it? I really need your help and would appreciate any pointers.
Hi @arkerman, I think it is not the same problem. As your error message says, 'need at least one array to concatenate', which usually happens when the data path used for loading is wrong. Please check the path to your datasets.
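For reference, that numpy error typically means the loader found zero frames because the dataset path resolved to nothing; here is a minimal sketch (hypothetical path, not the actual mmediting loading code) of how it arises:

import glob
import numpy as np

# If the dataset root in the config is wrong, glob matches no files at all.
frame_paths = sorted(glob.glob('./data/REDS4/train_sharp/*/*.png'))  # hypothetical path
print(f'found {len(frame_paths)} frames')  # prints 0 when the path is wrong

frames = [np.zeros((180, 320, 3)) for _ in frame_paths]  # stays an empty list
try:
    np.concatenate(frames)
except ValueError as err:
    print(err)  # "need at least one array to concatenate"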
Hi @zengyh1900! Thanks a lot for your suggestion, I really appreciate it! I'm checking the data path now. Thanks again!
Have you solved this problem? I encounter the same issue: multi-GPU distributed training gets stuck at:
UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
warnings.warn(