vocal-remover
RuntimeError: CUDA error: invalid device ordinal
I just installed everything and tried the script. I have 2 GPUs: the internal one has 1 GB and the NVIDIA one 2 GB. When I use --gpu 0 it tells me I ran out of memory, but when I use --gpu 1 it says the following:
c:\vocal-remover>inference.py --input 1.wav --gpu 1
loading model... Traceback (most recent call last):
File "C:\vocal-remover\inference.py", line 104, in
*PS: I am a noob with 0 knowledge and would love a simple rundown on how to fix this. Thank you.
In any case, probably 2GB will not be enough. In my case VRAM consumption is 2560MB.
In my case, by using the option --window_size 384 I managed to make it work with 2 GB of VRAM (see #24), so the VRAM size might not be the problem.
I just installed everything and tried the script. I have 2 GPUs, the internal one is 1GB and NVIDIA 2GB.
What do you mean by "internal one"? An integrated graphics GPU? If so, I'm pretty sure CUDA isn't available for integrated graphics GPUs, so CUDA detects only 1 GPU (indexed with '0'). Try running these commands in Python:
import torch
torch.cuda.device_count()
and it should print "1" (i.e. 1 NVIDIA GPU found).
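If you want a slightly fuller check, the sketch below (the printed names and sizes will of course depend on your system) also lists each CUDA device's name and total VRAM, which makes it easy to confirm which index belongs to the NVIDIA card:

```python
import torch

# CUDA only enumerates NVIDIA GPUs; an integrated graphics chip is not listed.
if not torch.cuda.is_available():
    print("CUDA is not available on this machine")
else:
    n = torch.cuda.device_count()
    print(f"{n} CUDA device(s) found")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes
        print(f"  gpu {i}: {props.name}, {props.total_memory / 1024**2:.0f} MB VRAM")
```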
In that case, the out-of-memory error is coming from the NVIDIA GPU with 2 GB of VRAM. Try running the command
inference.py --input 1.wav --gpu 0 --window_size 384
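The "invalid device ordinal" error simply means the --gpu index is higher than the number of CUDA devices torch can see. A defensive sketch of how a script could validate the index before using it (pick_device is a hypothetical helper, not part of inference.py):

```python
import torch

def pick_device(gpu: int) -> torch.device:
    """Return cuda:<gpu> only if that index actually exists; otherwise fall back to CPU.

    This avoids the 'CUDA error: invalid device ordinal' crash you get when
    passing an index that is out of range (e.g. --gpu 1 with a single CUDA GPU).
    """
    if gpu >= 0 and torch.cuda.is_available() and gpu < torch.cuda.device_count():
        return torch.device(f"cuda:{gpu}")
    return torch.device("cpu")
```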
I noticed that changing the window size greatly reduces conversion quality for me.