
RuntimeError: CUDA error: invalid device ordinal

Open favodm opened this issue 4 years ago • 3 comments

I just installed everything and tried the script. I have 2 GPUs: the integrated one has 1 GB and the NVIDIA one has 2 GB. When I use `--gpu 0` it tells me I ran out of memory, but when I use `--gpu 1` it says the following:

```
c:\vocal-remover>inference.py --input 1.wav --gpu 1
loading model...
Traceback (most recent call last):
  File "C:\vocal-remover\inference.py", line 104, in <module>
    main()
  File "C:\vocal-remover\inference.py", line 34, in main
    model.to(device)
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 443, in to
    return self._apply(convert)
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 203, in _apply
    module._apply(fn)
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 203, in _apply
    module._apply(fn)
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 203, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 225, in _apply
    param_applied = fn(param)
  File "C:\Users\samsung\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 441, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: invalid device ordinal
```

PS. I'm a noob with zero knowledge and would love a simple rundown of how to fix this. Thank you.

favodm avatar Jun 08 '20 05:06 favodm

In any case, probably 2GB will not be enough. In my case VRAM consumption is 2560MB.

aufr33 avatar Jun 09 '20 08:06 aufr33

> In any case, probably 2GB will not be enough. In my case VRAM consumption is 2560MB.

In my case, using the option `--window_size 384` I managed to make it work with 2 GB of VRAM (see #24), so VRAM size might not be the problem.

> I just installed everything and tried the script. I have 2 GPUs, the internal one is 1GB and NVIDIA 2GB.

What do you mean by "internal one"? An integrated GPU? If so, I'm pretty sure CUDA isn't available for integrated graphics, so CUDA detects only one GPU (indexed with `0`). Try running these commands in Python:

```python
import torch
torch.cuda.device_count()
```

It should print `1` (i.e. one NVIDIA GPU found). In that case, the out-of-memory error is related to the NVIDIA GPU with 2 GB of VRAM: try running `inference.py --input 1.wav --gpu 0 --window_size 384`.

AlbyTree avatar Jun 09 '20 17:06 AlbyTree
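The device-index logic AlbyTree describes can be sketched without a GPU. `pick_device` below is a hypothetical helper for illustration (not part of vocal-remover): CUDA numbers only the visible NVIDIA GPUs starting from 0, so with a single card any other index is an invalid ordinal.

```python
def pick_device(requested_gpu, device_count):
    """Map a --gpu argument onto a valid device string.

    CUDA indexes visible NVIDIA GPUs from 0; an integrated GPU is not
    counted, so with one NVIDIA card only index 0 is valid.
    """
    if 0 <= requested_gpu < device_count:
        return "cuda:{}".format(requested_gpu)
    # Out-of-range ordinal: fall back to CPU instead of letting
    # model.to(device) raise "invalid device ordinal".
    return "cpu"

# With torch.cuda.device_count() == 1:
print(pick_device(0, 1))  # cuda:0 -> the 2 GB NVIDIA card
print(pick_device(1, 1))  # cpu    -> --gpu 1 is out of range
```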


I noticed that changing the window size greatly reduces conversion quality for me.

TRvlvr avatar Jul 01 '20 00:07 TRvlvr
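The window-size trade-off discussed above comes from how inference splits the spectrogram: the model processes one fixed-width chunk at a time, so peak VRAM scales with the chunk width, while narrower chunks give the model less context per pass, which is one plausible reason quality drops. A minimal sketch of the chunking, assuming a hypothetical 2000-frame spectrogram (the numbers are illustrative, not measured from vocal-remover):

```python
import numpy as np

def split_into_windows(spec, window_size):
    """Split a (bins, frames) spectrogram into fixed-width chunks."""
    chunks = []
    for start in range(0, spec.shape[1], window_size):
        chunks.append(spec[:, start:start + window_size])
    return chunks

spec = np.zeros((1024, 2000))  # hypothetical spectrogram: 1024 bins, 2000 frames
print(len(split_into_windows(spec, 512)))  # 4 chunks of up to 512 frames
print(len(split_into_windows(spec, 384)))  # 6 smaller chunks -> lower peak VRAM
```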