
Torch not compiled with CUDA enabled (on RTX 3060 6 GB card with CUDA 12.1 installed)

Open syedusama5556 opened this issue 2 years ago • 4 comments

Describe the bug

Torch not compiled with CUDA enabled

C:\Users\user1\Documents\Voice-Clonning>svc infer "C:\\Users\\user1\\Documents\\Voice-Clonning\\Waiting_for_Rain_30sec.wav" --speaker "tokaiteio" -c "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\config.json" -m "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\G_531200.pth" -d cuda
Traceback (most recent call last):
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\Scripts\svc.exe\__main__.py", line 7, in <module>
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\click\core.py", line 1130, in __call__    return self.main(*args, **kwargs)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\so_vits_svc_fork\__main__.py", line 248, in infer
    infer(
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\so_vits_svc_fork\inference\main.py", line 46, in infer
    svc_model = Svc(
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\so_vits_svc_fork\inference\core.py", line 109, in __init__
    self.hubert_model = utils.get_hubert_model(
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\so_vits_svc_fork\utils.py", line 154, in get_hubert_model
    ).to(device)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\transformers\modeling_utils.py", line 1896, in to
    return super().to(*args, **kwargs)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\user1\Documents\Voice-Clonning\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

When I run inference on the GPU by adding -d cuda, this error appears. I have an RTX 3060 6 GB card with CUDA 12.1 installed.
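A quick way to confirm the diagnosis is to check whether the installed torch wheel was compiled with CUDA at all. This is a sketch, assuming torch is importable in the same venv that svc runs from:

```python
import torch

# A CPU-only wheel has no CUDA runtime compiled in, which is exactly
# what raises "Torch not compiled with CUDA enabled" when -d cuda is used.
print("torch build:", torch.__version__)            # CPU-only builds often end in "+cpu"
print("compiled CUDA version:", torch.version.cuda)  # None => CPU-only build
print("CUDA available:", torch.cuda.is_available())  # False => the error above
```

If torch.version.cuda prints None, the problem is the torch wheel itself, not the driver or the CUDA Toolkit on the system.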

To Reproduce

I ran the following command:

svc infer "C:\\Users\\user1\\Documents\\Voice-Clonning\\Waiting_for_Rain_30sec.wav" --speaker "tokaiteio" -c "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\config.json" -m "C:\Users\user1\Documents\Voice-Clonning\AllVoices\Tokai-Teio\G_531200.pth" -d cuda

Additional context

nvidia-smi result

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.61                 Driver Version: 531.61       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 L...  WDDM | 00000000:01:00.0 Off |                  N/A |
| N/A   51C    P0               27W /  N/A|      0MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

syedusama5556 avatar Apr 23 '23 19:04 syedusama5556

From my understanding, CUDA 12.1 is not supported.

Can you try installing CUDA 11.8 and the matching cuDNN for 11.8? (I don't remember which cuDNN version it was.)

Lordmau5 avatar Apr 23 '23 23:04 Lordmau5

Just a note: I run this on CUDA 12.1 myself. In general, I would expect "Torch not compiled with CUDA enabled" issues to just be a bad install of torch. I would try manually installing torch from https://pytorch.org/ into your env. I'm not sure how reliable the pip installer is at choosing the right torch build, since I always install torch manually.

GarrettConway avatar Apr 30 '23 13:04 GarrettConway

Judging by this issue, I don't think installing the CUDA Toolkit and cuDNN is even necessary: https://github.com/voicepaw/so-vits-svc-fork/issues/499

So yeah, maybe a reinstall of PyTorch could already fix it?

Lordmau5 avatar Apr 30 '23 14:04 Lordmau5

Hello, I'm using an RTX 3060 (laptop) with version 3.7.2, and both infer and train work correctly even though nvidia-smi says CUDA 12.1.

For this case, follow the PyTorch installation instructions correctly: https://pytorch.org/get-started/locally/

pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

The additional --index-url https://download.pytorch.org/whl/cu118 part is the most important: without it, pip pulls the default build, which on Windows is CPU-only.
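Putting the advice in this thread together, a clean reinstall inside the project's venv might look like the sketch below. The explicit uninstall step is my addition: it makes sure any existing CPU-only wheels are actually removed before the CUDA build is installed.

```shell
# Run inside the activated venv that svc uses (paths/packages from this thread).
pip uninstall -y torch torchaudio    # drop the CPU-only wheels first
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

# Verify: the compiled CUDA version should now be a string like "11.8",
# and availability should be True on a working driver install.
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```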

MikuAuahDark avatar May 01 '23 10:05 MikuAuahDark