
Kernel died

Open mortsnort opened this issue 9 months ago • 10 comments

I always get "The kernel for OpenVoice/demo_part3.ipynb appears to have died. It will restart automatically." when running the demo_part3 Jupyter notebook in a Linux environment. This happens during the ### Obtain Tone Color Embedding cell. I have 16GB of VRAM. Is this not enough for voice cloning? It would be helpful if the hardware (and Torch version) requirements were documented.

mortsnort avatar May 03 '24 21:05 mortsnort

Same issue. OpenVoice v1 is OK, but v2:

Loaded checkpoint 'checkpoints_v2/converter/checkpoint.pth'
missing/unexpected keys: [] []
OpenVoice version: v2
Could not load library libcudnn_cnn_infer.so.8. Error: libcudnn_cnn_infer.so.8: cannot open shared object file: No such file or directory
Please make sure libcudnn_cnn_infer.so.8 is in your library path!

hungtooc avatar May 05 '24 04:05 hungtooc

Same issue here with demo_part3 on Windows

4ssil avatar May 06 '24 12:05 4ssil

I just ran into the same issue. Here's how I resolved it on Ubuntu 20.04 with Python 3.10, using a Python virtual environment (venv):

  • context: PyTorch already comes bundled with cuDNN. One option for resolving this error is to ensure PyTorch can find the bundled cuDNN. The error above indicates that your LD_LIBRARY_PATH doesn't point to the location containing the bundled cuDNN library.
  • locate the file: e.g., via $ find ~ -name "libcudnn_cnn_infer.so.8", which pointed me to /home/xxx/dev/openvoicev2_venv/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8
  • set LD_LIBRARY_PATH: e.g., via $ export LD_LIBRARY_PATH=/home/xxx/dev/openvoicev2_venv/lib/python3.10/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH
  • afterwards, I could run the demo script on the command line as well as in jupyter-lab without errors.
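The steps above can be sketched as a short shell snippet. This is a minimal sketch, assuming torch (with pip's bundled nvidia-cudnn wheel) is installed in the active virtual environment; the site-packages layout below matches pip's nvidia-cudnn wheels and may differ in other setups:

```shell
# Derive the venv's site-packages directory instead of hard-coding a user path
SITE_PACKAGES="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"
CUDNN_DIR="$SITE_PACKAGES/nvidia/cudnn/lib"

# Prepend the bundled cuDNN directory to LD_LIBRARY_PATH if it exists
if [ -d "$CUDNN_DIR" ]; then
    export LD_LIBRARY_PATH="$CUDNN_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    echo "Added $CUDNN_DIR to LD_LIBRARY_PATH"
else
    echo "Bundled cuDNN not found under $CUDNN_DIR"
fi
```

Note that an `export` only affects the current shell session; add the line to your shell profile (or the Jupyter kernel's environment) to make it persistent.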

MarkuzK avatar May 06 '24 15:05 MarkuzK

Same issue here on Ubuntu

Loaded checkpoint 'checkpoints_v2/converter/checkpoint.pth'
missing/unexpected keys: [] []

Setting LD_LIBRARY_PATH did not seem to help

salvador-blanco avatar May 14 '24 17:05 salvador-blanco

I'm experiencing the same thing in my WSL2 Ubuntu environment. When I run the code below, the kernel restarts. I'm using an RTX 4090 with 24GB of VRAM, so I don't think it's a hardware issue.

reference_speaker = 'resources/example_reference.mp3' # This is the voice you want to clone
target_se, audio_name = se_extractor.get_se(reference_speaker, tone_color_converter, vad=False)

Hangsiin avatar May 20 '24 16:05 Hangsiin

I don't know if this might help anyone here, but I kept getting the error "Could not load library cudnn_ops_infer64_8.dll. Error code 126" when trying to run the code from the demo_part3 Jupyter notebook as a .py file. If anyone has that problem too, I was able to solve it by doing the following:

  • I downloaded the zip-file for cuDNN v8.9.7 from https://developer.nvidia.com/rdp/cudnn-archive#a-collapse897-120
  • I just extracted the files "cudnn_cnn_infer64_8.dll" and "cudnn_ops_infer64_8.dll" from the \bin folder of that zip into the torch-folder of my venv for the project (.venv\Lib\site-packages\torch\lib).

This seems to have fixed it for me (running on CPU), so I thought I'd post it here in case anyone runs into the same issue.
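After copying the DLLs, a quick (hypothetical) sanity check is to list which cudnn* files actually ended up next to torch's own libraries; the lib path is derived from the installed torch package rather than hard-coded, so this sketch works for any venv location:

```python
# Sketch: report the cuDNN files sitting in torch's lib directory.
# Assumes torch was pip-installed; prints a notice instead of failing if it wasn't.
import importlib.util
from pathlib import Path

spec = importlib.util.find_spec("torch")
if spec is None or spec.origin is None:
    found = []
    print("torch is not installed in this environment")
else:
    lib_dir = Path(spec.origin).parent / "lib"  # e.g. .venv\Lib\site-packages\torch\lib
    found = sorted(p.name for p in lib_dir.glob("cudnn*")) if lib_dir.is_dir() else []
    print("cuDNN files in", lib_dir, "->", found or "none")
```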

FlexTestHD avatar Jun 04 '24 23:06 FlexTestHD

To fix this on windows:

  1. Go to the NVIDIA cuDNN download page
  2. Select the appropriate cuDNN version for your CUDA version (cuDNN v8.9.7 for CUDA 12.x)
  3. Download the cuDNN archive for Windows
  4. Extract the contents of the downloaded file
  5. Navigate to the extracted directory and copy the following files to your CUDA Toolkit installation directory:
  • Include files: Copy cudnn*.h files from cudnn/include to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\include
  • Library files: Copy cudnn*.lib files from cudnn/lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\lib\x64
  • DLL files: Copy cudnn*.dll files from cudnn/bin to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\bin
  6. Set environment variables: open the Start menu, search for "environment variables", and click "Edit the system environment variables"
  7. In the System Properties window, click the "Environment Variables" button
  8. Under "System variables", select the Path variable and click "Edit"
  9. Add the following paths to the Path variable (if they are not already there):
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\libnvvp
  10. Verify the installation: open a command prompt and run the following to confirm CUDA is installed correctly:
nvcc --version

You should see information about the CUDA compiler.

To verify the cuDNN installation, you can write and compile a small CUDA program that uses cuDNN, or you can use PyTorch to check whether it detects cuDNN. First install PyTorch:

pip install torch

Then run a simple script:

import torch
print("Is CUDA available: ", torch.cuda.is_available())
print("CUDA version: ", torch.version.cuda)
print("cuDNN version: ", torch.backends.cudnn.version())
print("Number of GPUs: ", torch.cuda.device_count())

If it detects and lists your GPU, then your cuDNN installation is successful

vladlearns avatar Jun 06 '24 20:06 vladlearns

@vladlearns Thank you very much for the detailed instructions!

FlexTestHD avatar Jun 06 '24 20:06 FlexTestHD

Edit: don't forget: with CUDA 12.0 and cuDNN 8.9.7, you need to install a PyTorch build that targets CUDA 12.x. PyTorch provides binaries for specific CUDA versions, so you need to specify the correct one when installing.

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
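To confirm the wheel you ended up with actually targets CUDA 12.x, a small (hypothetical) check can read the build's CUDA version; the messages below are illustrative, not from PyTorch's docs:

```python
# Sketch: verify the installed torch wheel was built against CUDA 12.x.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    built = None
    print("torch is not installed; run the pip command above first")
else:
    import torch
    built = torch.version.cuda  # None for CPU-only wheels
    print("torch built for CUDA:", built)
    if built is None or not built.startswith("12"):
        print("warning: this wheel does not target CUDA 12.x; reinstall with the cu121 index URL")
```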

If you want to run in the current session, you need to manually update the Path variable for that session so the new CUDA paths take priority:

$currentPath = [System.Environment]::GetEnvironmentVariable("Path", [System.EnvironmentVariableTarget]::Machine)

$newCudaPaths = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\libnvvp;"

# Remove old (if any)
$currentPath = $currentPath -replace "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v\d+\.\d+\\bin;",""
$currentPath = $currentPath -replace "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v\d+\.\d+\\libnvvp;",""

# Combine the new paths with the current
$updatedPath = $newCudaPaths + $currentPath

# Update
[System.Environment]::SetEnvironmentVariable("Path", $updatedPath, [System.EnvironmentVariableTarget]::Process)

echo $env:Path


vladlearns avatar Jun 06 '24 21:06 vladlearns

Made a pull request solving all of this in Docker: https://github.com/myshell-ai/OpenVoice/pull/264 @Hangsiin @salvador-blanco @mortsnort @hungtooc @4ssil @FlexTestHD, feel free to test

vladlearns avatar Jun 07 '24 13:06 vladlearns

> (quoted vladlearns's Windows instructions from the comment above)

Works like a charm! Thanks so much!

taniyow avatar Aug 26 '24 03:08 taniyow

No worries! I'm happy to help! @mortsnort may we close this one? Looks like it is resolved

vladlearns avatar Aug 26 '24 17:08 vladlearns

yes


mortsnort avatar Aug 26 '24 17:08 mortsnort

@mortsnort You are the only one who can close it, my friend. For those who will be looking for a solution after this issue is closed, here is the link with the fix: https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153388034; also check https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153417117 for additional info


vladlearns avatar Aug 29 '24 10:08 vladlearns