OpenVoice
Kernel died
I always get "The kernel for OpenVoice/demo_part3.ipynb appears to have died. It will restart automatically." when running the demo_part3 Jupyter notebook in a Linux environment. It happens during the ### Obtain Tone Color Embedding cell. I have 16 GB of VRAM; is this not enough for voice cloning? It would be helpful if the hardware and Torch version requirements were documented.
Same issue. OpenVoice v1 is OK, but v2:
Loaded checkpoint 'checkpoints_v2/converter/checkpoint.pth'
missing/unexpected keys: [] []
OpenVoice version: v2
Could not load library libcudnn_cnn_infer.so.8. Error: libcudnn_cnn_infer.so.8: cannot open shared object file: No such file or directory
Please make sure libcudnn_cnn_infer.so.8 is in your library path!
Same issue here with demo_part3 on Windows.
I just ran into the same issue. Here's how I resolved it (on Ubuntu 20.04 w Python 3.10 using a python virtual environment (venv)):
- context: PyTorch already comes bundled with cuDNN. One option for resolving this error is to ensure PyTorch can find the bundled cuDNN. The error above indicates that your LD_LIBRARY_PATH doesn't point to the location containing the bundled cuDNN library.
- locate the file: e.g., via
$ find ~ -name "libcudnn_cnn_infer.so.8"
which pointed me to /home/xxx/dev/openvoicev2_venv/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8
- set LD_LIBRARY_PATH: e.g., via
$ export LD_LIBRARY_PATH=/home/xxx/dev/openvoicev2_venv/lib/python3.10/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH
- afterwards, I can run the demo script on the command line as well as in jupyter-lab w/o errors.
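The locate step above can also be scripted. A minimal sketch (assuming, as in the venv path above, that cuDNN was installed into site-packages via pip) that prints every directory containing the bundled library:

```python
import pathlib

def find_cudnn_dirs(root):
    """Return sorted directories under `root` containing libcudnn_cnn_infer.so.* files."""
    return sorted({p.parent for p in pathlib.Path(root).rglob("libcudnn_cnn_infer.so*")})

if __name__ == "__main__":
    import site
    for sp in site.getsitepackages():
        for d in find_cudnn_dirs(sp):
            print(d)  # prepend this directory to LD_LIBRARY_PATH
```

Each printed directory is a candidate for the export command above.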
Same issue here on Ubuntu
Loaded checkpoint 'checkpoints_v2/converter/checkpoint.pth'
missing/unexpected keys: [] []
Setting LD_LIBRARY_PATH did not help.
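One common reason setting LD_LIBRARY_PATH appears to have no effect: the dynamic loader reads it when a process starts, so setting it from inside an already-running Jupyter kernel (e.g. via os.environ or %env) won't affect that kernel's library search path. It has to be exported in the shell before launching jupyter-lab. A quick sketch to see what the current process actually inherited:

```python
import os

# The dynamic loader captures LD_LIBRARY_PATH at process startup, so this value
# must have been exported *before* the Python process (or Jupyter kernel) launched.
inherited = os.environ.get("LD_LIBRARY_PATH", "")
print("LD_LIBRARY_PATH seen by this process:", inherited or "(not set)")
```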
I'm experiencing the same thing in my WSL2 Ubuntu environment. When I run the code below, the kernel restarts. I'm using an RTX 4090 with 24 GB of VRAM, so I don't think it's a hardware issue.
reference_speaker = 'resources/example_reference.mp3'  # This is the voice you want to clone
target_se, audio_name = se_extractor.get_se(reference_speaker, tone_color_converter, vad=False)
I don't know if this might help anyone here, but I kept getting the error "Could not load library cudnn_ops_infer64_8.dll. Error code 126" when running the code from the demo_part3 Jupyter notebook as a .py file. If anyone has that problem too, I was able to solve it by doing the following:
- I downloaded the zip-file for cuDNN v8.9.7 from https://developer.nvidia.com/rdp/cudnn-archive#a-collapse897-120
- I just extracted the files "cudnn_cnn_infer64_8.dll" and "cudnn_ops_infer64_8.dll" from the \bin folder of that zip into the torch-folder of my venv for the project (.venv\Lib\site-packages\torch\lib).
This seems to have fixed it for me (using it on CPU), so I thought I'd post it here in case anyone runs into the same issue.
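To check whether the copied DLLs are actually findable, here's a small sketch (Windows-specific; the DLL names match the cuDNN 8.x files mentioned above):

```python
import ctypes
import sys

def can_load(lib_name):
    """Return True if the shared library can be loaded from the current search path."""
    try:
        ctypes.CDLL(lib_name)
        return True
    except OSError:
        return False

if sys.platform == "win32":
    for name in ("cudnn_ops_infer64_8.dll", "cudnn_cnn_infer64_8.dll"):
        print(name, "OK" if can_load(name) else "NOT FOUND")
```

If a DLL reports NOT FOUND, it isn't in the torch\lib folder (or any directory on the search path) yet.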
To fix this on windows:
- Go to the NVIDIA cuDNN download page
- Select the appropriate version of cuDNN for your cuda version (cuDNN v8.9.7 for CUDA 12.x)
- Download the cuDNN installer for win
- Extract the contents of the downloaded file
- Copy Files to cuda toolkit directory
- Navigate to the extracted directory
- Copy the following files to your cuda installation directory:
- Include files: Copy cudnn*.h files from cudnn/include to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\include
- Library files: Copy cudnn*.lib files from cudnn/lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\lib\x64
- DLL files: Copy cudnn*.dll files from cudnn/bin to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\bin
- Set Environment Variables
- Open the start menu and search for "environment variables"
- Click on "Edit the system environment variables"
- In the system properties window, click on the "environment variables" button
- In the env vars window, find the path var under "system variables" and select it, then click "edit"
- Add the following paths to the Path var (if they are not already there):
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\libnvvp
- Verify Installation
- Open a command prompt and run the following command to verify that CUDA is installed correctly:
nvcc --version
You should see information about the cuda compiler
To verify the cuDNN installation, you can write and compile a small CUDA program that uses cuDNN, or you can use PyTorch to check whether it detects cuDNN.
pip install torch
Run a simple script:
import torch
print("Is CUDA available: ", torch.cuda.is_available())
print("CUDA version: ", torch.version.cuda)
print("cuDNN version: ", torch.backends.cudnn.version())
print("Number of GPUs: ", torch.cuda.device_count())
If it reports CUDA as available and counts your GPU, your cuDNN installation is successful.
@vladlearns Thank you very much for the detailed instructions!
edit: Don't forget: to use CUDA 12.x and cuDNN 8.9.7 with PyTorch, you need to install a version of PyTorch built for CUDA 12.x. PyTorch provides binaries for specific CUDA versions, so you need to specify the correct version when installing:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
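A quick way to confirm which CUDA toolkit an installed wheel targets is the local-version suffix on torch.__version__ (e.g. "+cu121" for the cu121 index above). A minimal sketch of that check, using hypothetical version strings for illustration:

```python
def cuda_tag(torch_version):
    """Extract the build tag (e.g. 'cu121') from a PyTorch version string, or None if absent."""
    _, sep, local = torch_version.partition("+")
    return local if sep else None

# Hypothetical version strings for illustration:
print(cuda_tag("2.1.0+cu121"))  # cu121
print(cuda_tag("2.1.0+cpu"))    # cpu
print(cuda_tag("2.1.0"))        # None
```

In a real environment you would pass torch.__version__ to this function; a "cpu" tag or no tag means the wheel cannot use your GPU at all, regardless of the cuDNN setup.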
If you want to run in the current session, you need to manually update the Path variable for that session so the new CUDA paths take priority:
$currentPath = [System.Environment]::GetEnvironmentVariable("Path", [System.EnvironmentVariableTarget]::Machine)
$newCudaPaths = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\libnvvp;"
# Remove old (if any)
$currentPath = $currentPath -replace "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v\d+\.\d+\\bin;",""
$currentPath = $currentPath -replace "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v\d+\.\d+\\libnvvp;",""
# Combine the new paths with the current
$updatedPath = $newCudaPaths + $currentPath
# Update
[System.Environment]::SetEnvironmentVariable("Path", $updatedPath, [System.EnvironmentVariableTarget]::Process)
echo $env:Path
Made a pull request solving all of this in Docker: https://github.com/myshell-ai/OpenVoice/pull/264 @Hangsiin @salvador-blanco @mortsnort @hungtooc @4ssil @FlexTestHD, feel free to test.
Works like a charm! Thanks so much!
No worries! I'm happy to help! @mortsnort may we close this one? Looks like it is resolved
yes
@mortsnort You are the only one who can close it, my friend. For those who will be looking for a solution after this issue is closed, here is the link with the fix: https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153388034; also check https://github.com/myshell-ai/OpenVoice/issues/215#issuecomment-2153417117 for additional info.