GPT-SoVITS
ASR inference error
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
System: Ubuntu; CUDA: 12.3; torch: 2.1.1
Is cuDNN installed? It isn't listed above, so it sounds like it hasn't been installed.
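A quick way to check (a minimal sketch, run inside the same conda environment) is to ask PyTorch what it sees; note this only covers the cuDNN bundled with PyTorch, while faster-whisper's CTranslate2 backend loads libcudnn_ops_infer.so.8 through the system loader and can still fail even if these print fine:
import torch
print(torch.version.cuda)                   # CUDA version this torch build targets
print(torch.backends.cudnn.is_available())  # False suggests the cuDNN libraries are missing
print(torch.backends.cudnn.version())       # e.g. 8902 for cuDNN 8.9.2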
After installing it, I get this error:
Traceback (most recent call last):
  File "/home/carey/GPT-SoVITS/tools/asr/fasterwhisper_asr.py", line 77, in execute_asr
    for segment in segments:
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 1106, in restore_speech_timestamps
    for segment in segments:
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 511, in generate_segments
    encoder_output = self.encode(segment)
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 762, in encode
    return self.model.encode(features, to_cpu=to_cpu)
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
Chinese transcription runs fine; the error only appears when I use Japanese with the multilingual model.
GPU: Tesla M40
noavx
Damo ASR runs, but faster-whisper doesn't.
Adding the path of the environment's libcudnn_ops_infer.so.8 to LD_LIBRARY_PATH fixed it.
After adding it, Damo ASR works, but the multilingual (faster-whisper) one still fails with:
Traceback (most recent call last):
  File "/home/carey/GPT-SoVITS/tools/asr/fasterwhisper_asr.py", line 77, in execute_asr
    for segment in segments:
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 1106, in restore_speech_timestamps
    for segment in segments:
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 511, in generate_segments
    encoder_output = self.encode(segment)
  File "/home/carey/anaconda3/envs/voice/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 762, in encode
    return self.model.encode(features, to_cpu=to_cpu)
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
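To separate this from GPT-SoVITS itself, a minimal faster-whisper call can be run directly (a hedged sketch: the model size, compute_type, and "audio.wav" are placeholders, not necessarily what tools/asr/fasterwhisper_asr.py uses):
from faster_whisper import WhisperModel
model = WhisperModel("large-v3", device="cuda", compute_type="float32")  # placeholders
segments, info = model.transcribe("audio.wav", language="ja")
for segment in segments:
    print(segment.start, segment.end, segment.text)
If this reproduces cudaErrorNoKernelImageForDevice, the problem sits in the CTranslate2/CUDA stack rather than in the GPT-SoVITS ASR tooling.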
Can the M40 even be used with CUDA 12? That sounds like a compatibility problem. Did a half-precision warning appear before the error? If so, change is_half in config.py; this card probably doesn't support half-precision computation.
The driver was downloaded from the Nvidia website and supports CUDA 12. If it were a CUDA problem, Damo ASR shouldn't run either, yet it does. The is_half warning did appear, and I have already changed is_half.
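cudaErrorNoKernelImageForDevice usually means the prebuilt CUDA kernels were not compiled for the card's compute capability, which is a separate issue from the missing cuDNN library. A quick check of what the card reports (the Tesla M40 is Maxwell, compute capability 5.2):
import torch
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # Tesla M40 reports (5, 2)
If the CTranslate2 build pulled in by faster-whisper does not ship kernels (or PTX) for that architecture, faster-whisper can fail this way even while the PyTorch-based Damo ASR path still runs.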
A possible fix:
1. Check whether libcudnn_ops_infer.so.8 exists:
find / -type f -name "*ibcudnn_cnn_infer*" -o -type d -name "*ibcudnn_cnn_infer*"
In my case it turned up here: /usr/local/lib/python3.9/site-packages/torch/lib/libcudnn_cnn_infer.so.8
2. Add that directory to the library path:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/python3.9/site-packages/torch/lib/
That was enough to fix it on my side.
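To confirm the export actually took effect, the library can be probed from the same shell (a minimal check):
import ctypes
ctypes.CDLL("libcudnn_ops_infer.so.8")  # raises OSError if the loader still cannot find it
print("libcudnn_ops_infer.so.8 found")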
On my machine, find couldn't locate ibcudnn_cnn_infer, so I used the following code to get the cuBLAS/cuDNN paths instead:
import os
# The nvidia.* submodules must be imported explicitly; "import nvidia" alone is not enough.
import nvidia.cublas.lib
import nvidia.cudnn.lib
print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))
Then add the environment variable:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/miniconda3/envs/GPTSoVits/lib/python3.10/site-packages/nvidia/cublas/lib:/home/user/miniconda3/envs/GPTSoVits/lib/python3.10/site-packages/nvidia/cudnn/lib
That solved the problem. The same issue has also been reported against the faster-whisper project: https://github.com/SYSTRAN/faster-whisper/issues/516
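As a quick sanity check that the faster-whisper backend can now load its CUDA libraries, CTranslate2 exposes a device-count helper (a minimal sketch):
import ctranslate2
print(ctranslate2.get_cuda_device_count())  # >= 1 once CUDA and cuDNN load correctly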
If you get an error saying the module named nvidia cannot be found, run the following commands to install nvidia-pyindex and the CUDA-related libraries:
pip install nvidia-pyindex
pip install nvidia-cublas-cu11
pip install nvidia-cudnn-cu11