PaddleOCR
RuntimeError: (PreconditionNotMet) Cannot load cudnn shared library. Cannot invoke method cudnnGetVersion. [Hint: cudnn_dso_handle should not be null.] (at /paddle/paddle/phi/backends/dynload/cudnn.cc:64)
Please provide the following information to quickly locate the problem
- System Environment: Ubuntu 20.04
- Version: Paddle: / PaddleOCR: (not filled in)
- Related components: GPU: NVIDIA RTX A2000 12GB, nvcc 10.1, driver version 535.161.07
- Command Code: result = ocr.ocr(img_path, cls=True)
- Complete Error Message:
>>> img_path = '/home/usr/Documents/backend/image/a_picked.jpg'
>>> result = ocr.ocr(img_path, cls=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/usr/anaconda3/envs/env/lib/python3.8/site-packages/paddleocr/paddleocr.py", line 661, in ocr
    dt_boxes, rec_res, _ = self.__call__(img, cls)
  File "/home/usr/anaconda3/envs/env/lib/python3.8/site-packages/paddleocr/tools/infer/predict_system.py", line 76, in __call__
    dt_boxes, elapse = self.text_detector(img)
  File "/home/usr/anaconda3/envs/env/lib/python3.8/site-packages/paddleocr/tools/infer/predict_det.py", line 244, in __call__
    self.input_tensor.copy_from_cpu(img)
  File "/home/usr/anaconda3/envs/env/lib/python3.8/site-packages/paddle/inference/wrapper.py", line 52, in tensor_copy_from_cpu
    self._copy_from_cpu_bind(data)
RuntimeError: (PreconditionNotMet) Cannot load cudnn shared library. Cannot invoke method cudnnGetVersion. [Hint: cudnn_dso_handle should not be null.] (at /paddle/paddle/phi/backends/dynload/cudnn.cc:64)
What command were you running, or what operation were you performing, when this error occurred?
Getting the same error when calling the model concurrently; memory also gets consumed and is not released.
Same here on CUDA 12.4: [Hint: cudnn_dso_handle should not be null.] (at /paddle/paddle/phi/backends/dynload/cudnn.cc:64)
System CUDA 11.8 is supported very well and installs successfully on the first try, but support for the newer CUDA 12.4 is not friendly. After a lot of trial and error, the rough steps I ended up with are below; no guarantee they are exactly right, haha.
1. For CUDA 11.8, use the PaddlePaddle build that bundles the cuDNN dynamic libraries. First download the wheel package locally from the page below, then install it locally with python3 -m pip install [name].whl ([name] is the wheel file name):
https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html
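A minimal sketch of that install, assuming the wheel has already been downloaded into the current directory ([name] is only a placeholder, use the real name of the downloaded wheel):
# install the locally downloaded PaddlePaddle wheel
python3 -m pip install ./[name].whl
# check which CUDA/cuDNN versions the installed build was compiled against
python3 -c "import paddle; print(paddle.version.cuda(), paddle.version.cudnn())"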
2. Make sure the following packages are installed in the environment (version numbers are for reference only; all were installed from PyPI):
nvidia-cublas-cu12         12.1.3.1
nvidia-cuda-cupti-cu12     12.1.105
nvidia-cuda-nvrtc-cu12     12.1.105
nvidia-cuda-runtime-cu12   12.1.105
nvidia-cudnn-cu12          8.9.2.26
nvidia-cufft-cu12          11.0.2.54
nvidia-curand-cu12         10.3.2.106
nvidia-cusolver-cu12       11.4.5.107
nvidia-cusparse-cu12       12.1.0.106
nvidia-nccl-cu12           2.20.5
nvidia-nvjitlink-cu12      12.4.127
nvidia-nvtx-cu12           12.1.105
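One way to get these into the environment is to install them from PyPI with the versions listed above (a sketch; adjust the pins to whatever matches your setup):
python3 -m pip install \
  nvidia-cublas-cu12==12.1.3.1 nvidia-cuda-cupti-cu12==12.1.105 \
  nvidia-cuda-nvrtc-cu12==12.1.105 nvidia-cuda-runtime-cu12==12.1.105 \
  nvidia-cudnn-cu12==8.9.2.26 nvidia-cufft-cu12==11.0.2.54 \
  nvidia-curand-cu12==10.3.2.106 nvidia-cusolver-cu12==11.4.5.107 \
  nvidia-cusparse-cu12==12.1.0.106 nvidia-nccl-cu12==2.20.5 \
  nvidia-nvjitlink-cu12==12.4.127 nvidia-nvtx-cu12==12.1.105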
3. If paddleocr is installed, uninstall it (this probably is not strictly necessary).
4. If paddlepaddle-gpu is already installed, reinstall it as version 2.6.0, for example:
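A sketch of that reinstall; this assumes the plain PyPI package is enough, whereas the CUDA-specific build may instead have to come from the wheel link in step 1:
# remove whatever build is currently installed, then pin version 2.6.0
python3 -m pip uninstall -y paddlepaddle-gpu
python3 -m pip install paddlepaddle-gpu==2.6.0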
5. Set up the following missing dynamic libraries: libcublas.so, libcudnn.so, libcudnn_cnn_infer.so.8, libcudnn_ops_infer.so.8
※ Make sure the system environment variables are set as follows (cuda-12.4 is the target CUDA version):
export PATH=/usr/local/cuda/bin:/usr/local/cuda-12.4/bin:/usr/local/cuda-12.4/compute-sanitizer${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH
※ For the four dynamic libraries above, carry out the following steps in turn:
① Locate the copies that already exist on the system:
locate libcublas.so
locate libcudnn.so
locate libcudnn_cnn_infer.so.8
locate libcudnn_ops_infer.so.8
② Symlink them into /usr/lib/ (replace <source path> with the path found above):
sudo ln -s <source path> /usr/lib/libcublas.so
sudo ln -s <source path> /usr/lib/libcudnn.so
sudo ln -s <source path> /usr/lib/libcudnn_cnn_infer.so.8
sudo ln -s <source path> /usr/lib/libcudnn_ops_infer.so.8
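After creating the links, a quick sanity check (not part of the original steps) is to confirm the symlinks resolve and that Paddle can now load cuDNN; paddle.utils.run_check() is Paddle's built-in self-test:
# the symlinks should point at real files, not dangle
ls -l /usr/lib/libcublas.so /usr/lib/libcudnn.so /usr/lib/libcudnn_cnn_infer.so.8 /usr/lib/libcudnn_ops_infer.so.8
# fails loudly if cuDNN still cannot be loaded
python3 -c "import paddle; paddle.utils.run_check()"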
6. Go into the project and turn logging on: just comment out config.disable_glog_info(), then read the error log and debug from there.
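If you are unsure where that call lives in your install, one way to find it (the site-packages path is taken from the traceback above and may differ on your machine):
# locate the disable_glog_info() call inside the installed paddleocr package
grep -rn "disable_glog_info" /home/usr/anaconda3/envs/env/lib/python3.8/site-packages/paddleocr/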
Hope these steps help!