tensorrt_demos
cuDNN initialization error: I already installed cuDNN, so why does this happen?
I'm using a Jetson AGX Orin for my project. I installed JetPack 5.1.2 (CUDA 11.4, cuDNN 8.6.0, and TensorRT 8.5.2.2) with NVIDIA SDK Manager.
While following the instructions for this project, the error below occurred.
dkim@ubuntu:~/tensorrt_demos/yolo$ python3 onnx_to_tensorrt.py -m yolov4-416
Loading the ONNX file...
Adding yolo_layer plugins.
Adding a concatenated output as "detections".
Naming the input tensort as "input".
Building the TensorRT engine. This would take a while...
(Use "--verbose" or "-v" to enable verbose logging.)
onnx_to_tensorrt.py:146: DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
builder.max_batch_size = MAX_BATCH_SIZE
onnx_to_tensorrt.py:148: DeprecationWarning: Use set_memory_pool_limit instead.
config.max_workspace_size = 1 << 30
onnx_to_tensorrt.py:170: DeprecationWarning: Use build_serialized_network instead.
engine = builder.build_engine(network, config)
[09/21/2023-18:18:18] [TRT] [E] 1: [executionResources.cpp::setTacticSources::178] Error Code 1: Cudnn (Could not initialize cudnn, please check cudnn installation.)
ERROR: failed to build the TensorRT engine!
But cuDNN is installed correctly, as I said above. Furthermore, I copied the cuDNN library files again into my CUDA folders (/usr/local/cuda/include and /usr/local/cuda/lib64).
The packages also show up in the apt list:
dkim@ubuntu:/usr/local/cuda/include$ sudo apt list | grep libcudnn
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libcudnn8-dev/stable,now 8.6.0.166-1+cuda11.4 arm64 [installed]
libcudnn8-samples/stable,now 8.6.0.166-1+cuda11.4 arm64 [installed]
libcudnn8/stable,now 8.6.0.166-1+cuda11.4 arm64 [installed]
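To double-check that the dynamic linker can actually load libcudnn (and not just that the packages are listed), I also ran a small Python check. This is my own diagnostic sketch, not part of tensorrt_demos:

```python
# Diagnostic sketch: load the cuDNN shared library directly via ctypes and
# report its runtime version. If this fails, the problem is the linker's view
# of libcudnn (LD_LIBRARY_PATH / ldconfig), not the apt packages.
import ctypes
import ctypes.util


def cudnn_runtime_version():
    """Return cuDNN's runtime version as an int (e.g. 8600 for 8.6.0),
    or None if the library cannot be found or loaded."""
    # "libcudnn.so.8" is an assumption matching the cuDNN 8.x soname.
    name = ctypes.util.find_library("cudnn") or "libcudnn.so.8"
    try:
        lib = ctypes.CDLL(name)
    except OSError:
        return None
    lib.cudnnGetVersion.restype = ctypes.c_size_t
    return lib.cudnnGetVersion()


if __name__ == "__main__":
    ver = cudnn_runtime_version()
    if ver is None:
        print("libcudnn could not be loaded -- check ldconfig / LD_LIBRARY_PATH")
    else:
        print(f"cuDNN runtime version: {ver}")
```

If this prints a version, the library itself is loadable, which would point the finger at TensorRT's use of it rather than the installation.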
Could anyone tell me how I can solve this problem? Thank you.
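Since the error is raised from setTacticSources, one thing I'm considering trying is excluding the cuDNN tactic source so the builder falls back to cuBLAS-based tactics. This is an untested sketch against the TensorRT 8.5 Python API, not something from this project:

```python
# Untested workaround sketch: clear the CUDNN bit from the builder config's
# tactic sources, so engine building does not try to initialize cuDNN.
try:
    import tensorrt as trt
except ImportError:
    trt = None  # TensorRT not available; the helper below still works on mocks


def disable_cudnn_tactics(config, trt_module):
    """Remove the CUDNN tactic source from an IBuilderConfig's bitmask.

    `config` is a tensorrt.IBuilderConfig and `trt_module` is the imported
    tensorrt module (passed in so this sketch is testable without TensorRT).
    """
    sources = config.get_tactic_sources()
    sources &= ~(1 << int(trt_module.TacticSource.CUDNN))
    config.set_tactic_sources(sources)
```

In onnx_to_tensorrt.py this would be called on `config` right before `builder.build_engine(network, config)`. If the engine then builds, the failure is isolated to TensorRT's cuDNN tactic path rather than the ONNX conversion itself.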