Solov2-TensorRT-CPP
Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::322, condition: bindings[x] != nullptr
When I run `./build/segment ./config/config.yaml`, I get the error "[E] [TRT] 3: [executionContext.cpp::enqueueInternal::322] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::322, condition: bindings[x] != nullptr)". What might be the reason? Did I export the right ONNX and tensorrt_model_bin? This is the output of running "build_model" and "demo":
./build/build_model ./config/config.yaml
~/Solov2-TensorRT-CPP/cmake-build-debug/build_model ./config/config.yaml
config_file:./config/config.yaml
createInferBuilder
[05/25/2022-22:57:19] [I] [TRT] [MemUsageChange] Init CUDA: CPU +299, GPU +0, now: CPU 301, GPU 309 (MiB)
createNetwork
createBuilderConfig
createParser
parseFromFile:~/Solov2-TensorRT-CPP/ONNX/SOLOv2_light_R34.onnx
[05/25/2022-22:57:19] [I] [TRT] ----------------------------------------------------------------
[05/25/2022-22:57:19] [I] [TRT] Input filename: ~/Solov2-TensorRT-CPP/ONNX/SOLOv2_light_R34.onnx
[05/25/2022-22:57:19] [I] [TRT] ONNX IR version: 0.0.4
[05/25/2022-22:57:19] [I] [TRT] Opset version: 11
[05/25/2022-22:57:19] [I] [TRT] Producer name: pytorch
[05/25/2022-22:57:19] [I] [TRT] Producer version: 1.3
[05/25/2022-22:57:19] [I] [TRT] Domain:
[05/25/2022-22:57:19] [I] [TRT] Model version: 0
[05/25/2022-22:57:19] [I] [TRT] Doc string:
[05/25/2022-22:57:19] [I] [TRT] ----------------------------------------------------------------
[05/25/2022-22:57:19] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
input shape:input (1, 3, 384, 1152)
output shape:cate_pred (3872, 80)
enableDLA
buildEngineWithConfig
[05/25/2022-22:57:20] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 664 MiB, GPU 671 MiB
[05/25/2022-22:57:21] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +70, GPU +68, now: CPU 822, GPU 1012 (MiB)
[05/25/2022-22:57:21] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 822, GPU 1022 (MiB)
[05/25/2022-22:57:21] [W] [TRT] Detected invalid timing cache, setup a local cache instead
[05/25/2022-22:57:24] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[05/25/2022-22:58:39] [I] [TRT] Detected 1 inputs and 13 output network tensors.
[05/25/2022-22:58:39] [I] [TRT] Total Host Persistent Memory: 274640
[05/25/2022-22:58:39] [I] [TRT] Total Device Persistent Memory: 83921920
[05/25/2022-22:58:39] [I] [TRT] Total Scratch Memory: 0
[05/25/2022-22:58:39] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 158 MiB, GPU 675 MiB
[05/25/2022-22:58:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 1298, GPU 1635 (MiB)
[05/25/2022-22:58:39] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1298, GPU 1643 (MiB)
[05/25/2022-22:58:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1298, GPU 1627 (MiB)
[05/25/2022-22:58:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1297, GPU 1611 (MiB)
[05/25/2022-22:58:39] [I] [TRT] [MemUsageSnapshot] Builder end: CPU 1210 MiB, GPU 1381 MiB
serializeModel
done
Process finished with exit code 0
./build/demo ./config/config.yaml
~/Solov2-TensorRT-CPP/cmake-build-debug/segment ./config/config.yaml
config_file:./config/config.yaml
[05/25/2022-23:35:10] [I] [TRT] [MemUsageChange] Init CUDA: CPU +298, GPU +0, now: CPU 411, GPU 309 (MiB)
[05/25/2022-23:35:11] [I] [TRT] Loaded engine size: 81 MB
[05/25/2022-23:35:11] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 411 MiB, GPU 309 MiB
[05/25/2022-23:35:22] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +2140, GPU +980, now: CPU 2804, GPU 1731 (MiB)
[05/25/2022-23:35:22] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 2804, GPU 1741 (MiB)
[05/25/2022-23:35:22] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 2804, GPU 1725 (MiB)
[05/25/2022-23:35:22] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 2804 MiB, GPU 1725 MiB
[05/25/2022-23:35:22] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 2804 MiB, GPU 1725 MiB
[05/25/2022-23:35:22] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 2804, GPU 1733 (MiB)
[05/25/2022-23:35:22] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 2804, GPU 1741 (MiB)
[05/25/2022-23:35:22] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 2811 MiB, GPU 2166 MiB
[05/25/2022-23:35:23] [E] [TRT] 3: [executionContext.cpp::enqueueInternal::322] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::322, condition: bindings[x] != nullptr
)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: invalid argument
Exception raised from getDeviceFromPtr at ../aten/src/ATen/cuda/CUDADevice.h:13 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
According to "Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::322, condition: bindings[x] != nullptr)", I guess there are some problems with the ONNX model.
Try a predefined model: baidudisk, fetch code: qdsm