
RuntimeError: Failed to parse onnx, In node 423 (importTopK): UNSUPPORTED_NODE: Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."

**Open** · jiaqizhang123-stack opened this issue on Jul 29, 2022 · 2 comments

Hello, when I run

```shell
python tools/onnx2tensorrt.py \
    /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py \
    /home/zhang/onnx/end2end.onnx \
    /home/zhang/onnx-trt \
    --device-id 0 \
    --log-level INFO
```

I get the following error:

```
2022-07-29 11:06:09,365 - mmdeploy - INFO - onnx2tensorrt:
    onnx_path: /home/zhang/onnx/end2end.onnx
    deploy_cfg: /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py
2022-07-29 11:06:09,410 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/zhang/anaconda3/envs/mmdeploy/lib/python3.9/site-packages/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[07/29/2022-11:06:09] [TRT] [I] [MemUsageChange] Init CUDA: CPU +152, GPU +0, now: CPU 214, GPU 346 (MiB)
[07/29/2022-11:06:09] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 214 MiB, GPU 346 MiB
[07/29/2022-11:06:09] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 234 MiB, GPU 346 MiB
[07/29/2022-11:06:09] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/29/2022-11:06:09] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
Traceback (most recent call last):
  File "/home/zhang/mmdeploy/tools/onnx2tensorrt.py", line 73, in <module>
    main()
  File "/home/zhang/mmdeploy/tools/onnx2tensorrt.py", line 58, in main
    from_onnx(
  File "/home/zhang/anaconda3/envs/mmdeploy/lib/python3.9/site-packages/mmdeploy/backend/tensorrt/utils.py", line 113, in from_onnx
    raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 423 (importTopK): UNSUPPORTED_NODE: Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
```

The ONNX model was generated with:

```shell
python tools/torch2onnx.py \
    ${MMDEPLOY_DIR}/configs/mmdet/instance-seg/instance-seg_onnxruntime_dynamic.py \
    ${MMDET_DIR}/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py \
    ${CHECKPOINT_DIR}/epoch_36.pth \
    /home/zhang/mmdeploy/0_0152.bmp \
    --work-dir ${WORK_DIR} \
    --device cuda:0 \
    --log-level INFO
```

Output of `python tools/check_env.py`:

```
2022-07-29 11:05:00,379 - mmdeploy - INFO - Environmental information
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this: 'git [...] -- [...]'
2022-07-29 11:05:01,198 - mmdeploy - INFO - sys.platform: linux
2022-07-29 11:05:01,199 - mmdeploy - INFO - Python: 3.9.12 (main, Jun 1 2022, 11:38:51) [GCC 7.5.0]
2022-07-29 11:05:01,199 - mmdeploy - INFO - CUDA available: True
2022-07-29 11:05:01,199 - mmdeploy - INFO - GPU 0: NVIDIA GeForce GTX 1050 Ti
2022-07-29 11:05:01,199 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda-10.2
2022-07-29 11:05:01,199 - mmdeploy - INFO - NVCC: Cuda compilation tools, release 10.2, V10.2.89
2022-07-29 11:05:01,199 - mmdeploy - INFO - GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - PyTorch: 1.8.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

2022-07-29 11:05:01,199 - mmdeploy - INFO - TorchVision: 0.9.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - OpenCV: 4.5.3
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV: 1.4.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV Compiler: GCC 7.3
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV CUDA Compiler: 10.2
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMDeploy: 0.5.0+HEAD

2022-07-29 11:05:01,199 - mmdeploy - INFO - Backend information
2022-07-29 11:05:01,523 - mmdeploy - INFO - onnxruntime: 1.8.1 ops_is_avaliable : True
2022-07-29 11:05:01,538 - mmdeploy - INFO - tensorrt: 8.2.3.0 ops_is_avaliable : True
2022-07-29 11:05:01,547 - mmdeploy - INFO - ncnn: None ops_is_avaliable : False
2022-07-29 11:05:01,547 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-07-29 11:05:01,548 - mmdeploy - INFO - openvino_is_avaliable: False

2022-07-29 11:05:01,548 - mmdeploy - INFO - Codebase information
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmdet: 2.25.0
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmseg: None
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmcls: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmocr: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmedit: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmdet3d: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmpose: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmrotate: None
```

— jiaqizhang123-stack, Jul 29 '22

The ONNX file should be generated with a TensorRT deploy config, not an ONNXRuntime config.
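Concretely, that means re-running the export with the same TensorRT deploy config that is later passed to `onnx2tensorrt.py`. A sketch based on the commands quoted above (paths and environment variables are from the question; adjust them to your setup):

```shell
python tools/torch2onnx.py \
    ${MMDEPLOY_DIR}/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py \
    ${MMDET_DIR}/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py \
    ${CHECKPOINT_DIR}/epoch_36.pth \
    /home/zhang/mmdeploy/0_0152.bmp \
    --work-dir ${WORK_DIR} \
    --device cuda:0 \
    --log-level INFO
```

With the TensorRT config, the export rewrites ops into forms the TensorRT parser accepts, which is why the config must match the backend you deploy to.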

— AllentDan, Jul 29 '22

Even if it works on another computer, that does not mean it is correct. As I said, if you want to run inference with ONNXRuntime, use the ORT configs; if you want to build TensorRT engines, use the TensorRT configs.

— AllentDan, Jul 29 '22