mmdeploy
RuntimeError: Failed to parse onnx, In node 423 (importTopK): UNSUPPORTED_NODE: Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
Hello, when I run "python tools/onnx2tensorrt.py /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py /home/zhang/onnx/end2end.onnx /home/zhang/onnx-trt --device-id 0 --log-level INFO",
the error is:
2022-07-29 11:06:09,365 - mmdeploy - INFO - onnx2tensorrt:
onnx_path: /home/zhang/onnx/end2end.onnx
deploy_cfg: /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py
2022-07-29 11:06:09,410 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/zhang/anaconda3/envs/mmdeploy/lib/python3.9/site-packages/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[07/29/2022-11:06:09] [TRT] [I] [MemUsageChange] Init CUDA: CPU +152, GPU +0, now: CPU 214, GPU 346 (MiB)
[07/29/2022-11:06:09] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 214 MiB, GPU 346 MiB
[07/29/2022-11:06:09] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 234 MiB, GPU 346 MiB
[07/29/2022-11:06:09] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/29/2022-11:06:09] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
Traceback (most recent call last):
File "/home/zhang/mmdeploy/tools/onnx2tensorrt.py", line 73, in
The ONNX file was generated by:
python tools/torch2onnx.py \
  ${MMDEPLOY_DIR}/configs/mmdet/instance-seg/instance-seg_onnxruntime_dynamic.py \
  ${MMDET_DIR}/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py \
  ${CHECKPOINT_DIR}/epoch_36.pth \
  /home/zhang/mmdeploy/0_0152.bmp \
  --work-dir ${WORK_DIR} \
  --device cuda:0 \
  --log-level INFO
(mmdeploy) zhang@zhang-QiTianM540-A739:~/mmdeploy$ python tools/check_env.py
2022-07-29 11:05:00,379 - mmdeploy - INFO -
2022-07-29 11:05:00,379 - mmdeploy - INFO - Environmental information
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
2022-07-29 11:05:01,199 - mmdeploy - INFO - TorchVision: 0.9.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - OpenCV: 4.5.3
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV: 1.4.0
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV Compiler: GCC 7.3
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMCV CUDA Compiler: 10.2
2022-07-29 11:05:01,199 - mmdeploy - INFO - MMDeploy: 0.5.0+HEAD
2022-07-29 11:05:01,199 - mmdeploy - INFO -
2022-07-29 11:05:01,199 - mmdeploy - INFO - Backend information
2022-07-29 11:05:01,523 - mmdeploy - INFO - onnxruntime: 1.8.1 ops_is_avaliable : True
2022-07-29 11:05:01,538 - mmdeploy - INFO - tensorrt: 8.2.3.0 ops_is_avaliable : True
2022-07-29 11:05:01,547 - mmdeploy - INFO - ncnn: None ops_is_avaliable : False
2022-07-29 11:05:01,547 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-07-29 11:05:01,548 - mmdeploy - INFO - openvino_is_avaliable: False
2022-07-29 11:05:01,548 - mmdeploy - INFO -
2022-07-29 11:05:01,548 - mmdeploy - INFO - Codebase information
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmdet: 2.25.0
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmseg: None
2022-07-29 11:05:01,548 - mmdeploy - INFO - mmcls: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmocr: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmedit: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmdet3d: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmpose: None
2022-07-29 11:05:01,549 - mmdeploy - INFO - mmrotate: None
The ONNX file should be generated with the TensorRT configs, not the ONNXRuntime configs.
Even if the model runs fine on another computer, that does not mean the export is right. As I said, if you want to run inference with ONNXRuntime, use the ORT configs; if you want to build TensorRT engines, use the TensorRT configs.
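For example, something like the following should work (a sketch using the paths from this thread; adjust ${MMDET_DIR}, ${CHECKPOINT_DIR} and ${WORK_DIR} to your setup). Re-export the ONNX with the same TensorRT config you later pass to onnx2tensorrt.py, then convert it:

python tools/torch2onnx.py \
  /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py \
  ${MMDET_DIR}/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py \
  ${CHECKPOINT_DIR}/epoch_36.pth \
  /home/zhang/mmdeploy/0_0152.bmp \
  --work-dir ${WORK_DIR} \
  --device cuda:0 \
  --log-level INFO

python tools/onnx2tensorrt.py \
  /home/zhang/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py \
  ${WORK_DIR}/end2end.onnx \
  /home/zhang/onnx-trt \
  --device-id 0 \
  --log-level INFO

The deploy config passed to torch2onnx.py should match the backend you convert to, since the exported graph is rewritten per backend; an ONNX file exported with the ONNXRuntime config can contain patterns (such as the TopK node in your error) that the TensorRT parser rejects.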