
While Converting from PyTorch to ONNX Type Error: Type 'tensor(bool)' of input parameter (/Equal_27_output_0) of operator (CumSum) in node (/CumSum) is invalid.


Hi @qbxlvnf11, I wanted to convert the PARSeq model to ONNX with the changes below.

This was added in the main function:

    resnet18 = torch.hub.load('baudm/parseq', 'parseq', pretrained=True)

This was also added; otherwise the script throws an input type error:

    resnet18.eval().to('cuda')

    providers = [
        ('CUDAExecutionProvider', {
            'device_id': 0,
            'arena_extend_strategy': 'kNextPowerOfTwo',
            'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
            'cudnn_conv_algo_search': 'EXHAUSTIVE',
            'do_copy_in_default_stream': True,
        }),
        'CPUExecutionProvider',
    ]

    sess = rt.InferenceSession(args.output_path, providers=providers)
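As a side note, a quick sanity check of the exported model can look like the sketch below (assuming the session loads at all; the random input is only a placeholder, and the input name and shape are read from the session rather than hard-coded):

```python
# Sanity-check sketch: load the exported model and run a random input through it.
# Uses the `providers` list defined above; nothing here is specific to PARSeq.
import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession('onnx_output_explicit.onnx', providers=providers)

inp = sess.get_inputs()[0]
print(inp.name, inp.shape, inp.type)

# Replace any dynamic (non-integer) dimensions with 1 for the dummy run;
# assumes a float32 image input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```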

After adding the above to the code, we run it like this:

    python convert_pytorch_to_onnx.py --dynamic_axes True --sample_image_path roi_1.jpg --output_path onnx_output_explicit.onnx --opset_version 14
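As far as I understand, these flags roughly correspond to an export call like the sketch below; the input/output names, the 32x128 input size, and the dynamic-axes mapping are my assumptions for illustration, not the exact code of convert_pytorch_to_onnx.py:

```python
# Rough sketch of the export step implied by --opset_version 14 and --dynamic_axes True.
# Names, shapes, and axis labels are illustrative assumptions.
import torch

model = torch.hub.load('baudm/parseq', 'parseq', pretrained=True)
model.eval().to('cuda')

# Assumed 32x128 input size (PARSeq's default); the batch dimension is left dynamic.
dummy_input = torch.randn(1, 3, 32, 128, device='cuda')

torch.onnx.export(
    model,
    dummy_input,
    'onnx_output_explicit.onnx',
    opset_version=14,
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'},
                  'output': {0: 'batch_size'}},
)
```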

But we are getting the following error:

    2023-01-02 11:51:23.380117232 [W:onnxruntime:, graph.cc:1231 Graph] Initializer onnx::MatMul_5865 appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.

    Traceback (most recent call last):
      File "convert_pytorch_to_onnx.py", line 176, in <module>
        sess = rt.InferenceSession(args.output_path, providers=providers)
      File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 384, in _create_inference_session
        sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from onnx_output_explicit.onnx failed:This is an invalid model. Type Error: Type 'tensor(bool)' of input parameter (/Equal_27_output_0) of operator (CumSum) in node (/CumSum) is invalid.
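I am not sure, but the last line seems to point at a cumsum applied directly to the result of a bool comparison somewhere in the model: ONNX's CumSum operator does not accept tensor(bool) inputs, even though PyTorch itself allows cumsum on bool tensors. Below is a minimal sketch of that pattern and a possible cast workaround; this is only my guess at the cause, not code taken from PARSeq or a confirmed fix:

```python
# Minimal repro sketch of a CumSum fed by a bool tensor (the Equal -> CumSum
# pattern the error message points at), plus a cast that avoids it.
# This is a guess at the cause, not code taken from PARSeq.
import torch

class Repro(torch.nn.Module):
    def forward(self, tokens):
        eos_id = 0  # illustrative token id
        # The line below may export a CumSum node fed by a bool tensor,
        # which ONNX Runtime rejects when loading the model:
        # mask = (tokens == eos_id).cumsum(-1) > 0
        # Casting to int before cumsum keeps the exported graph valid:
        mask = (tokens == eos_id).int().cumsum(-1) > 0
        return mask

torch.onnx.export(Repro(), torch.zeros(1, 5, dtype=torch.long),
                  'cumsum_repro.onnx', opset_version=14)
```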

I don't know what the exact problem is. Could you please look into it and share working code? That would be very helpful. Thanks and regards, and I hope you can solve this as soon as possible.

naveenkumarkr723 · Jan 03 '23