
[Bug] When converting to ONNX with mmdeploy and running inference: Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(int64) and tensor(float) in node (/Where_11).

Open amxl56 opened this issue 11 months ago • 2 comments

Checklist

  • [X] 1. I have searched related issues but cannot get the expected help.
  • [X] 2. I have read the FAQ documentation but cannot get the expected help.
  • [X] 3. The bug has not been fixed in the latest version.

Describe the bug

The conversion and subsequent inference fail around node `/Where_11`:

1. During ONNX export:

   ```
   UserWarning: The exported ONNX model failed ONNX shape inference. The model will not be executable by the ONNX Runtime. If this is unintended and you believe there is a bug, please report an issue at https://github.com/pytorch/pytorch/issues. Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:Where, node name: /Where_11): Y has inconsistent type tensor(float) (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\jit\serialization\export.cpp:1421.)
   _C._check_onnx_proto(proto)
   ```

2. Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(int64) and tensor(float) in node (/Where_11).

3. mmdeploy SDK logs:

   ```
   [2025-01-12 16:49:55.372] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "mmdeploy_model/faster-rcnn/"
   [2025-01-12 16:49:55.614] [mmdeploy] [error] [ort_net.cpp:205] unhandled exception when creating ORTNet: Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(int64) and tensor(float) in node (/Where_11).
   [2025-01-12 16:49:55.614] [mmdeploy] [error] [net_module.cpp:54] Failed to create Net backend: onnxruntime, config: { "context": { "device": "", "model": "", "stream": "" }, "input": [ "prep_output" ], "input_map": { "img": "input" }, "is_batched": true, "module": "Net", "name": "fasterrcnn", "output": [ "infer_output" ], "output_map": {}, "type": "Task" }
   [2025-01-12 16:49:55.614] [mmdeploy] [error] [task.cpp:99] error parsing config: { "context": { ... ], "output_map": {}, "type": "Task" }
   ```

4. onnxruntime session creation:

   ```
   sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mmdeploy_model/faster-rcnn\end2end.onnx failed:Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(int64) and tensor(float) in node (/Where_11).
   01/12 16:51:10 - mmengine - ERROR - mmdeploy/tools/deploy.py - create_process - 82 - visualize onnxruntime model failed.
   ```

[ONNXRuntimeError] : 1 : FAIL : Load model from mmdeploy_model/faster-rcnn/end2end.onnx failed:Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(int64) and tensor(float) in node (/Where_11)

Reproduction

```python
from mmdeploy.apis import inference_model

model_cfg = 'mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'
deploy_cfg = 'mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py'
backend_files = ['mmdeploy_model/faster-rcnn/end2end.onnx']
img = 'mmdetection/demo/demo.jpg'
device = 'cpu'
result = inference_model(model_cfg, deploy_cfg, backend_files, img, device)
```

```shell
!python mmdeploy/demo/python/object_detection.py cpu mmdeploy_model/faster-rcnn/ mmdetection/demo/demo.jpg

!python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/faster-rcnn \
    --device cpu \
    --dump-info
```

Environment

mmcv==2.1.0
torch==2.1.0
onnxruntime==1.15.1

Error traceback

No response

amxl56 · Jan 12 '25 08:01

I also have this problem and it hasn't been solved yet

dutian312 · Jul 10 '25 09:07

This issue has been resolved. Testing showed it is a torch version problem: upgrading to torch 2.4 fixes it, while 2.1 and 2.2 both exhibit the bug.
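Given that fix, a simple pre-flight version check before exporting can catch the bad configuration early. The helper below is hypothetical (not part of mmdeploy), and the `(2, 4)` threshold comes only from the report above:

```python
# Hypothetical guard: refuse to export when the installed torch predates 2.4,
# the first version reported in this thread to produce a well-typed Where node.
def torch_version_ok(version_string: str, minimum=(2, 4)) -> bool:
    """Compare the major.minor part of a version string against `minimum`."""
    release = version_string.split("+")[0]           # drop local tags like "+cu118"
    major, minor = (int(p) for p in release.split(".")[:2])
    return (major, minor) >= minimum

print(torch_version_ok("2.1.0"))  # False: the affected version from this report
print(torch_version_ok("2.4.0"))  # True: reported as fixed
```

In practice you would call it as `torch_version_ok(torch.__version__)` right before invoking `tools/deploy.py` or `torch.onnx.export`.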

dutian312 · Jul 11 '25 02:07