
ONNX inference issue

Open YenYunn opened this issue 1 year ago • 4 comments

When I run this code, I get the following warning:

import torch
import onnx
# load_from_checkpoint is the checkpoint-loading helper from the parseq repo
from strhub.models.utils import load_from_checkpoint

# Load the pretrained model and configure it for a static (non-autoregressive) export
parseq = load_from_checkpoint('pretrained=parseq').eval()
parseq.refine_iters = 0
parseq.decode_ar = False

# Export to ONNX using a dummy input of the model's expected size
image = torch.rand(1, 3, *parseq.hparams.img_size)
parseq.to_onnx('parseq.onnx', image, do_constant_folding=True, opset_version=14)

# Validate the exported graph
onnx_model = onnx.load('parseq.onnx')
onnx.checker.check_model(onnx_model, full_check=True)

[warning screenshot]

Then, when I use this ONNX model for inference, I encounter the following error:

[error screenshot]

and this is my code:

[code screenshot]
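
For reference, a minimal sketch of equivalent inference code (a reconstruction, since the original was shared only as a screenshot; the file name comes from the export step above, and the 32×128 input size is parseq's default, assumed here):

import numpy as np
import onnxruntime as ort

# Assumed file name, taken from the export step above
model_path = 'parseq.onnx'
session = ort.InferenceSession(model_path)

# Dummy input matching the export shape (batch, channels, height, width)
image = np.random.rand(1, 3, 32, 128).astype(np.float32)
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: image})[0]
print(logits.shape)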

YenYunn · Mar 11 '24 07:03

My versions:

  • onnx==1.15.0
  • onnxruntime-gpu==1.17.1
  • torch==2.1.1+cu118
  • pytorch-lightning==2.1.0

YenYunn · Mar 12 '24 01:03

Exactly the same issue with:

  • onnx==1.15.0
  • torch==2.2.1
  • pytorch-lightning==2.2.1

Stacktrace:

(.venv) [mantas@WS21 parseq]$ python onnx_runtime.py
Traceback (most recent call last):
  File "/home/mantas/Documents/Projects/parseq/onnx_runtime.py", line 4, in <module>
    session = ort.InferenceSession(model_path)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mantas/Documents/Projects/parseq/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/mantas/Documents/Projects/parseq/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from modified.onnx failed:Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(bool) and tensor(float) in node (/Where_23).

IceboxDev · Mar 19 '24 11:03
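
The error says that the two value inputs of a Where node were bound to different element types (tensor(bool) vs. tensor(float)); Where's signature requires both value branches to share one type T. One possible workaround, purely a sketch and not a fix confirmed in this thread, is to patch the exported graph and cast the bool-typed value input to float (assuming float is the intended result type):

import onnx
from onnx import TensorProto, helper, shape_inference

model = onnx.load('parseq.onnx')

# Run (non-strict) shape inference to recover per-tensor element types
inferred = shape_inference.infer_shapes(model)
elem_type = {}
for vi in list(inferred.graph.value_info) + list(inferred.graph.input) + list(inferred.graph.output):
    elem_type[vi.name] = vi.type.tensor_type.elem_type
for init in model.graph.initializer:
    elem_type[init.name] = init.data_type

# Where(condition, X, Y): X and Y must share one type T.
# Insert a Cast-to-float in front of whichever value input is bool.
new_nodes = []
casts = {}  # bool tensor name -> name of its float-cast output (reuse, avoid duplicates)
for node in model.graph.node:
    if node.op_type == 'Where':
        for idx in (1, 2):
            name = node.input[idx]
            if elem_type.get(name) == TensorProto.BOOL:
                if name not in casts:
                    cast_out = name + '_as_float'
                    new_nodes.append(helper.make_node(
                        'Cast', inputs=[name], outputs=[cast_out], to=TensorProto.FLOAT))
                    casts[name] = cast_out
                node.input[idx] = casts[name]
    new_nodes.append(node)

del model.graph.node[:]
model.graph.node.extend(new_nodes)

onnx.checker.check_model(model)
onnx.save(model, 'parseq_patched.onnx')

Alternatively, constant-folding tools such as onnx-simplifier are often suggested for this class of export artifact, though neither approach is confirmed as the fix here.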

@baudm Hello,

I am currently running into two issues and would appreciate your help. First, how can the error above be resolved? Second, after converting an older version of the project to ONNX, I noticed a significant discrepancy between the ONNX model's outputs and those of the pretrained PyTorch model. Do you have any suggestions for tracking this down?
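
One quick way to quantify such a discrepancy (a sketch assuming the export code above was used; parseq's forward pass returns logits):

import numpy as np
import onnxruntime as ort
import torch
from strhub.models.utils import load_from_checkpoint

# Same export configuration as above
parseq = load_from_checkpoint('pretrained=parseq').eval()
parseq.refine_iters = 0
parseq.decode_ar = False

# Run the same random input through both the PyTorch model and the ONNX model
image = torch.rand(1, 3, *parseq.hparams.img_size)
with torch.no_grad():
    torch_logits = parseq(image).numpy()

session = ort.InferenceSession('parseq.onnx')
input_name = session.get_inputs()[0].name
onnx_logits = session.run(None, {input_name: image.numpy()})[0]

print('max abs diff:', np.abs(torch_logits - onnx_logits).max())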

Thank you very much for taking the time to respond.

YenYunn · Mar 22 '24 03:03