
BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

Open rtm-1dyakonov opened this issue 3 years ago • 22 comments

Right now I am trying to port the silero_vad model from ONNX format to TensorFlow with onnx_tf.

However, after calling .export_graph, the following error occurs:

BackendIsNotSupposedToImplementIt: in user code:

File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\backend_tf_module.py", line 99, in __call__  *
    output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\backend.py", line 347, in _onnx_node_to_tensorflow_op  *
    return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
File "c:\users\rtm51\downloads\onnx-tensorflow\onnx_tf\handlers\handler.py", line 61, in handle  *
    raise BackendIsNotSupposedToImplementIt("{} version {} is not implemented.".format(node.op_type, cls.SINCE_VERSION))

BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

You can get the model here

rtm-1dyakonov avatar Dec 29 '21 14:12 rtm-1dyakonov

Hello,

I am also getting the same error while converting a YOLOv5 ONNX model to TensorFlow format: BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.

Can you please suggest how I can fix this error?

Thank you

anshudaur avatar Jan 03 '22 13:01 anshudaur

Unfortunately the spec change for Unsqueeze version 13 has not been implemented. Contributions are certainly welcome! In the meantime, you could possibly export the model at ONNX opset 12, and the conversion should work.

chinhuang007 avatar Jan 07 '22 00:01 chinhuang007

Do you mean that I first have to export my model to ONNX format with opset 12 and then use the current version of onnx-tensorflow?

rtm-1dyakonov avatar Jan 10 '22 11:01 rtm-1dyakonov

Do you mean that I first have to export my model to ONNX format with opset 12 and then use the current version of onnx-tensorflow?

Yes.

anshudaur avatar Jan 10 '22 11:01 anshudaur

I can't export my PyTorch model except with opset_version=14. And when I try export_graph I get the same error: "BackendIsNotSupposedToImplementIt: Squeeze version 13 is not implemented."

Currently, I'm on onnx-tf 1.9.0, onnx 1.10.2, tensorflow 2.7.0.

So I won't be able to use ONNX opset 12. Is there any workaround for me?

Aayush2007 avatar Jan 21 '22 15:01 Aayush2007

I also have the same issue with yolov3.onnx model (from https://github.com/ultralytics/yolov3)

Valdiolus avatar Jan 27 '22 14:01 Valdiolus

I have the same issue with fairseq transformer translate model.

EuphoriaYan avatar Mar 08 '22 10:03 EuphoriaYan

When I try to export a PyTorch translation model to ONNX (opset_version=12), an error is raised: RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Support for this operator was added in version 14, try exporting with this version. So I have to export the translation model with opset_version=14... However, when I then try to convert the ONNX model into a TensorFlow pb checkpoint, the same error as the author's occurs.

EuphoriaYan avatar Mar 09 '22 01:03 EuphoriaYan

I experienced the same issue. The model I'm trying to convert to TF must use opset 13 when converted from PT -> ONNX. Thanks so much to @krishnanNuance for working on it!! See here adding squeeze/unsqueeze for opset 13 👍

ghost avatar Apr 09 '22 11:04 ghost

I think you can close this issue @chinhuang007 😊

ghost avatar Apr 11 '22 14:04 ghost

Hello, I am facing the same issue. Originally I had an onnx model with opset 14, then I downgraded it to both 12 and 13 but I keep getting the same error. What do I have to do to add squeeze/unsqueeze to the opsets?

MarcoEsposito890 avatar Feb 17 '23 08:02 MarcoEsposito890

Hello, I am facing the same issue. Originally I had an onnx model with opset 14, then I downgraded it to both 12 and 13 but I keep getting the same error. What do I have to do to add squeeze/unsqueeze to the opsets?

You need to add the changes from this branch: https://github.com/onnx/onnx-tensorflow/pull/1022
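If it helps, one way to try those changes before they land in a release is to check out the PR ref and install from source (a sketch, untested here):

```shell
# Fetch the PR branch and install onnx-tensorflow from source
git clone https://github.com/onnx/onnx-tensorflow.git
cd onnx-tensorflow
git fetch origin pull/1022/head:pr-1022   # GitHub exposes PRs as refs
git checkout pr-1022
pip install -e .
```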

krishnanNuance avatar Feb 17 '23 08:02 krishnanNuance

When I try to export a PyTorch translation model to ONNX (opset_version=12), an error is raised: RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Support for this operator was added in version 14, try exporting with this version. So I have to export the translation model with opset_version=14... However, when I then try to convert the ONNX model into a TensorFlow pb checkpoint, the same error as the author's occurs.

I was in a similar situation. I had to use opset 16 for the PT -> ONNX conversion because the "grid_sample" node is only supported in opset 16. How did you solve the problem?

leeqiaogithub avatar Feb 21 '23 03:02 leeqiaogithub

I can't export my PyTorch model except with opset_version=14. And when I try export_graph I get the same error: "BackendIsNotSupposedToImplementIt: Squeeze version 13 is not implemented."

Currently, I'm on onnx-tf 1.9.0, onnx 1.10.2, tensorflow 2.7.0.

So I won't be able to use ONNX opset 12. Is there any workaround for me?

I was in a similar situation. I had to use opset 16 for the PT -> ONNX conversion because the "grid_sample" node is only supported in opset 16. How did you solve the problem?

leeqiaogithub avatar Feb 21 '23 03:02 leeqiaogithub

Currently, I'm on onnx-tf 1.10.0, onnx 1.13.0, tensorflow 2.6.0.

leeqiaogithub avatar Feb 21 '23 03:02 leeqiaogithub

What should we do to solve this issue? https://stackoverflow.com/questions/75969364/backendisnotsupposedtoimplementit-error-converting-onnx-to-tensorflow "BackendIsNotSupposedToImplementIt Error: Converting ONNX to Tensorflow"

I could see a comment in the link below saying this is fixed on the onnx developer side. I'm running in Google Colab, but the issue still persists. I don't think I can build onnx_tf from source in Colab, as opposed to running 'pip install onnx_tf'. https://github.com/onnx/onnx-tensorflow/pull/1022
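For what it's worth, installing straight from the repository with pip usually avoids a manual build in Colab (a sketch, untested here; this installs the current master branch, which should include the merged fix):

```shell
# Install onnx-tensorflow from the GitHub master branch instead of PyPI
pip install git+https://github.com/onnx/onnx-tensorflow.git
```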

johnkennyy avatar Apr 14 '23 01:04 johnkennyy

I experienced the same issue. The model I'm trying to convert to TF must use opset 13 when converted from PT -> ONNX. Thanks so much to @krishnanNuance for working on it!! See here adding squeeze/unsqueeze for opset 13 👍

Unfortunately the issue persists, even though I'm already using release v1.10.0, where pull request #1022 should have been merged.

jianyuzzz avatar Jan 10 '24 14:01 jianyuzzz

Same here: using onnx-tf 1.10.0 but still seeing this.

summerisc avatar Feb 26 '24 09:02 summerisc

I was in a similar situation. I had to use opset 16 for the pt->onnx conversion because the "grid_sample" node is only supported in opset 16. Then the error BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented. was raised. How can I solve this?

Magnificent-01 avatar Mar 28 '24 08:03 Magnificent-01

Still facing the same issue using onnx-tf 1.10.0. I tried opsets 13, 14, and 16, but that does not solve the problem. I could not try opset 12 or lower because I get this error: RuntimeError: D:\a\onnx\onnx\onnx\onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion false failed: No Adapter To Version $12 for Relu. What can I do?

xedrion avatar May 02 '24 11:05 xedrion