onnxconverter-common

Common utilities for ONNX converters

Results: 59 onnxconverter-common issues, sorted by recently updated

Fixes https://github.com/microsoft/onnxconverter-common/issues/265. `onnx` itself specifies `protobuf>=3.20.2` (https://github.com/onnx/onnx/blob/main/requirements.txt#L2C17-L2C17), while `onnxruntime` places no restriction on the protobuf version (https://github.com/microsoft/onnxruntime/blob/main/requirements.txt.in#L5).
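One way to see why the exact `protobuf==3.20.2` pin that this issue asks to relax causes trouble is to check the installed version against both constraints at runtime. A minimal sketch, assuming the third-party `packaging` library is installed; the version bounds come from the links above, not from this repository's code:

```python
# Sketch: verify the installed protobuf against the constraints cited above.
from importlib.metadata import version
from packaging.specifiers import SpecifierSet
from packaging.version import Version

installed = Version(version("protobuf"))

# onnx declares >=3.20.2 (see requirements.txt link above); the exact
# ==3.20.2 pin in onnxconverter-common is what issue #265 asks to relax.
for name, spec in [("onnx", SpecifierSet(">=3.20.2")),
                   ("onnxconverter-common", SpecifierSet("==3.20.2"))]:
    ok = installed in spec
    print(f"protobuf {installed} satisfies {name}'s '{spec}': {ok}")
```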

We are releasing ONNX 1.16.0. A release branch has been created (https://github.com/onnx/onnx/tree/rel-1.16.0). The planned release date is March 25, 2024. Release candidates are also available from TestPyPI: `pip install -i https://test.pypi.org/simple/...`

Hi, Polygraphy used `onnxmltools.utils.float16_converter.convert_float_to_float16` to convert a model to FP16. However, I noticed that it generated some orphan Cast nodes. I was wondering if anyone has encountered a similar issue or...
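One way to reproduce the observation is to run the converter and then look for `Cast` nodes whose outputs feed nothing downstream. A minimal sketch, assuming a small FP32 model at the placeholder path `model_fp32.onnx`:

```python
# Sketch: convert to FP16, then flag Cast nodes whose outputs are unused
# ("orphans"). The model path is a placeholder.
import onnx
from onnxconverter_common import float16

model = onnx.load("model_fp32.onnx")
fp16_model = float16.convert_float_to_float16(model, keep_io_types=True)

consumed = {i for node in fp16_model.graph.node for i in node.input}
graph_outputs = {o.name for o in fp16_model.graph.output}

orphans = [
    node.name
    for node in fp16_model.graph.node
    if node.op_type == "Cast"
    and not any(o in consumed or o in graph_outputs for o in node.output)
]
print("orphan Cast nodes:", orphans)
```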

Can we use the `convert_float_to_float16` function in the float16 module to convert large ONNX models like owlv2-L/14? I tried to convert them, but during onnxruntime inference I have some issue...
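For models that exceed protobuf's 2 GB limit, the float16 module also exposes a path-based entry point, `convert_float_to_float16_model_path`, and the result can be saved with external data. A sketch with placeholder file names:

```python
# Sketch: convert a large model from a file path so weights can be handled
# as external data. Both paths are placeholders.
import onnx
from onnxconverter_common import float16

fp16_model = float16.convert_float_to_float16_model_path(
    "owlv2_fp32.onnx", keep_io_types=True
)
onnx.save_model(
    fp16_model,
    "owlv2_fp16.onnx",
    save_as_external_data=True,   # keep tensors outside the .onnx file
    all_tensors_to_one_file=True,
)
```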

There is a [model](https://drive.google.com/file/d/1Rl2IjmulxcLnbdv1c_qh8gdIfvfhLOOy/view?usp=drive_link) from tensorflow2onnx; the FP32 model runs successfully. It was then converted with `float16_converter.convert_float_to_float16(onnx_model, keep_io_types=True)` to an [FP16 model](https://drive.google.com/file/d/1G3yny-nVjT4sXrmiNMfkeC9RrVgyu_R-/view?usp=drive_link). But the FP16 model can't create a session, error: `onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError]...
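When session creation fails on the converted model, a common workaround is to keep the offending operators in FP32 via the converter's `op_block_list` parameter. A sketch with placeholder paths; `"Resize"` is purely illustrative, and the real culprit depends on the op named in the ORT error:

```python
# Sketch: retry conversion while keeping selected ops in FP32.
# "Resize" is only an example; pick the op named in the ORT error message.
import onnx
import onnxruntime as ort
from onnxconverter_common import float16

model = onnx.load("model_fp32.onnx")  # placeholder path
fp16_model = float16.convert_float_to_float16(
    model, keep_io_types=True, op_block_list=["Resize"]
)
onnx.save(fp16_model, "model_fp16.onnx")

# Session creation is where the reported failure surfaces.
sess = ort.InferenceSession("model_fp16.onnx", providers=["CPUExecutionProvider"])
```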

Some users reported extra Cast nodes after running auto mixed precision conversion. See the related issue here: https://github.com/microsoft/onnxruntime/issues/19437. ORT 1.17 has changed the behavior of Cast node removal, and no...
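For reference, the auto mixed precision entry point being discussed converts nodes while validating outputs against a tolerance. A minimal sketch that runs the pass and then counts Cast nodes; the model path, input name, and shape are assumptions:

```python
# Sketch: run auto mixed precision conversion, then count Cast nodes to see
# how many the pass left behind. Input name/shape are placeholders.
import numpy as np
import onnx
from onnxconverter_common import auto_mixed_precision

model = onnx.load("model_fp32.onnx")
feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

mp_model = auto_mixed_precision.auto_convert_mixed_precision(
    model, feed, rtol=0.01, atol=0.001, keep_io_types=True
)
num_casts = sum(1 for n in mp_model.graph.node if n.op_type == "Cast")
print("Cast nodes in converted model:", num_casts)
```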

While installing tf2onnx==1.16.0: onnxconverter-common 1.14.0 requires protobuf==3.20.2, but you have protobuf 3.20.3 which is incompatible; tensorflow-intel 2.15.0 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,>=3.20.3, but you have protobuf 3.20.2 which is incompatible. Using Python...
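The two pins above cannot be satisfied simultaneously (`==3.20.2` vs `>=3.20.3`). A quick way to confirm what each installed package actually declares is to read its metadata; a sketch using only the standard library:

```python
# Sketch: print the protobuf requirement each installed package declares,
# to confirm the conflicting pins reported by pip.
from importlib.metadata import requires

for pkg in ("onnxconverter-common", "tensorflow-intel"):
    for req in requires(pkg) or []:
        if req.startswith("protobuf"):
            print(f"{pkg}: {req}")
```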