
Common utilities for ONNX converters

59 onnxconverter-common issues, sorted by recently updated

I found a typo in [onnxconverter_common/perfstats.py](https://github.com/microsoft/onnxconverter-common/blob/v1.9.0/onnxconverter_common/perfstats.py): `onnxconvert_common` should be `onnxconverter_common`. So I fixed it.

Adding an `input_list` argument for keeping IO types, because only some of the inputs need to keep their types, such as 'img'. A usage sketch follows below.
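A minimal sketch of the proposed usage, assuming `keep_io_types` accepts a list of tensor names as this PR adds; the model path and the 'img' input name are hypothetical:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # hypothetical path

# Keep only the 'img' input/output in float32; all other graph IO is
# converted to float16 along with the rest of the graph.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=["img"])
onnx.save(model_fp16, "model_fp16.onnx")
```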

Since `StrictVersion` will be deprecated and cannot parse versions such as "1.12.0rc5", this changes the code to use `verlib.NormalizedVersion`.
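A small sketch of the underlying problem; `packaging.version` is shown here as a widely used PEP 440 parser, while the change itself adopts `verlib.NormalizedVersion`:

```python
from distutils.version import StrictVersion  # deprecated along with distutils (PEP 632)

try:
    StrictVersion("1.12.0rc5")  # StrictVersion only understands 'a'/'b' pre-release tags
except ValueError as e:
    print(e)  # invalid version number '1.12.0rc5'

# PEP 440-aware parsers handle release candidates correctly.
from packaging.version import Version

assert Version("1.12.0rc5") < Version("1.12.0")
```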

Some issues observed when converting a model to float16: - When converting a model with `keep_io_types` set to `True`, `onnx.checker` complains: ``` onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted,...
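A sketch of how the reported error can be reproduced, assuming a hypothetical model that triggers it:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # hypothetical model exhibiting the issue
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

# On affected versions the Cast nodes inserted at the IO boundary leave the
# graph unsorted, so validation fails with the ValidationError quoted above.
onnx.checker.check_model(model_fp16)
```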

A warning should be thrown when FP32 values outside the FP16 representable range are clamped during conversion to FP16, to notify the user of potential unwanted behavior. Currently the...
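One way such a warning could look; the helper below is a hypothetical sketch, not part of the library (the converter's own clamping is governed by its `min_positive_val`/`max_finite_val` arguments):

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max  # 65504.0

def warn_if_clamped(name: str, arr: np.ndarray) -> None:
    """Hypothetical check: warn when float32 values exceed the float16 range."""
    if np.any(np.abs(arr) > FP16_MAX):
        print(f"warning: tensor '{name}' has values outside "
              f"[-{FP16_MAX}, {FP16_MAX}] and will be clamped in float16")

warn_if_clamped("fc.weight", np.array([1e5, -2.0], dtype=np.float32))
```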

The float16.py conversion has a blacklist option, but it does not seem to apply the blacklist correctly and still tries to run Resize in float16, leading to errors.
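A sketch of blocking an op type, assuming the parameter name `op_block_list` used by current releases (the model path is hypothetical):

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # hypothetical path

# Request that Resize nodes stay in float32; the issue reports that this
# blocking did not take effect and Resize still ran in float16.
model_fp16 = float16.convert_float_to_float16(model, op_block_list=["Resize"])
```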

Hi, for some reason the checksum of the [source tarball of v1.9.0](https://github.com/microsoft/onnxconverter-common/archive/refs/tags/v1.9.0.tar.gz) changed recently (i.e. between December 13 and 15). Was it republished? Thanks

**Describe the bug** I tried to use mixed precision on the inception_v2.onnx and vgg19.onnx models on a GPU machine. At first, I used `convert_float_to_float16_model_path` with `keep_io_types=False`, but inference became even slower....
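A sketch reproducing the conversion described in the report (the model path is hypothetical). Note that with `keep_io_types=False` the graph inputs and outputs become float16 as well, so float16 tensors must be fed at inference time:

```python
import onnx
from onnxconverter_common import float16

model_fp16 = float16.convert_float_to_float16_model_path(
    "inception_v2.onnx", keep_io_types=False  # hypothetical path
)
onnx.save(model_fp16, "inception_v2_fp16.onnx")
```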