onnxconverter-common
Common utilities for ONNX converters
Hi, I am using onnxmltools to convert an fp32 model to fp16. The original fp32 model was converted from a PyTorch model with opset 12. The fp32 model works well on input...
I have a .onnx file for a pre-trained model and am trying to convert it from fp32 to fp16. I used these lines of code to do the conversion:...
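For reference, a minimal sketch of the fp32-to-fp16 conversion these reports describe, using `onnxconverter_common.float16` directly. The file names are placeholders, and `keep_io_types` assumes a reasonably recent release of the package:

```Python
# Minimal fp32 -> fp16 conversion sketch (paths are placeholders).
import onnx
from onnxconverter_common import float16

# Load the fp32 model exported earlier (e.g. from PyTorch).
model_fp32 = onnx.load("model_fp32.onnx")

# Convert float tensors and initializers to float16; keep_io_types keeps the
# graph inputs/outputs as float32 so existing callers do not have to change.
model_fp16 = float16.convert_float_to_float16(model_fp32, keep_io_types=True)

onnx.save(model_fp16, "model_fp16.onnx")
```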
Hi, we want to publish [hummingbird ml](https://github.com/microsoft/hummingbird/issues/314#issuecomment-707812512) to Anaconda. Since it depends on this package, we need this package to also be on Anaconda. Are you planning to publish this...
Currently it looks like only a wheel is published to PyPI for this package. It would be useful to also publish the source in tar.gz format.
Steps: 1. Converted this PyTorch model to ONNX FP32 (https://github.com/MCG-NKU/SOD100K/tree/master/CSNet). 2. Tried to convert the FP32 ONNX model to FP16 using the latest onnxmltools available through pip; the output...
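A hedged sketch of that two-step flow (PyTorch export, then fp16 conversion). The model and input shape below are stand-ins, not the CSNet network from the report:

```Python
# Sketch of the flow: PyTorch -> ONNX FP32, then FP32 -> FP16.
# The Conv2d model and 1x3x224x224 input are placeholders for the real network.
import torch
import onnx
from onnxconverter_common import float16

model = torch.nn.Conv2d(3, 8, kernel_size=3)   # stand-in for the real model
dummy = torch.randn(1, 3, 224, 224)

# Step 1: export to ONNX at fp32 (opset 12, as in one of the reports above).
torch.onnx.export(model, dummy, "model_fp32.onnx", opset_version=12)

# Step 2: convert the exported graph to fp16.
model_fp16 = float16.convert_float_to_float16(onnx.load("model_fp32.onnx"))
onnx.save(model_fp16, "model_fp16.onnx")
```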
The [resize op] supports specifying the target shape via the optional `sizes` input. The [current implementation] does not seem to support this argument. Please add this functionality. [resize op]:...
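To illustrate the case the converter reportedly misses, here is a small hand-built graph (names and shapes are illustrative) whose Resize node specifies the target shape through the `sizes` input rather than `scales`:

```Python
# Illustrative graph with a Resize node that uses the optional `sizes` input
# (explicit target shape) instead of `scales`.
import numpy as np
import onnx
from onnx import TensorProto, helper, numpy_helper

sizes = numpy_helper.from_array(
    np.array([1, 3, 128, 128], dtype=np.int64), name="sizes")

resize = helper.make_node(
    "Resize",
    inputs=["X", "", "", "sizes"],   # roi and scales left empty, sizes supplied
    outputs=["Y"],
    mode="linear",
)

graph = helper.make_graph(
    [resize],
    "resize_with_sizes",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, 64, 64])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3, 128, 128])],
    initializer=[sizes],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
```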
Tests fail with onnxruntime 1.12.1 built from source. It seems InferenceSession needs to be instantiated with providers.
```
======================================================================
ERROR: test_auto_mixed_precision (test_auto_mixed_precision.AutoFloat16Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/build/source/tests/test_auto_mixed_precision.py",...
```
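A sketch of the change the failing tests point to: newer onnxruntime builds expect an explicit providers list when creating an InferenceSession. The model path and input shape below are placeholders:

```Python
# Create the session with an explicit providers list (required by newer onnxruntime).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model_fp16.onnx", providers=["CPUExecutionProvider"])

# Feed a dummy input; the shape here is a placeholder for the model's real input.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
```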
Hi. I am trying to convert a SuperPoint float32 model to float16 using the following code.
```Python
import onnx
from onnxconverter_common.float16 import convert_float_to_float16

if __name__ == "__main__":
    WIDTH = 512
    HEIGHT = 256
    MAX_KEY =...
```
When converting a model from 32-bit to 16-bit, the attributes of RandomUniform nodes do not change.
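RandomUniform (and RandomNormal) carry a `dtype` attribute that fixes the output element type, so the float16 pass would also have to rewrite that attribute. A hedged post-conversion workaround sketch; whether patching it is safe depends on how the outputs are consumed downstream, and the file path is a placeholder:

```Python
# Workaround sketch: after convert_float_to_float16, rewrite the dtype attribute
# of RandomUniform/RandomNormal nodes that still emit float32.
import onnx
from onnx import TensorProto

model = onnx.load("model_fp16.onnx")  # placeholder path
for node in model.graph.node:
    if node.op_type in ("RandomUniform", "RandomNormal"):
        for attr in node.attribute:
            if attr.name == "dtype" and attr.i == TensorProto.FLOAT:
                attr.i = TensorProto.FLOAT16
onnx.save(model, "model_fp16_patched.onnx")
```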