tensorflow-onnx

Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX

Results: 269 tensorflow-onnx issues

# Ask a Question ### Question Hi! Thanks for the work. I was following the tutorial [here](https://github.com/onnx/tensorflow-onnx/blob/main/tutorials/mobiledet-tflite.ipynb) and successfully converted the fp32 tflite model to ONNX with opset 13. When I run...

question
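The conversion described in the question can be sketched as a single CLI call. This is a hypothetical invocation (the file name `model_fp32.tflite` is an assumption); tf2onnx's `--tflite` flag accepts a TFLite flatbuffer directly:

```shell
# Convert an fp32 tflite model to ONNX at opset 13 (file names assumed)
python -m tf2onnx.convert --tflite model_fp32.tflite --output model.onnx --opset 13
```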

**Describe the bug** The generated graph has consecutive dequantize and quantize nodes with the same scale and zero point. These are not needed. The weights can have one dequant to make...

bug
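The redundancy described above can be illustrated with a small graph-folding sketch. This is not the real tf2onnx or onnx API; nodes are modeled as plain dicts purely to show the pattern: a `DequantizeLinear` followed by a `QuantizeLinear` with identical scale and zero point is a no-op and can be removed, rewiring consumers to the pair's input.

```python
def fold_redundant_qdq(nodes):
    """Fold back-to-back DequantizeLinear -> QuantizeLinear pairs that
    share scale and zero point. Nodes are dicts with keys:
    op, input, output, scale, zero_point (a toy model, not onnx protos)."""
    folded, rewires, skip = [], {}, set()
    for i, n in enumerate(nodes):
        if i in skip:
            continue
        nxt = nodes[i + 1] if i + 1 < len(nodes) else None
        if (nxt
                and n["op"] == "DequantizeLinear"
                and nxt["op"] == "QuantizeLinear"
                and nxt["input"] == n["output"]
                and n["scale"] == nxt["scale"]
                and n["zero_point"] == nxt["zero_point"]):
            # The pair is a no-op: route consumers of the quantize output
            # straight to the dequantize input.
            rewires[nxt["output"]] = n["input"]
            skip.add(i + 1)
            continue
        folded.append(n)
    for n in folded:
        n["input"] = rewires.get(n["input"], n["input"])
    return folded
```

On a toy sequence `DequantizeLinear -> QuantizeLinear -> Conv` with matching quantization parameters, only the `Conv` survives, now reading the original quantized tensor.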

**Describe the bug** Currently the TF BFloat16 data type maps to Float16: https://github.com/onnx/tensorflow-onnx/blob/main/tf2onnx/tf_utils.py#L31 and this is seen in ONNX graphs generated by tf2onnx, where e.g. a cast to bfloat16 and a cast to...

bug
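Why this mapping is lossy: bfloat16 keeps float32's 8 exponent bits (range up to ~3.4e38), while float16 has only 5 (range up to ~6.5e4), so mapping bfloat16 to float16 silently shrinks the representable range. A minimal sketch of the mapping in question (hypothetical names; the real table lives in `tf2onnx/tf_utils.py`, and ONNX has had a `BFLOAT16` tensor type since opset 13):

```python
# Current (lossy) mapping reported in the issue, versus the requested fix.
TF_TO_ONNX_DTYPE = {
    "float32": "FLOAT",
    "bfloat16": "FLOAT16",  # loses bfloat16's float32-like exponent range
}
FIXED = {**TF_TO_ONNX_DTYPE, "bfloat16": "BFLOAT16"}  # what the issue asks for
```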

I attempted to convert mars-small128.pb [model_link](https://drive.google.com/drive/folders/1m2ebLHB2JThZC8vWGDYEKGsevLssSkjo) to saved_model.pb. I used `python3 -m tf2onnx.convert --graphdef mars-small128.pb --output saved_model.onnx --inputs "images:0" --outputs "features:0"`. Before that, I investigated mars-small128.pb using `tf.compat.v1`, and the...

question

Hello everyone, I've trained the ssd_mobilenet_v2_320x320 pretrained model with a custom dataset via the TensorFlow 2.x Object Detection API and exported it to a saved model using exporter_main_v2.py. Everything works fine when running inference using...

question

**Describe the bug** When converting a TF/Keras model trained with float64, tf2onnx warns about a lack of float64 support for Gemm in the runtime: ``` onnx_model, _ = tf2onnx.convert.from_keras(model,...

bug
pending on user response

**Describe the bug** This error occurs while converting a model: ``` 2024-08-24 23:42:22,711 - INFO - Using tensorflow=2.17.0, onnx=1.16.2, tf2onnx=1.16.1/f85e88 2024-08-24 23:42:22,711 - INFO - Using opset INFO: Created TensorFlow Lite...

bug

Einsum is already supported by most frameworks, such as [TensorRT](https://github.com/NVIDIA/TensorRT/issues/1617#issuecomment-992266722), [OpenVINO](https://docs.openvino.ai/2022.3/openvino_docs_ops_matrix_Einsum_7.html), and [ONNX](https://onnx.ai/onnx/operators/onnx__Einsum.html). Therefore, it is no longer necessary to implement Einsum by combining operators, which may cause BF16 precision to...

bug

**Describe the bug** After updating to TensorFlow 2.17 (from 2.15), we noticed that `tf2onnx.convert.from_keras` does not work anymore. See the examples below. **Urgency** Not particularly urgent for us (but maybe...

bug

# New Operator PartitionedCall ### Describe the operator This operator is needed to convert any ConvNeXt Keras implementation to ONNX ### Do you know if this operator can be constructed using existing...

unsupported ops