tensorflow-onnx
Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
Hello, this is a feature request for an `--mp` ("model precision") parameter that controls the precision of the output model. For example: ``` python3 -m tf2onnx.convert --saved-model tensorflowModel/ --opset 14...
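No such `--mp` flag exists in tf2onnx today; a common post-conversion workaround is casting the model's float32 initializers to float16. The NumPy sketch below (the function name `cast_weights_fp16` is hypothetical, purely illustrative) shows the lossy cast such a flag would apply to each weight tensor:

```python
import numpy as np

def cast_weights_fp16(weights):
    """Illustrative sketch: cast float32 weight arrays to float16.

    This mimics what a hypothetical --mp fp16 option would do to each
    initializer tensor. The cast is lossy: values below ~6e-8 underflow
    to zero and large values can overflow to inf.
    """
    return [w.astype(np.float16) for w in weights]

w32 = [np.array([1.0, 3.1415927, 1e-8], dtype=np.float32)]
w16 = cast_weights_fp16(w32)
print(w16[0].dtype)  # float16; note the tiny third value loses precision
```

In practice this kind of cast is usually done on the serialized ONNX graph (e.g. with a float16 conversion utility) rather than on raw arrays, but the numerical effect on the weights is the same.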
**Debugging advice** Converting a TF model to ONNX on s390x succeeds, but the resulting ONNX file contains a large constant, 268632064, in a Reshape operator. python3 -m tf2onnx.convert --opset 15 --fold_const --saved-model...
**Describe the bug** Not sure if this is expected behavior: I was converting a DenseNet-like model trained with Keras and noticed that it takes tf2onnx over 18 minutes to...
**Describe the bug** Converting a TensorFlow model that accumulates data via a `tf.TensorArray` in a nested for-loop *fails*. **Urgency** None **System information** - OS Platform and Distribution: Ubuntu 20.04.4 LTS...
## Support for Conv1D NWC kernel (Keras default) I have not found a way to avoid a transpose layer before and after the Conv1D operator while supporting the NWC layout...
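The transposes arise because ONNX `Conv` expects channels-first input (NCW for the 1-D case) while Keras `Conv1D` defaults to channels-last (NWC). A NumPy sketch of the layout shuffle the converter wraps around the op, using illustrative shapes:

```python
import numpy as np

# Keras Conv1D default layout: (batch, width, channels) == NWC.
x_nwc = np.random.rand(2, 10, 3).astype(np.float32)

# ONNX Conv expects channels-first (batch, channels, width) == NCW,
# so a Transpose is inserted before the Conv...
x_ncw = np.transpose(x_nwc, (0, 2, 1))   # NWC -> NCW

# ...and another after it, to restore the original layout.
x_back = np.transpose(x_ncw, (0, 2, 1))  # NCW -> NWC

print(x_ncw.shape)   # (2, 3, 10)
print(np.array_equal(x_back, x_nwc))  # True: the pair is a no-op round trip
```

Since each transpose pair is a pure layout round trip, eliminating them requires either an ONNX operator variant that accepts NWC directly (which `Conv` does not define) or a graph optimizer that pushes the transposes through adjacent ops.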
**Describe the bug** A simple model with a TextVectorization layer cannot be converted to ONNX. **System information** OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04 Tf2onnx version: 1.9.3...
As seen in the last few posts of #1434, the fix made in #1435 doesn't work for Conv3D. Node group numbers are incorrect because the 5D data formats aren't accounted...
**Describe the bug** Receiving the message "ValueError: Opset 10 is required for quantization. Consider using the --dequantize flag or --opset 10." after trying `python -m tf2onnx.convert --opset 9 --dequantize --tflite...
Is there a way to force the conversion of a `Dense` layer to MatMul + Add instead of Gemm? I'm converting the ONNX model to DLC, and Gemm is causing problems.
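For context: with its default attributes (`alpha = beta = 1`, no transposes), ONNX `Gemm(A, B, C)` computes `A @ B + C`, so splitting it into a MatMul node followed by an Add node is numerically equivalent. A NumPy sketch of that equivalence (shapes are illustrative):

```python
import numpy as np

A = np.random.rand(4, 8).astype(np.float32)   # input activations
B = np.random.rand(8, 5).astype(np.float32)   # Dense kernel
C = np.random.rand(5).astype(np.float32)      # bias, broadcast over rows

# What ONNX Gemm computes with default alpha=1, beta=1, transA=transB=0.
gemm_out = A @ B + C

# The equivalent MatMul + Add pair a rewriter would emit instead.
matmul_add = np.add(np.matmul(A, B), C)

print(np.allclose(gemm_out, matmul_add))  # True
```

tf2onnx does not, to my knowledge, expose a flag for this, so a practical route is a post-processing pass over the exported graph that replaces each default-attribute Gemm node with this MatMul + Add pair.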
**Describe the bug** When processing a graph with an unspecified batch size, and with a MatMul followed by an Add node, the gemm_rewriter fails to rewrite the MatMul+Add nodes to...