tensorflow-onnx
Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
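For context on the entries below, here is a minimal sketch of the Python conversion API; the model, tensor names, and paths are placeholders and not taken from any specific issue.

```python
# Minimal sketch: convert a small Keras model to ONNX with tf2onnx.
import tensorflow as tf
import tf2onnx

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# One tf.TensorSpec per model input; the name becomes the ONNX graph input name.
spec = (tf.TensorSpec((None, 8), tf.float32, name="input"),)

# from_keras returns the ONNX ModelProto plus external tensor storage (unused here).
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx"
)
print([o.name for o in model_proto.graph.output])
```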
Any idea what that means? I tried every path combination I could. UPDATE: I used the wrong command, my bad. But I have one question: is the opset set to...
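On the opset question: it is not fixed and can be passed explicitly. A minimal sketch assuming the Keras API path (placeholder model; the CLI exposes the same choice via `--opset`):

```python
# Sketch: pinning the ONNX opset explicitly instead of relying on the default.
import tensorflow as tf
import tf2onnx

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
spec = (tf.TensorSpec((None, 3), tf.float32, name="x"),)

model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=15)

# The chosen opset is recorded in the model's opset_import table.
print([(op.domain, op.version) for op in model_proto.opset_import])
```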
**Describe the bug** Error when a SavedModel from TF 2.10.0/2.11.0 that contains a grouped TF Conv2D is converted to ONNX with the API `tf2onnx.convert.from_keras()`:
```bash
[01-05-2023_15:50:43][WARNING][load.py:load()::177] No training configuration...
```
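A minimal repro sketch of the setup described above; the layer sizes, names, and paths are invented, and the `groups=2` Conv2D stands in for whatever grouped convolution the original model uses.

```python
# Hypothetical minimal repro: a Keras model containing a grouped Conv2D,
# converted with tf2onnx.convert.from_keras as in the report.
import tensorflow as tf
import tf2onnx

inp = tf.keras.Input(shape=(32, 32, 4), name="input")
out = tf.keras.layers.Conv2D(filters=8, kernel_size=3, groups=2, padding="same")(inp)
model = tf.keras.Model(inp, out)

spec = (tf.TensorSpec((None, 32, 32, 4), tf.float32, name="input"),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="grouped_conv.onnx"
)
```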
**Describe the bug** When converting a tfjs model I get the following exception:
```
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py",...
```
# Ask a Question ### Question I'm trying to convert a TF Random Forest model. Has this been done? ### Further information I'm getting the following error; what does it...
**Describe the bug** I installed tf2onnx (I also tried the stable release, same result) with:
```
pip uninstall tf2onnx
pip3 install git+https://github.com/onnx/tensorflow-onnx
```
When I run it, I see:
```
...
```
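A quick sanity check after installing from git; this assumes `tf2onnx.__version__` is exposed, which may vary by release.

```python
# Sanity check that the locally installed tf2onnx and TensorFlow import cleanly
# and report the versions being used.
import tensorflow as tf
import tf2onnx

print("tensorflow:", tf.__version__)
print("tf2onnx:", tf2onnx.__version__)  # assumes the package exposes __version__
```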
### Describe the issue I have this TF/keras model:
```
def build_model():
    image_input = Input(shape=(None, 1), name='image', dtype='float32')
    img_width_input = Input(shape=(), name='width', dtype='int32')
    max_width = tf.reduce_max(img_width_input)
    mask = tf.sequence_mask(img_width_input, max_width)...
```
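For a model like this with more than one input, `tf2onnx.convert.from_keras` takes one `tf.TensorSpec` per input. A sketch under that assumption; the stand-in model below only mirrors the input layout of `build_model()` above, not its architecture.

```python
# Sketch: converting a two-input Keras model (image + width) by passing
# one TensorSpec per input. Stand-in model, not the original.
import tensorflow as tf
import tf2onnx

image = tf.keras.Input(shape=(None, 1), name="image", dtype="float32")
width = tf.keras.Input(shape=(), name="width", dtype="int32")
pooled = tf.keras.layers.GlobalAveragePooling1D()(image)        # (batch, 1)
scaled = pooled * tf.cast(width, tf.float32)[:, None]           # use the width input
out = tf.keras.layers.Dense(1)(scaled)
model = tf.keras.Model([image, width], out)

spec = (
    tf.TensorSpec((None, None, 1), tf.float32, name="image"),
    tf.TensorSpec((None,), tf.int32, name="width"),
)
model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13)
```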
Closes: https://github.com/onnx/tensorflow-onnx/issues/436 This rewrite is based on this comment: https://github.com/onnx/tensorflow-onnx/issues/436#issuecomment-993313423, with changes to make it more general and translatable into `tf2onnx`. Equivalent TensorFlow function and automated test script:
```python3
...
```
When using tf2onnx to convert a TF model for training, we would like to disable some of the optimizations. For example, we don't want to fold constants, since folding constants...
**Describe the bug** I tried to convert one specific TFLite model to ONNX. Everything seemed fine and there was no error, but the converter stopped before generating the output...
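For what it's worth, a sketch of the Python-API route for a TFLite file, assuming `tf2onnx.convert.from_tflite` is the intended entry point; paths are placeholders, and checking whether the output file appears helps distinguish a silent stop from a failed write.

```python
# Sketch: convert a TFLite file via the Python API and verify that an
# output file was actually written.
import os
import tf2onnx

model_proto, _ = tf2onnx.convert.from_tflite(
    "model.tflite", opset=13, output_path="model.onnx"
)
print("output written:", os.path.exists("model.onnx"))
```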
Remove the ml opset that is stored by default even without using the two flags (--extra_opset, --custom-ops). This additional ml opset can cause problems when converting for model serving on various edge...
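As a possible workaround until such a change lands, the extra opset entry can be stripped from an already-converted model with the `onnx` Python package; a sketch assuming the entry in question is the `ai.onnx.ml` domain and that no remaining node actually uses it.

```python
# Sketch of a post-processing workaround: drop the ai.onnx.ml opset_import
# entry from a converted model. Paths are placeholders.
import onnx

model = onnx.load("model.onnx")
kept = [op for op in model.opset_import if op.domain != "ai.onnx.ml"]
del model.opset_import[:]          # opset_import is a repeated protobuf field
model.opset_import.extend(kept)

onnx.checker.check_model(model)    # fails if some node still needs the ml domain
onnx.save(model, "model_no_ml.onnx")
```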