tensorflow-onnx

Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX

252 tensorflow-onnx issues, sorted by recently updated

The examples are awesome, but the files have to be named correctly

pending on user response

I am trying to convert a yolov2-voc model generated by DarkFlow into ONNX, and I am running into this error: "ValueError: tensorflow op ExtractImagePatches is not supported". I am using tensorflow=1.9.0, onnx=1.4.1,...

contribution welcome
unsupported ops
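
For the ExtractImagePatches report above, a minimal sketch of driving the same conversion through the tf2onnx Python API; the paths, tensor names, and opset are assumptions, and `ExtractImagePatches` itself still needs a custom op handler (or a `--custom-ops` style mapping) before the conversion can succeed.

```python
import tensorflow as tf
import tf2onnx

# Load the frozen DarkFlow graph (path and tensor names are placeholders).
with tf.io.gfile.GFile("built_graph/yolov2-voc.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input:0"],      # assumed DarkFlow input tensor name
    output_names=["output:0"],    # assumed DarkFlow output tensor name
    opset=13,
    output_path="yolov2-voc.onnx")
```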

These two lines look odd to me... Why hard-code the input dtype to uint8? That type is **unsupported in TensorRT**. How can I avoid such problems when I convert...

enhancement
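
A common post-export workaround for the hard-coded uint8 input above, sketched with the onnx Python API rather than a tf2onnx option: re-declare the graph input as float32 and bypass the cast that many detection graphs apply right after the input. The file names and the single-input assumption are mine; if the model has no such Cast, this edit alone would leave the graph inconsistent.

```python
import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")        # hypothetical file name
graph = model.graph
image_input = graph.input[0]           # assumed: the single uint8 image input

# Re-declare the graph input as float32 instead of uint8.
image_input.type.tensor_type.elem_type = TensorProto.FLOAT

# Many detection graphs cast the uint8 image to float right after the input;
# if such a Cast exists, bypass it so no uint8 tensor remains for TensorRT.
for node in list(graph.node):
    if node.op_type == "Cast" and node.input and node.input[0] == image_input.name:
        cast_output = node.output[0]
        for consumer in graph.node:
            consumer.input[:] = [image_input.name if name == cast_output else name
                                 for name in consumer.input]
        graph.node.remove(node)
        break

onnx.save(model, "model_float_input.onnx")
```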

**Describe the bug** Hi, I was converting CenterNet ([CenterNet HourGlass104 512x512](centernet_hg104_512x512_coco17_tpu-8)) from the TensorFlow Object Detection API (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) with the back-to-back optimizer turned off to disable batchnorm fusion into conv layers, following...

keras

**Error Message** 2022-02-22 09:30:14.294491544 [E:onnxruntime:, sequential_executor.cc:346 Execute] Non-zero status code returned while running Add node. Name:'Add_1363' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:505 void onnxruntime::BroadcastIterator::Append(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest...

pending on user response
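
A small diagnostic sketch for the broadcast failure above (the file name is assumed; the node name is taken from the error): run ONNX shape inference and print the shapes feeding the reported Add node to see which pair of inputs cannot broadcast.

```python
import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("model.onnx"))
shapes = {vi.name: vi.type.tensor_type.shape
          for vi in list(model.graph.value_info) + list(model.graph.input)}

for node in model.graph.node:
    if node.name == "Add_1363":              # node named in the error message
        for tensor_name in node.input:
            print(tensor_name, shapes.get(tensor_name))
```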

**Describe the bug** When attempting to convert a TF SSD Mobilenet FPN model that has been trained via transfer learning from SavedModel to ONNX using tf2onnx, the optimizer outputs an...

pending on user response
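
For reference, a minimal sketch of the SavedModel conversion path with the tf2onnx Python API (it mirrors the `--saved-model` CLI route); the directory, opset, and output name are placeholders, not taken from the issue.

```python
import tf2onnx

model_proto, _ = tf2onnx.convert.from_saved_model(
    "exported_model/saved_model",          # hypothetical SavedModel directory
    opset=13,
    output_path="ssd_mobilenet_fpn.onnx")
```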

**Describe the bug** When `tf.unstack` or `tf.split` is used on an input, the dynamic batch dimension is lost. This was found while trying to export [NodLabs's DLRM](https://github.com/NodLabs/tensorflow-dlrm). An example script is...

pending on user response
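
A minimal repro sketch in the spirit of the report above (not the linked DLRM script): export a `tf.function` that uses `tf.split` on an input whose batch dimension is `None`, then inspect the ONNX input shape to see whether the dynamic dimension survived.

```python
import tensorflow as tf
import tf2onnx

spec = [tf.TensorSpec([None, 4], tf.float32, name="x")]

@tf.function(input_signature=spec)
def split_fn(x):
    a, b = tf.split(x, num_or_size_splits=2, axis=1)
    return a + b

model_proto, _ = tf2onnx.convert.from_function(
    split_fn, input_signature=spec, opset=13, output_path="split_repro.onnx")

# Inspect whether the batch dimension stayed dynamic in the exported graph.
print(model_proto.graph.input[0].type.tensor_type.shape)
```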

The original model is in TFLite, with NHWC format. When exporting to ONNX, because ONNX has no NHWC format, the exporter spends quite a lot of effort converting it to...

enhancement
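
Context for the report above: ONNX convolutions are defined for NCHW, so the converter has to wrap NHWC TFLite ops in Transpose nodes. A hedged sketch of the TFLite conversion, driven through the CLI from Python; the file names and input tensor name are placeholders, and whether `--inputs-as-nchw` takes effect on the TFLite path may depend on the tf2onnx version.

```python
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "tf2onnx.convert",
     "--tflite", "model.tflite",                      # hypothetical TFLite file
     "--output", "model.onnx",
     "--opset", "13",
     # Ask for an NCHW graph input to drop one boundary transpose.
     "--inputs-as-nchw", "serving_default_input:0"],  # assumed input name
    check=True)
```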

While shape calculations for the input correctly distinguished between channels_first and channels_last, shape calculations for the inputs of the final Slice and Pad nodes always assumed channels_last format.

pending on user response
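
To illustrate the distinction that fix is about (a hypothetical helper, not the converter's actual code): the spatial dimensions sit at different positions in the shape depending on the data format.

```python
def spatial_dims(shape, data_format):
    # shape is [N, C, H, W] for channels_first, [N, H, W, C] for channels_last.
    if data_format == "channels_first":
        return shape[2:]      # spatial dims follow N and C
    return shape[1:-1]        # spatial dims sit between N and C

assert spatial_dims([1, 3, 224, 224], "channels_first") == [224, 224]
assert spatial_dims([1, 224, 224, 3], "channels_last") == [224, 224]
```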