
Different results between TensorFlow model and ONNX model.

chielingyueh opened this issue 2 years ago

Hi,

I converted a TensorFlow model to an ONNX model:

import tensorflow as tf
import tf2onnx

# 256-token int32 input with a dynamic batch dimension
spec = (tf.TensorSpec((None, 256), tf.int32, name="input_ids"),)
tf2onnx.convert.from_keras(model, output_path='model_biomarker.onnx', input_signature=spec)

However, when I make an inference on the ONNX model, the output is different from what I get from the TensorFlow model.

Could anyone help me understand why the TensorFlow and ONNX model outputs differ?
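
A minimal sketch for quantifying the mismatch (the random token IDs and the vocabulary bound of 1000 are placeholders; `model` is the Keras model from the snippet above):

import numpy as np
import onnxruntime as ort

# placeholder batch matching the (None, 256) int32 signature; vocab bound is arbitrary
x = np.random.randint(0, 1000, size=(1, 256), dtype=np.int32)

# run the Keras model in inference mode (training=False disables dropout etc.)
tf_out = np.asarray(model(x, training=False))

# run the exported graph with ONNX Runtime and compare elementwise
sess = ort.InferenceSession("model_biomarker.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {"input_ids": x})[0]

print("max abs diff:", np.abs(tf_out - onnx_out).max())
np.testing.assert_allclose(tf_out, onnx_out, rtol=1e-4, atol=1e-4)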

Thanks!

System information

  • OS Platform and Distribution: macOS Monterey 12.3
  • TensorFlow version: 2.4.1
  • Python version: 3.9.7
  • ONNX version: 1.13.0
  • ONNXRuntime version: 1.13.1

chielingyueh, Dec 16 '22

Hi @chielingyueh, this could be caused by an issue in tf2onnx. You can compare each layer's output to locate the suspicious layer. Please also attach the TensorFlow model, or show how it is created, so that others can help with that.
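
One rough way to do that per-layer comparison is to export a truncated sub-model for each layer and check where the outputs first diverge. A sketch, assuming a single-input Keras `model` with one output per layer and the (None, 256) int32 signature from the original post:

import numpy as np
import tensorflow as tf
import tf2onnx
import onnxruntime as ort

x = np.random.randint(0, 1000, size=(1, 256), dtype=np.int32)
spec = (tf.TensorSpec((None, 256), tf.int32, name="input_ids"),)

for layer in model.layers:
    # sub-model from the original input up to this layer's output
    sub = tf.keras.Model(inputs=model.input, outputs=layer.output)
    tf_out = np.asarray(sub(x, training=False))

    tf2onnx.convert.from_keras(sub, input_signature=spec, output_path="sub.onnx")
    sess = ort.InferenceSession("sub.onnx", providers=["CPUExecutionProvider"])
    onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

    # the first layer with a large difference is the suspicious one
    print(layer.name, np.abs(tf_out.astype(np.float32) - onnx_out.astype(np.float32)).max())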

cosineFish, Mar 25 '23

I have the same issue.

System information

  • OS Platform and Distribution: macOS 13.2.1
  • TensorFlow version: 2.11.0
  • Python version: 3.7
  • ONNX version: 1.14.0

import tensorflow as tf
import tf2onnx

# export with a fixed batch size of 1
file_path = "{}/onnx_model/{}_{}.onnx".format(self.output_path, "model", epoch)
spec = (tf.TensorSpec((1, unified_config.list_size, len(unified_config.feature_cols)), tf.float32, name="input"),)
tf2onnx.convert.from_keras(self.model, input_signature=spec, output_path=file_path)

Above is how I export the model to ONNX.
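
A quick round-trip check for this export might look like the following sketch; the random batch is a placeholder, and `self.model`, `file_path`, and `unified_config` come from the snippet above:

import numpy as np
import onnxruntime as ort

# placeholder batch matching the exported (1, list_size, n_features) float32 signature
x = np.random.rand(1, unified_config.list_size, len(unified_config.feature_cols)).astype(np.float32)

tf_out = np.asarray(self.model(x, training=False))

sess = ort.InferenceSession(file_path, providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {"input": x})[0]
print("max abs diff:", np.abs(tf_out - onnx_out).max())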

jdxyw, Apr 24 '23