Results: 29 comments by Prasanth Pulavarthi

The URLs should be working again for now.

ONNX Runtime (https://onnxruntime.ai) can run inference on models from PyTorch, TensorFlow, and other frameworks that support ONNX. It's highly optimized to be fast and small, and it works across operating systems and hardware. It's used...

What do you want to do with the ONNX model? If you want to run inference on it, you should be using ONNX Runtime (https://github.com/microsoft/onnxruntime). ONNX Runtime supports all versions of ONNX...

ONNX 1.7 is coming this week. @chinhuang007 to comment on whether this is addressed.

@nbcsm can it be composed in the converter?

Are you constrained to use TF to run your model?

Do you know what opset version your model was exported with? We've seen good performance on our models with https://github.com/Microsoft/onnxruntime. It supports opset 7 and higher. If your model is...

Let me know how it goes. If it continues to show poor performance, we likely need to take a look at whether the model was exported in an efficient way.

@Terizian the code shows version 7 but the output shows version 8. Is the output from the right run? @houseroad I believe you worked on the version converter. Can you...

@Terizian BTW, if you still want to pursue the onnx_tf route in parallel in the meantime, I suggest you file an issue in the onnx-tensorflow repo so the authors of...