
Much higher inference time using converted ONNX model

Open · purvang3 opened this issue 2 years ago · 1 comment

Ask a Question

Question

I am converting a TensorFlow 1.14 Object Detection API model to ONNX and running inference on the result.
I am using the tensorflow/tensorflow:1.14.0-gpu-py3 docker image.
With onnxruntime-gpu==1.1 as well as the latest release, inference with the converted ONNX model is about 5x slower than with the original frozen_graph.pb.
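For reference, a minimal conversion sketch using tf2onnx's Python API (this API requires a reasonably recent tf2onnx release, roughly 1.9+); the tensor names below are the usual TF1 Object Detection API names and are assumptions here, so adjust them to your model:

```python
# Minimal conversion sketch (assumes tf2onnx >= 1.9 and a TF1 frozen graph).
# The input/output tensor names are the typical TF1 Object Detection API
# names; replace them with the names your graph actually uses.
import tensorflow as tf
import tf2onnx

graph_def = tf.compat.v1.GraphDef()
with open("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=["detection_boxes:0", "detection_scores:0",
                  "detection_classes:0", "num_detections:0"],
    opset=13,
    output_path="model.onnx",
)
```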

Actually, onnxruntime is running on the CPU instead of the GPU. Is this because of a wrong onnxruntime-gpu version?

If yes, which onnxruntime-gpu version should I install? I have already tested 1.1 and the latest release.
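As a debugging aid: onnxruntime silently falls back to the CPU provider when the CUDA provider cannot be loaded (e.g. a CUDA/cuDNN version mismatch with the installed onnxruntime-gpu build). A quick sketch to check what the installed build offers and to request the GPU explicitly; note that the `providers` argument to `InferenceSession` requires a fairly recent onnxruntime, so this is a sketch for current releases rather than 1.1:

```python
import onnxruntime as ort

# Check whether this build was compiled with GPU support and whether the
# CUDA provider could actually be loaded (missing CUDA/cuDNN shows up here).
print(ort.get_device())               # "GPU" for onnxruntime-gpu builds
print(ort.get_available_providers())  # should include "CUDAExecutionProvider"

# Request the CUDA provider explicitly, keeping CPU as a fallback.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Confirm which providers the session actually ended up using.
print(sess.get_providers())
```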

Further information

  • Is this issue related to a specific model?
    Model name: TensorFlow 1 Object Detection API model

Model opset: 13


— purvang3, Feb 07 '23

Could you please open an issue in the onnxruntime repo for suggestions about inference performance?

— fatcat-z, Mar 17 '23