
very slow inference result on Jetson TX2

Open PythonImageDeveloper opened this issue 6 years ago • 3 comments

Hi everyone, I converted ssdlite_mobilenet_v2, ssd_mobilenet_v2, and ssd_resnet50 to TensorRT with the TensorFlow (TF-TRT) API, which generated a .pb file. I'm using TensorFlow 1.13 and JetPack 4.2 on a Jetson TX2, but the resulting performance is poor: I only get about 2.5 FPS, which isn't real time, and loading the model takes about 10 minutes. Why is that?
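For reference, this is roughly the conversion path I followed (a minimal sketch based on this repo's README; the model name, output path, score threshold, and workspace size below are just placeholders, not my exact settings):

```python
# Sketch of the TF-TRT conversion flow on TF 1.13 (values here are assumptions).
import tensorflow.contrib.tensorrt as trt
from tf_trt_models.detection import download_detection_model, build_detection_graph

# Download the TF Object Detection API checkpoint and build a frozen graph.
config_path, checkpoint_path = download_detection_model('ssd_mobilenet_v2_coco')
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path,
    score_threshold=0.3,
    batch_size=1
)

# Replace supported subgraphs with TensorRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)

# Serialize the optimized graph so it can be reloaded later.
with open('ssd_mobilenet_v2_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```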

PythonImageDeveloper avatar May 13 '19 10:05 PythonImageDeveloper

Hi, Check my thread on https://devtalk.nvidia.com/default/topic/1046492/tensorrt/extremely-long-time-to-load-trt-optimized-frozen-tf-graphs/1

Upgrading the protobuf might help. Good luck.
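The slow graph loading is usually the pure-Python protobuf parser choking on the large TRT-optimized .pb. A quick way to check which protobuf backend is active (a small sketch; the environment variable must be set before protobuf/TensorFlow are imported):

```python
# Check whether protobuf is using its fast C++ backend or the slow pure-Python one.
# The environment variable only takes effect if set before protobuf is imported.
import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'cpp'

from google.protobuf.internal import api_implementation
print(api_implementation.Type())  # 'cpp' if the fast backend is available, else 'python'
```

If it prints 'python', installing a protobuf build with the C++ implementation (as discussed in the linked thread) is what typically cuts graph loading from minutes to seconds.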

filipski avatar May 13 '19 10:05 filipski

Hi PythonImageDeveloper,

Could you clarify which model you're seeing 2.5 FPS with? Are you running the pre-processing scripts contained in this repository, or calling create_inference_engine directly?

Best, John

jaybdub avatar May 28 '19 21:05 jaybdub

Hi @jaybdub, I'm using create_inference_engine.
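For what it's worth, this is roughly how I'm timing it (a rough sketch, not the repo's benchmark; the tensor names follow the TF Object Detection API convention and are an assumption here, in practice the input_names/output_names returned by build_detection_graph should be used):

```python
# Rough timing sketch: separate the one-off graph-load cost from per-frame inference.
import time
import numpy as np
import tensorflow as tf

GRAPH_PATH = 'ssd_mobilenet_v2_trt.pb'  # placeholder path

t0 = time.time()
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())  # this is where a slow protobuf backend hurts

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')
print('load time: %.1f s' % (time.time() - t0))

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # dummy 300x300 input
    feed = {'image_tensor:0': image}
    fetches = ['detection_boxes:0', 'detection_scores:0',
               'detection_classes:0', 'num_detections:0']

    # Warm up: the first runs include TensorRT engine build / CUDA init overhead.
    for _ in range(10):
        sess.run(fetches, feed_dict=feed)

    n = 50
    t0 = time.time()
    for _ in range(n):
        sess.run(fetches, feed_dict=feed)
    print('%.1f FPS' % (n / (time.time() - t0)))
```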

PythonImageDeveloper avatar May 29 '19 07:05 PythonImageDeveloper