tf_jetson_nano
Frozen graph not converted to TensorRT model / issue while optimizing with TensorRT
import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)
Running the code above in Google Colab prints:

INFO:tensorflow:Running against TensorRT version 0.0.0

and the "optimized" model is the same size as the frozen model, so it appears the frozen graph was not actually converted to a TensorRT model.

Can someone clarify why it reports TensorRT version 0.0.0?
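A version of 0.0.0 usually means the TensorFlow build you are running was not linked against the TensorRT libraries (stock Colab images generally do not ship TensorRT), in which case create_inference_graph falls back to returning the graph essentially unchanged, which would explain the identical file size. A minimal way to check whether the TensorRT runtime library is even present on the machine is to try loading it directly; this sketch assumes the standard Linux library name libnvinfer.so, which may be versioned differently on your install:

```python
import ctypes

def tensorrt_runtime_present(lib="libnvinfer.so"):
    """Return True if the TensorRT runtime shared library can be loaded.

    'libnvinfer.so' is the usual TensorRT runtime library name on Linux;
    adjust if your install uses a versioned name such as libnvinfer.so.8.
    """
    try:
        ctypes.CDLL(lib)
        return True
    except OSError:
        return False

print("TensorRT runtime present:", tensorrt_runtime_present())
```

If this prints False in Colab, the conversion cannot work there; running the same code on the Jetson Nano itself (where JetPack installs TensorRT) should report a real version number.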