
TensorFlow/TensorRT integration

Results 104 tensorrt issues
Sorted by recently updated

Thanks for your work. It's really helpful for me. I just want to know whether it's possible to run the demo (TF-TRT C++ Image Recognition Demo) on a Jetson Orin...

I am trying to convert a TensorFlow saved_model to a TensorRT engine using the Python script below.

```
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Conversion Parameters
conversion_params = trt.TrtConversionParams(
    precision_mode=trt.TrtPrecisionMode.FP16)
input_saved_model_dir...
```

Hi there, we hit this error during inference but don't yet know how to fix it. The original model runs fine, but the TF-TRT optimized...

I want to use TF-TRT to optimize a TF2 model and then serve it with Triton, but serving the optimized TF-TRT model fails. The process is as follows: 1. following this...

**environment:** docker nvcr.io/nvidia/tensorflow:22.06-tf2-py3

```
TF-TRT Warning: Engine creation for PartitionedCall/PartitionedCall/TRTEngineOp_000_000 failed. The native segment will be used instead. Reason: NOT_FOUND: No converter for op _FusedBatchNormEx
```

tensorflow=2.1.0, tensorRT=6.1.0. I saved `tf.keras.applications.resnet50` as a saved_model and converted it with TF-TRT, calling `converter.build()` as below.

```
def input_fn():
    for _ in range(16):
        input1 = np.random.normal(size=(64, 224, 224, 3)).astype(np.float32)
        yield [input1]...
```
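The snippet above is cut off; a self-contained version of such an `input_fn` generator for `converter.build()` might look like the following (batch shape follows the snippet; the keyword parameters are illustrative additions, not part of the original script):

```python
import numpy as np

def input_fn(num_batches=16, batch_size=64):
    """Yield one list of input arrays per engine-build iteration.

    TF-TRT calls this generator to feed representative inputs
    through the model while building engines.
    """
    for _ in range(num_batches):
        # Random data matching the model's (N, H, W, C) input signature.
        input1 = np.random.normal(
            size=(batch_size, 224, 224, 3)).astype(np.float32)
        yield [input1]
```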

Greetings, I am currently using TF-TRT and I want to measure the performance of my models (latency, throughput). The TensorRT C++ API provides CUDA synchronization via the...
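In the Python API there is no direct counterpart to an explicit `cudaStreamSynchronize` call, but forcing the result to the host (e.g. converting the output tensor to a NumPy array) generally acts as a synchronization point. A minimal, framework-agnostic timing harness, a sketch rather than any TF-TRT API, could look like this:

```python
import time

def measure(infer_fn, batches, warmup=3):
    """Time infer_fn over a list of batches.

    infer_fn should block until the result is actually ready,
    e.g. by fetching the output tensor to the host.
    Returns (mean latency in seconds, throughput in batches/second).
    """
    for batch in batches[:warmup]:
        infer_fn(batch)          # warm-up runs, excluded from timing
    start = time.perf_counter()
    for batch in batches:
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    return elapsed / len(batches), len(batches) / elapsed
```

Warm-up iterations matter here because TF-TRT builds or loads engines lazily on the first calls, which would otherwise dominate the measured latency.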

After optimizing the model with either FP32 or FP16 precision I don't get any speed improvement. The optimization is done in the tensorflow/tensorflow:2.10.0-gpu Docker image. The model uses the tensorflow-text and tf-models-official libraries...

I am working with a TensorFlow 2.0 project that uses multiple models for inference. Some of those models were optimized using TF-TRT. I tried both regular offline conversion and offline...

This PR extends the current C++ image classification example with a saved-model path. The two workflows are thus:

- Keras saved model -> TF-TRT Python API -> frozen graph -> ...