TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
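For orientation before the issue excerpts below: the typical deployment flow parses an ONNX model, builds a serialized engine, and runs inference with it. The following is a minimal sketch of that flow, assuming the TensorRT 8.x Python API; `model.onnx`, `model.engine`, and the 1 GiB workspace limit are placeholder choices, not files from this repository.

```python
# Minimal sketch: parse an ONNX model and build a serialized TensorRT engine.
# Assumes the TensorRT 8.x Python API; file names and workspace size are placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```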
## Description I am trying to build TensorRT from source on Google Colab, and I've been running into some errors when I try to run `cmake ..` ## Environment **TensorRT Version**:...
## Description When converting an ONNX model to TensorRT with int8 calibration, we observe the error ``` [TensorRT] VERBOSE: Calculating Maxima [TensorRT] INFO: Starting Calibration. [TensorRT] INFO: Calibrated batch 0...
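For context on what that calibration step involves: TensorRT's post-training INT8 path drives a user-supplied calibrator that feeds input batches to the builder and optionally caches the resulting scales. Below is a hedged sketch of such a calibrator against the TensorRT 8.x Python API with PyCUDA; the class name, batch handling, and `calib.cache` file are illustrative assumptions, not code from this issue.

```python
# Hedged sketch of a post-training INT8 calibrator (TensorRT 8.x Python API,
# PyCUDA). The batch source, shapes, and cache file name are placeholders.
import os

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calib.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batch_iter = iter(batches)       # iterable of float32 ndarrays
        self.cache_file = cache_file
        first = batches[0]
        self.batch_size = first.shape[0]
        self.device_input = cuda.mem_alloc(first.nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batch_iter)
        except StopIteration:
            return None                       # no more data: calibration ends
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

Such a calibrator would typically be attached with `config.set_flag(trt.BuilderFlag.INT8)` and `config.int8_calibrator = EntropyCalibrator(batches)` before building the engine.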
**THE ISSUES SECTION IS ONLY FOR FILING BUGS. PLEASE ASK YOUR QUESTION ON THE DISCUSSION TAB.** My env: > Docker image: nvcr.io/nvidia/tensorflow:22.05-tf2-py3, TRT: 8.2.5.1, CUDA: 11.7 > tf 2.8 >...
## Description When I use infer.py on a sample image that was provided with the TensorRT samples, it gives me totally wrong predictions. I used the infer.py that was suggested...
## Description After I used onnx-tensorrt to complete the int8 quantization of the resnet18 model, I found that the performance was the same as that of fp16 (batchsize=64). I would...
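One detail worth keeping in mind with reports like the one above: when only the INT8 flag is set, TensorRT is still free to fall back to higher-precision kernels for layers without INT8 support, so an "INT8" engine can end up timing almost the same as an FP16 one. A minimal sketch of the usual precision-flag setup, assuming the TensorRT 8.x Python API:

```python
# Hedged sketch: allow both INT8 and FP16 so the builder can pick the faster
# kernel per layer (TensorRT 8.x Python API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)   # requires a calibrator or Q/DQ nodes
config.set_flag(trt.BuilderFlag.FP16)   # lets non-INT8 layers run in FP16
```

Inspecting the builder's verbose log to see which precision each layer actually chose is the usual way to confirm whether INT8 kernels were selected at all.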
## Environment **TensorRT Version**: 8.4.0.6 or 8.4.1.5 **NVIDIA GPU**: T600 or RTX 3060 **NVIDIA Driver Version**: 511.65 or 512.96 **CUDA Version**: 11.6 or 11.3 **CUDNN Version**: 8.x **Operating System**: win10...
## Description We got different accuracy for identical ONNX models. The models have identical graphs and parameters. The nodes of both models are topologically sorted, but the order of nodes in the files is...
## Description [08/10/2022-18:18:24] [TRT] [I] Global timing cache in use. Profiling results in this builder pass will be stored. [08/10/2022-18:19:17] [TRT] [W] Skipping tactic 0x0000000000000000 due to Myelin error: Formal...
Hello, I am using TensorRT 8.4, but why is the acceleration time so different on Windows and Linux?
Failed when converting a ResNet50 QAT ONNX model to TRT using trtexec. Log: [08/12/2022-03:53:50] [V] [TRT] =============== Computing costs for [08/12/2022-03:53:50] [V] [TRT] *************** Autotuning format combination: Int8(57600,900,30,1) -> Int8(57600,900,30,1) ***************...