TensorRT
                        NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Adds an automatic issue-labelling bot that uses an LLM whenever an issue is raised. See an example issue at https://github.com/poweiw/TensorRT/issues/4 and its run log at https://github.com/poweiw/TensorRT/actions/runs/15433437593 The `goggles_action` is still private. Please...
Versions: DRIVE Orin (Automotive), DRIVE OS 6.0.9.0, TensorRT 8.6.12, expected DLA SW: 3.14.2. I can provide more version info if this is unexpected. I could use clarification on what "Native" means...
## Description I tried to run my model on GPU, but it fails with the problem below: inference takes longer than with TensorRT 8.6.2. Here is the...
I find that Conv+BN can't be fused with ReLU, and the Conv+BN kernel's output type is always FP32, which is very slow, slower than FP16 and INT8. ``` import torch import torchvision import...
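When TensorRT leaves a Conv+BN pair unfused, one common workaround is to fold the BatchNorm parameters into the convolution's weights and bias before export, so the engine only sees a single conv. Below is a minimal NumPy sketch of the standard BN-folding arithmetic; the function name and parameter layout are my own, not from the issue's (truncated) code:

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding conv's weights and bias.

    w:     conv weights, shape (Cout, Cin, Kh, Kw)
    b:     conv bias, shape (Cout,)
    gamma, beta, mean, var: BatchNorm parameters, each shape (Cout,)
    """
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    w_folded = w * scale[:, None, None, None]   # scale each output filter
    b_folded = (b - mean) * scale + beta        # shift the bias accordingly
    return w_folded, b_folded
```

Because both conv and BN are affine per output channel, conv-then-BN on any input equals the folded conv alone, which also removes the FP32 BN output mentioned in the issue.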
Hello! My goal is to run a TRT engine for Hi-Fi GAN on H100 as efficiently as possible with little or no drop in quality. I have decided that the simplest...
## Description I want to measure the performance of the model, so I want to know the number of parameters and FLOPs. Is there any tool that can calculate the...
Let's say I define a global tensor `t1` whose shape is `[1, 12, 2000, 112]`, and the actual input tensor `t2` has shape `[1, 12, 40, 112]`. I have copied...
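The setup described above, copying a smaller input into a slice of a larger preallocated tensor, can be sketched in NumPy as follows (shapes are taken from the question; the copy destination, the leading slice of axis 2, is my assumption):

```python
import numpy as np

# Global tensor and actual input, with the shapes from the question.
t1 = np.zeros((1, 12, 2000, 112), dtype=np.float32)  # preallocated global tensor
t2 = np.ones((1, 12, 40, 112), dtype=np.float32)     # actual input tensor

# Copy t2 into the leading slice of t1 along axis 2; the rest of t1 is untouched.
t1[:, :, :t2.shape[2], :] = t2
```

With TensorRT itself, the usual alternative to padding like this is an optimization profile with a dynamic dimension, so the engine accepts the smaller shape directly.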
I want to use TensorRT to accelerate https://github.com/MooreThreads/Moore-AnimateAnyone, but I don't know how to support referenceNet. Can you give me some advice?
``` parser = trt.OnnxParser(network, trt_logger) parse_valid = parser.parse_from_file(onnx_model) ``` When I used the above two lines of code to parse an ONNX model whose path contains Chinese characters, parsing kept failing with errors....
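A common workaround for path-encoding problems like this is to let Python read the file and hand the raw bytes to `parser.parse()` instead of `parser.parse_from_file()`, so the parser never touches the non-ASCII path. A minimal sketch (the helper name and example path are mine):

```python
from pathlib import Path

def load_onnx_bytes(path: str) -> bytes:
    """Read an ONNX file as raw bytes so non-ASCII paths are handled by Python."""
    return Path(path).read_bytes()

# Usage sketch, assuming parser/network are built as in the snippet above:
#   data = load_onnx_bytes("模型/model.onnx")   # hypothetical path with Chinese characters
#   parse_valid = parser.parse(data)
```

Note that `parse()` loses the file's directory context, so models with external weight files may still need an ASCII-only path.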
## Description When I try to convert the SuperPoint model from ONNX to a TensorRT engine using trtexec, I hit ``` [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation...