TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Results: 628 TensorRT issues, sorted by most recently updated.

Adds an automatic labelling bot that uses an LLM whenever an issue is raised. See an example issue at https://github.com/poweiw/TensorRT/issues/4 and its run log at https://github.com/poweiw/TensorRT/actions/runs/15433437593 The `goggles_action` is still private. Please...

Versions: DRIVE Orin (Automotive), DRIVE OS 6.0.9.0, TensorRT 8.6.12, expected DLA SW 3.14.2; I can provide more version info if this is unexpected. I could use clarification on what "Native" means...

Module:Embedded
triaged

## Description I tried to run my model on the GPU, but I hit the problem below: inference takes longer than it did with TRT 8.6.2. Here is the...

Module:Performance
triaged
internal-bug-tracked
Investigating

I find that Conv+BN can't be fused with ReLU, and the Conv+BN kernel's output type is always FP32, which is very slow, slower than FP16 and INT8 (a hypothetical reproduction sketch follows after this entry's labels). ``` import torch import torchvision import...

Module:Performance
triaged
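
The snippet above is cut off, so here is a minimal reproduction sketch under stated assumptions: the module definition, the `conv_bn_relu.onnx` path, and the input shape are placeholders rather than details from the issue. It exports a Conv+BN+ReLU block to ONNX and builds an engine with FP16 allowed, so the verbose builder log can be used to check what got fused and at which precision it runs.

```
# Hypothetical reproduction sketch: Conv + BN + ReLU exported to ONNX, then
# built with FP16 allowed so the verbose log shows fusion and layer precisions.
import torch
import torch.nn as nn
import tensorrt as trt

class ConvBNReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# Export to ONNX ("conv_bn_relu.onnx" is a placeholder path).
model = ConvBNReLU().eval()
torch.onnx.export(model, torch.randn(1, 3, 224, 224), "conv_bn_relu.onnx",
                  opset_version=13)

# Build with FP16 enabled and a verbose logger so fusion decisions are printed.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("conv_bn_relu.onnx", "rb") as f:
    assert parser.parse(f.read()), "ONNX parse failed"

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels alongside FP32
engine_bytes = builder.build_serialized_network(network, config)
```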

Hello! My goal is to run a TRT engine for Hi-Fi GAN on H100 as efficiently as possible, with little or no drop in quality (one possible mixed-precision approach is sketched after this entry's labels). I have decided that the simplest...

triaged
Module:Quantization
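
One way to trade speed against quality for a model like this, sketched under assumptions rather than as an official recipe: enable FP16 globally but pin numerically sensitive layers back to FP32 through per-layer precision constraints. The ONNX path and the layer names below are hypothetical.

```
# Sketch: FP16 build with selected layers forced to FP32 to limit quality loss.
import tensorrt as trt

def build_mixed_precision_engine(onnx_path, fp32_layer_names):
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    # Honor per-layer precision requests instead of letting the builder override them.
    config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if layer.name in fp32_layer_names:
            layer.precision = trt.DataType.FLOAT
            layer.set_output_type(0, trt.DataType.FLOAT)

    return builder.build_serialized_network(network, config)

# "hifigan.onnx" and the layer name are placeholders for this sketch.
# engine_bytes = build_mixed_precision_engine("hifigan.onnx", {"Conv_123"})
```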

## Description I want to measure the performance of the model, so I want to know the number of parameters and FLOPs (a rough counting sketch follows after this entry's label). Is there any tool that can calculate the...

triaged
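
TensorRT itself does not report parameter counts or FLOPs. A rough sketch that counts parameters directly from the ONNX initializers is shown below; "model.onnx" is a placeholder path, and FLOPs would still need a separate graph-level counter.

```
# Rough parameter count from ONNX initializers; FLOPs are not handled here.
import numpy as np
import onnx

def count_parameters(onnx_path):
    model = onnx.load(onnx_path)
    # Each initializer holds one weight tensor; multiply out its dims.
    return sum(int(np.prod(init.dims)) for init in model.graph.initializer)

print(count_parameters("model.onnx"))  # "model.onnx" is a placeholder path
```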

Let's say I define a global tensor `t1` whose shape is `[1, 12, 2000, 112]`, while the actual input tensor `t2` has shape `[1, 12, 40, 112]` (a small copy sketch follows after this entry's label). I have copied...

triaged
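
For reference, a minimal sketch of what the question seems to describe: copying a smaller tensor into a slice of a larger preallocated buffer. Only the shapes come from the issue; the framework (PyTorch) and the write offset are assumptions made for illustration.

```
# Copy t2 into a sub-region of the preallocated buffer t1 along dim 2.
import torch

t1 = torch.zeros(1, 12, 2000, 112)   # preallocated "global" buffer
t2 = torch.randn(1, 12, 40, 112)     # the actual input

offset = 0  # hypothetical position along dim 2 where t2 should land
t1[:, :, offset:offset + t2.shape[2], :].copy_(t2)
```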

I want to use TensorRT to accelerate https://github.com/MooreThreads/Moore-AnimateAnyone, but I don't know how to support ReferenceNet. Can you give me some advice?

triaged

`parser = trt.OnnxParser(network, trt_logger)` `parse_valid = parser.parse_from_file(onnx_model)` When I used the above two lines of code to parse an ONNX model whose path contains Chinese characters, parsing kept failing with errors (a possible workaround is sketched after this entry's label)....

triaged
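
A possible workaround, assuming the failure comes from how `parse_from_file` handles non-ASCII paths: open the file in Python, which copes with such paths, and pass the raw bytes to `parser.parse()` instead. The path below is a placeholder.

```
# Workaround sketch: feed the model bytes to parser.parse() rather than
# relying on parse_from_file() with a non-ASCII path.
import tensorrt as trt

trt_logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(trt_logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, trt_logger)

onnx_model = "模型/模型.onnx"  # placeholder path containing Chinese characters
with open(onnx_model, "rb") as f:
    parse_valid = parser.parse(f.read())

if not parse_valid:
    for i in range(parser.num_errors):
        print(parser.get_error(i))
```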

## Description When I tried to convert a SuperPoint model from ONNX to a TensorRT engine using trtexec, I hit the error below (one commonly suggested mitigation is sketched after this entry): ``` [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation...

triaged
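
Error Code 10 generally means the builder found no kernel tactic that fits the current build constraints. A commonly suggested first step, given here as an assumption rather than a confirmed fix for SuperPoint, is to enlarge the workspace memory pool so more tactics become viable; the ONNX path is a placeholder and the API shown requires TensorRT 8.4 or newer.

```
# Assumed mitigation, not a confirmed fix: raise the builder workspace limit.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("superpoint.onnx", "rb") as f:  # placeholder path
    assert parser.parse(f.read()), "ONNX parse failed"

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)  # 4 GiB
engine_bytes = builder.build_serialized_network(network, config)
```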