
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Results 628 TensorRT issues

Hello, when I use C++ to run inference with the engine file and the NVIDIA driver is a newer version, the inference time is not stable (16-200 ms). ![image](https://user-images.githubusercontent.com/55009436/187899026-9243a7df-b528-4718-884f-c3eeaec9aa38.png) But inference time is...

triaged
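
Not part of the original report, but a minimal sketch of how this kind of latency jitter is usually measured, assuming a deserialized engine, an `IExecutionContext* context`, device `bindings`, and a `cudaStream_t stream` already exist (TensorRT 8.x `enqueueV2` API). Warm-up iterations plus CUDA events make it easier to tell driver/clock effects apart from a real regression:

```cpp
#include <NvInferRuntime.h>
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical helper: time repeated enqueueV2() calls with CUDA events.
// `context`, `bindings`, and `stream` are assumed to be prepared elsewhere.
void profileInference(nvinfer1::IExecutionContext* context, void** bindings, cudaStream_t stream)
{
    // Warm up so lazy initialization and clock ramp-up do not skew the numbers.
    for (int i = 0; i < 10; ++i)
    {
        context->enqueueV2(bindings, stream, nullptr);
    }
    cudaStreamSynchronize(stream);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int i = 0; i < 100; ++i)
    {
        cudaEventRecord(start, stream);
        context->enqueueV2(bindings, stream, nullptr);
        cudaEventRecord(stop, stream);
        cudaEventSynchronize(stop);

        float ms = 0.0F;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("iteration %d: %.3f ms\n", i, ms);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}
```

If the spread persists, fluctuating GPU clocks or other processes sharing the GPU are worth ruling out before attributing it to the driver version.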

Hi everyone, in the docs, at least in the [README.md](https://github.com/NVIDIA/TensorRT/blob/main/tools/experimental/trt-engine-explorer/README.md) file for the Trex tool, there is no mention of graphviz as a requirement for generating the network graph SVG file. Cheers,...

triaged

Hi, sorry to bother you. I am wondering whether it is possible for multiple models running on different CUDA streams to share a single memory allocator.

triaged
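
Not from the original thread, but one way this is commonly approached, sketched under the assumption of the TensorRT 8.x C++ API: a single `nvinfer1::IGpuAllocator` implementation can be registered on the `IRuntime` before deserializing each engine, so all models allocate through the same object regardless of which CUDA stream they later run on. The allocator below is only a placeholder; a real one would pool and reuse blocks:

```cpp
#include <NvInferRuntime.h>
#include <cuda_runtime.h>
#include <cstdint>

// Placeholder shared allocator (TensorRT 8.x IGpuAllocator interface).
class SharedGpuAllocator : public nvinfer1::IGpuAllocator
{
public:
    void* allocate(uint64_t size, uint64_t /*alignment*/, nvinfer1::AllocatorFlags /*flags*/) noexcept override
    {
        void* ptr{nullptr};
        return cudaMalloc(&ptr, size) == cudaSuccess ? ptr : nullptr;
    }

    void free(void* memory) noexcept override
    {
        cudaFree(memory);
    }
};

// Usage sketch: register the same instance before deserializing every engine.
// SharedGpuAllocator allocator;
// runtime->setGpuAllocator(&allocator);
// auto* engineA = runtime->deserializeCudaEngine(blobA, sizeA);
// auto* engineB = runtime->deserializeCudaEngine(blobB, sizeB);
```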

## Description I am trying to follow this tutorial for my model: https://github.com/NVIDIA/TensorRT/blob/main/tools/experimental/trt-engine-explorer/notebooks/tutorial.ipynb The command ``` display_df(plan.df) ``` generates a huge error output ending with "RecursionError: maximum recursion depth exceeded...

triaged

Does TensorRT really not support the sort operation from PyTorch?!

triaged

## Description The stream is always blocking on my computer, while on another it is not. I tried different computers; only mine blocks. ## Code #include #include #include #include #include "cuda_runtime.h" #include...

wontfix
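
A side note that may be relevant here (not from the original post): streams created with plain `cudaStreamCreate` are blocking with respect to the legacy default stream, so any work issued on stream 0 serializes with them; creating the stream with the non-blocking flag removes that implicit synchronization. A minimal sketch:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    cudaStream_t blocking, nonBlocking;

    // Default-created streams synchronize implicitly with the legacy default stream.
    cudaStreamCreate(&blocking);

    // This stream does not synchronize implicitly with the default stream.
    cudaStreamCreateWithFlags(&nonBlocking, cudaStreamNonBlocking);

    // ... launch kernels / enqueue inference on the streams here ...

    cudaStreamDestroy(blocking);
    cudaStreamDestroy(nonBlocking);
    std::printf("done\n");
    return 0;
}
```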

Hello, when I use multiple streams to run the inference engine in parallel, the speed is doubled. This is my code; thank you very much for your help. const int nStreams = 3; std::cout

triaged
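
Not part of the original post, but a detail that often matters when parallelizing TensorRT inference: each stream needs its own `IExecutionContext` and its own bindings, since a single context cannot safely be enqueued on several streams at once, and synchronizing inside the enqueue loop serializes the streams. A rough sketch, assuming an `ICudaEngine* engine` and per-stream device buffers prepared elsewhere (TensorRT 8.x API):

```cpp
#include <NvInferRuntime.h>
#include <cuda_runtime.h>
#include <vector>

// Sketch: one execution context and one CUDA stream per concurrent inference.
// `engine` and `bindings[i]` (device pointers for stream i) are assumed to exist.
void runParallel(nvinfer1::ICudaEngine* engine, std::vector<void**> const& bindings)
{
    const int nStreams = static_cast<int>(bindings.size()); // e.g. 3, as in the post above
    std::vector<cudaStream_t> streams(nStreams);
    std::vector<nvinfer1::IExecutionContext*> contexts(nStreams);

    for (int i = 0; i < nStreams; ++i)
    {
        cudaStreamCreateWithFlags(&streams[i], cudaStreamNonBlocking);
        contexts[i] = engine->createExecutionContext();
    }

    // Enqueue everything first, then wait; a sync inside this loop would serialize the streams.
    for (int i = 0; i < nStreams; ++i)
    {
        contexts[i]->enqueueV2(bindings[i], streams[i], nullptr);
    }
    for (int i = 0; i < nStreams; ++i)
    {
        cudaStreamSynchronize(streams[i]);
    }

    for (int i = 0; i < nStreams; ++i)
    {
        delete contexts[i]; // TensorRT 8.x: destroy() is deprecated in favor of delete
        cudaStreamDestroy(streams[i]);
    }
}
```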

I ran into a problem when using trtexec to convert a BERT ONNX model to a TRT engine. The error info is as follows: ModelImporter.cpp:124 In function parseGraph: [8] No Importer registered...

Component: ONNX
triaged
Release: 8.x
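
Not from the original report, but when trtexec stops with "No Importer registered", parsing the model directly through the ONNX parser API and printing the parser errors usually reveals which node and operator are unsupported. A hedged sketch against the TensorRT 8.x API, with "model.onnx" as a placeholder path:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>

// Minimal logger required by the builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, char const* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
        {
            std::printf("%s\n", msg);
        }
    }
};

int main()
{
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);
    auto* network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto* parser = nvonnxparser::createParser(*network, logger);

    if (!parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kINFO)))
    {
        // Each parser error names the failing node and operator.
        for (int i = 0; i < parser->getNbErrors(); ++i)
        {
            std::printf("parser error: %s\n", parser->getError(i)->desc());
        }
    }

    delete parser;
    delete network;
    delete builder;
    return 0;
}
```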

## Description Hi, the [BatchedNMSPlugin](https://github.com/NVIDIA/TensorRT/tree/main/plugin/batchedNMSPlugin) plugin uses [ymin, xmin, ymax, xmax] as the default bounding box input format. https://github.com/NVIDIA/TensorRT/blob/87f3394404ff9f9ec92c906cd4c39b5562aea42e/plugin/batchedNMSPlugin/batchedNMSInference.cu#L113-L119 How can I use the bounding box input format of [xmin, ymin,...

triaged
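
Not part of the original question, but since the plugin's default input layout is [ymin, xmin, ymax, xmax], one workaround is simply to swap the coordinates before the boxes reach the plugin (on the host, or with an equivalent permutation inside the network). A tiny host-side sketch, assuming a flat [numBoxes, 4] buffer:

```cpp
#include <cstddef>

// Convert boxes in place from [xmin, ymin, xmax, ymax] to [ymin, xmin, ymax, xmax].
void xyxyToYxyx(float* boxes, std::size_t numBoxes)
{
    for (std::size_t i = 0; i < numBoxes; ++i)
    {
        float* b = boxes + i * 4;
        float xmin = b[0];
        float xmax = b[2];
        b[0] = b[1]; // ymin first
        b[1] = xmin;
        b[2] = b[3]; // ymax third
        b[3] = xmax;
    }
}
```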

## Description Hi, I tried to convert ONNX to TRT on a Jetson NX (JetPack 4.6, TRT 8.2.1, CUDA 10.2) but got an Internal Error. I googled but could not find any...

bug
Platform: Jetson
triaged