
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Results: 628 TensorRT issues, sorted by recently updated

Hi NVIDIA team, This PR adds an early check in trtexec to ensure the --saveEngine path is writable before starting ONNX parsing. It avoids unnecessary compute when the path is...

I want to use pytorch-quantization to quantize a classification model for DeepStream 7, which works normally in DeepStream 6. The process is to use torch-tensorrt==1.4.0 and pytorch-quantization==2.1.3, then export to jit...
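A minimal sketch of the flow that issue describes, assuming pytorch-quantization==2.1.3; the torchvision ResNet-50, random calibration data, and output file name are illustrative stand-ins:

```python
# Sketch of the pytorch-quantization -> TorchScript flow described above.
# Model, calibration data, and file name are placeholders.
import torch
import torchvision
from pytorch_quantization import quant_modules, nn as quant_nn

quant_modules.initialize()  # swap nn modules for quantized versions first
model = torchvision.models.resnet50(weights=None).eval()

# Calibration: collect activation statistics with quantization disabled.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.disable_quant()
        module.enable_calib()

with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))  # stand-in for real calibration data

for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.load_calib_amax()
        module.enable_quant()
        module.disable_calib()

# Export to TorchScript for DeepStream consumption.
jit_model = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
jit_model.save("resnet50_int8.jit.pt")
```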

## Description When running trtexec with the `--saveEngine` flag pointing to an invalid or unwritable path, the tool proceeds to parse the ONNX model and build the engine. However, it...

Feature Request
Module:Samples
triaged
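The check the PR describes amounts to validating the output path before any expensive work starts. A sketch of the idea in Python for illustration (trtexec itself is C++); the path and error messages are examples:

```python
# Fail fast if the engine output path cannot be written, before parsing
# the ONNX model. The path "out/model.engine" is a placeholder.
import os
import sys

def check_writable(path: str) -> None:
    """Exit early if the engine output path cannot be written."""
    parent = os.path.dirname(os.path.abspath(path))
    if not os.path.isdir(parent):
        sys.exit(f"Error: directory does not exist: {parent}")
    if not os.access(parent, os.W_OK):
        sys.exit(f"Error: directory is not writable: {parent}")
    if os.path.exists(path) and not os.access(path, os.W_OK):
        sys.exit(f"Error: file is not writable: {path}")

check_writable("out/model.engine")  # run before ONNX parsing begins
```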

Hitting "AttributeError: module 'dash.development' has no attribute 'component_loader'" when trying to run the [tutorial](https://github.com/NVIDIA/TensorRT/tree/main/tools/experimental/trt-engine-explorer#3-install-trex-in-development-mode) notebook after running `python3 -m pip install -e .[notebook]` in a clean conda environment. From [this...

Hi! I ran YOLO11 model inference 1000 times on a Tesla T4, but I found the time cost was very unstable. From the cached records, I found most of the...

triaged
Module:Runtime
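Unstable per-iteration timings are often a measurement artifact: host-side timers around asynchronous GPU work, no warm-up, or clock ramp-up on the T4. A sketch of more robust timing with CUDA events, using PyTorch for brevity; the model and input shape are placeholders:

```python
# Time GPU inference with CUDA events after a warm-up phase.
import torch

model = torch.nn.Conv2d(3, 16, 3).cuda().eval()   # stand-in for YOLO11
x = torch.randn(1, 3, 640, 640, device="cuda")

with torch.no_grad():
    for _ in range(50):            # warm-up: autotuning, clock ramp-up
        model(x)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    for _ in range(1000):
        start.record()
        model(x)
        end.record()
        torch.cuda.synchronize()    # wait so elapsed_time is valid
        times.append(start.elapsed_time(end))  # milliseconds

print(f"median: {sorted(times)[len(times) // 2]:.3f} ms")
```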

```
TRT: 3: [runtime.cpp::~Runtime::346] Error Code 3: API Usage Error (Parameter check failed at: runtime/rt/runtime.cpp::~Runtime::346, condition: mEngineCounter.use_count() == 1. Destroying a runtime before destroying deserialized engines created by the runtime...
```

Module:Documentation
triaged
internal-bug-tracked
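This error means the IRuntime was released while engines it deserialized were still alive. A minimal sketch of the required ordering with the TensorRT Python API; the engine path is a placeholder:

```python
# Keep the runtime alive for as long as any engine it created exists.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f:
    serialized = f.read()

runtime = trt.Runtime(logger)
engine = runtime.deserialize_cuda_engine(serialized)
context = engine.create_execution_context()

# ... run inference ...

# Release in reverse order of creation: context, engine, then runtime.
del context
del engine
del runtime
```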

There appears to be an STFT op in ONNX: https://onnx.ai/onnx/operators/onnx__STFT.html Is it supported? It appears that STFT export from PyTorch to ONNX will soon be fixed:
- https://github.com/pytorch/pytorch/issues/147052

Feature Request
triaged
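One quick way to probe current support is to try exporting a torch.stft call at opset 17 (where ONNX introduced STFT) and, if that succeeds, parse the result with trtexec. A sketch; the signal shape and file name are placeholders:

```python
# Attempt an ONNX export of torch.stft; export failures (the subject of the
# linked PyTorch issue) are caught and reported.
import torch

class STFTModel(torch.nn.Module):
    def forward(self, x):
        spec = torch.stft(x, n_fft=512, return_complex=True)
        return torch.view_as_real(spec)  # ONNX has no complex tensors

try:
    torch.onnx.export(
        STFTModel().eval(),
        torch.randn(1, 16000),
        "stft.onnx",
        opset_version=17,  # ONNX STFT requires opset >= 17
    )
    print("export succeeded; next, try parsing stft.onnx with trtexec")
except Exception as e:
    print(f"export failed: {e}")
```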

pytorch_quantization supports 4-bit and ONNX supports 4-bit, but torch.onnx.export does not support 4-bit. How can a 4-bit pytorch_quantization .pt model be exported to a .engine model?

triaged
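For context: ONNX added INT4/UINT4 tensor types in opset 21 (onnx >= 1.16), while PyTorch exposes no public 4-bit tensor dtype for torch.onnx.export to map onto them, which is the gap the issue describes. A quick check of what the installed packages expose:

```python
# Probe the installed onnx and torch packages for 4-bit tensor types.
import onnx
import torch

print("onnx version:", onnx.__version__)
print("onnx INT4 type available:", hasattr(onnx.TensorProto, "INT4"))
# PyTorch has no public 4-bit dtype; this prints None on current releases.
print("torch int4 dtype:", getattr(torch, "int4", None))
```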

## Description I successfully exported and converted a Detectron2 Mask R-CNN R50-FPN ONNX model into TensorRT and built an engine, but it detects nothing, or only very rarely detects anything, even if...

triaged
Module:Accuracy
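When a built engine produces almost no detections, a useful first step is to confirm the ONNX model itself still detects, e.g. with ONNX Runtime, and then diff ONNX Runtime against TensorRT (Polygraphy's `polygraphy run model.onnx --trt --onnxrt` automates the comparison). A sketch; the model path, input name, shape, and preprocessing are placeholders:

```python
# Sanity-check the exported ONNX model with ONNX Runtime before blaming
# the TensorRT conversion. Input preprocessing must match the export.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("mask_rcnn.onnx",
                            providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Shape, dtype, and scaling must match what the Detectron2 export expects;
# a mismatch here commonly produces "no detections".
img = np.random.rand(1, 3, 800, 800).astype(np.float32)  # stand-in image

outputs = sess.run(None, {input_name: img})
for out, meta in zip(outputs, sess.get_outputs()):
    print(meta.name, out.shape)
```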

I first posted this issue at https://github.com/NVIDIA/TensorRT-Model-Optimizer/issues/159. I quantized a PyTorch BERT model using TensorRT-Model-Optimizer. Before quantization, I exported this model to TensorRT and there was only one layer ![Image](https://github.com/user-attachments/assets/55fcce34-f32e-4c2d-a7e6-71551a3876a9)...

triaged
Module:Quantization
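A hedged sketch of the TensorRT-Model-Optimizer quantization flow referenced above, assuming the modelopt package; the BERT checkpoint, config choice, and calibration loop are illustrative:

```python
# Insert and calibrate fake-quant nodes with TensorRT-Model-Optimizer; the
# quantized graph can then be exported to ONNX and built with TensorRT.
import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased"
).eval()

def forward_loop(m):
    # Feed a handful of representative batches to calibrate the quantizers;
    # random token ids stand in for a real calibration dataset.
    for _ in range(8):
        ids = torch.randint(0, 30522, (1, 128))
        m(input_ids=ids, attention_mask=torch.ones_like(ids))

model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)
```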