TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
## Description Hello! I have a pipeline that builds a TRT engine from a torch checkpoint, and it works fine with CUDA 11.4 and TensorRT-7.2.3.4-1.cuda11.1. When I tried to upgrade the GPU libraries (and...
Hello, after compiling TensorRT 8.4.2 from source and trying to convert my ONNX model to a TRT model on my Jetson Nano, I got the following error: > Internal Error...
## Description Getting this error: ``` Collecting tensorrt Using cached tensorrt-8.6.1.post1.tar.gz (18 kB) Preparing metadata (setup.py) ... done Building wheels for collected packages: tensorrt Building wheel for tensorrt (setup.py)...
## Description I have trained a TensorFlow model and converted it into ONNX and TRT models, but the results of the two models do not match at all. I use...
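A first step in diagnosing such a mismatch is to run both models on the same input and compare their outputs element-wise against absolute/relative tolerances. A minimal pure-Python sketch of that comparison (the output vectors and the tolerance value below are made-up assumptions, not data from the issue):

```python
# Compare two model output vectors element-wise and report the
# largest absolute and relative differences.
def max_diffs(ref, test):
    abs_diffs = [abs(r - t) for r, t in zip(ref, test)]
    rel_diffs = [d / (abs(r) + 1e-12) for d, r in zip(abs_diffs, ref)]
    return max(abs_diffs), max(rel_diffs)

# Hypothetical ONNX vs TRT outputs for the same input.
onnx_out = [0.10, 0.85, 0.05]
trt_out  = [0.11, 0.84, 0.05]

max_abs, max_rel = max_diffs(onnx_out, trt_out)
# A large max_abs / max_rel points at a conversion or precision
# problem rather than ordinary floating-point noise.
print(max_abs <= 0.02)
```

Narrowing down *where* the divergence starts (e.g. by comparing intermediate layer outputs) usually follows the same per-tensor comparison pattern.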
## Description Hi! https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-713/support-matrix/index.html lists aarch64 as a supported platform for 7.1.3. Yet, there seems to be no relevant tar file for it in the nvidia developer zone https://github.com/NVIDIA/TensorRT/tree/release/7.1. Your...
## Description I am trying to quantize a PyTorch model to INT8 to run with TensorRT. I have read these [docs](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/index.html), and am still unclear on whether I have to...
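For context on what INT8 quantization does numerically: symmetric per-tensor quantization maps FP32 values to 8-bit integers via a single scale derived from the tensor's absolute maximum. A minimal sketch of that scale/quantize/dequantize arithmetic (the example tensor is an illustrative assumption; this is the underlying math, not the toolkit's API):

```python
# Symmetric per-tensor INT8 quantization:
#   scale = amax / 127
#   q     = round(x / scale), clamped to [-127, 127]
#   deq   = q * scale
def quantize_int8(values):
    amax = max(abs(v) for v in values)
    scale = amax / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

vals = [-1.5, 0.0, 0.3, 1.5]
q, scale = quantize_int8(vals)
deq = dequantize(q, scale)

# Round-to-nearest bounds the per-element error by half a step.
print(all(abs(a - b) <= scale / 2 for a, b in zip(vals, deq)))
```

Calibration (or quantization-aware training) is essentially about choosing a good `amax` per tensor so this rounding error stays small on real activations.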
When I run the model with TensorRT 8.6.2 and CUDA 12.2 on a Jetson device, there are a lot of errors (Ubuntu 22.04, JetPack 6.0 R36.2, https://developer.nvidia.com/embedded/jetpack#collapseAllJetson): [E] [TRT]...
## Description I tried to use the C++ API to load the attached ONNX model but it fails with a segmentation fault (core dumped). Note: possibly related to https://github.com/NVIDIA/TensorRT/issues/3630, this...
NVRTC compilation failure: Aborted (core dumped). How can I solve this problem? (The attachment is the engine build log.) [Uploading log_3070.json…]() /usr/bin/trtexec --onnx=petr.onnx --saveEngine=petr.trt --fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp16:chw --verbose...
## Description I tried to convert grouding.onnx to TensorRT on GPU, but it fails with the error below. torch2onnx command: ``` caption = "the running dog ." #". ".join(input_text) input_ids...