TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
## Description I tried to run the TensorRT example sample_mnist under WSL2 and got incorrect output. This is the error log: &&&& RUNNING TensorRT.sample_mnist [TensorRT v8203] # ./sample_mnist [01/30/2022-16:45:10]...
## Description yolov5s QAT with pytorch-quantization, following https://github.com/maggiez0138/yolov5_quant_sample. The FP16 engine built from ONNX runs in 3 ms, while the QAT INT8 engine (QAT -> ONNX -> INT8) takes 4 ms. Why is INT8 slower than FP16? [onnx file download](https://drive.google.com/file/d/1Q1u81E0yLVrwHgazTN-l38ZyEFL78ggz/view) ## Environment **TensorRT Version**: 8.2 **NVIDIA GPU**: GeForce 3060 Ti **NVIDIA Driver Version**: 510...
Refer to https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization/examples 1. calibrate_quant_resnet18 2. finetune_quant_resnet18 3. export to ONNX. But the ONNX inference results differ from the PyTorch inference results: Mismatched elements: 980 / 1000 (98%) Max absolute...
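A mismatch report like the one above can be reproduced with a small NumPy check. This is a generic sketch, not part of the linked example: `pt_out` and `onnx_out` are hypothetical stand-ins for the PyTorch and ONNX Runtime output arrays.

```python
import numpy as np

def compare_outputs(pt_out, onnx_out, rtol=1e-3, atol=1e-5):
    """Count mismatched elements and the max absolute error,
    in the style of numpy.testing.assert_allclose's summary."""
    pt_out = np.asarray(pt_out, dtype=np.float64)
    onnx_out = np.asarray(onnx_out, dtype=np.float64)
    close = np.isclose(pt_out, onnx_out, rtol=rtol, atol=atol)
    return {
        "mismatched": int((~close).sum()),
        "total": int(close.size),
        "max_abs_err": float(np.abs(pt_out - onnx_out).max()),
    }

# Illustration: two logit vectors that disagree by a constant 0.01
a = np.zeros(1000)
b = a + 0.01
stats = compare_outputs(a, b)
print(f"Mismatched elements: {stats['mismatched']} / {stats['total']}")
print(f"Max absolute difference: {stats['max_abs_err']:.4f}")
```

Feeding the real PyTorch and ONNX Runtime outputs through such a check makes it easy to see whether the error is a uniform small drift (expected from fake-quant export) or a few wildly wrong elements (usually an export bug).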
I followed your sample that supports converting a Detectron2 model to TensorRT: https://github.com/NVIDIA/TensorRT/tree/main/samples/python/detectron2 I share my code on Colab here: https://colab.research.google.com/drive/1L7OcEkyetcqZeDB-TmiH0M85TOQ7ge6t#scrollTo=1zwdEdx8Em0K I am stuck at the conversion from Caffe to ONNX...
## Description ## Environment **TensorRT Version**: 8405 **NVIDIA GPU**: **NVIDIA Driver Version**: **CUDA Version**: **CUDNN Version**: **Operating System**: **Python Version (if applicable)**: **Tensorflow Version (if applicable)**: **PyTorch Version (if applicable)**:...
## Description ## Environment **TensorRT Version**: **NVIDIA GPU**: **NVIDIA Driver Version**: **CUDA Version**: **CUDNN Version**: **Operating System**: **Python Version (if applicable)**: **Tensorflow Version (if applicable)**: **PyTorch Version (if applicable)**: **Baremetal...
## Description ## Environment **TensorRT Version**: 8.4 **NVIDIA GPU**: 1080 Ti **NVIDIA Driver Version**: 470 **CUDA Version**: 11.0 **CUDNN Version**: **Operating System**: **Python Version (if applicable)**: **Tensorflow Version (if applicable)**: **PyTorch...
#assertionD:\MMbushu\mmdeploy\csrc\mmdeploy\backend_ops\tensorrt\batched_nms\trt_batched_nms.cpp,103
## Description #assertionD:\MMbushu\mmdeploy\csrc\mmdeploy\backend_ops\tensorrt\batched_nms\trt_batched_nms.cpp,103 ## Environment **TensorRT Version**: 8.2.3.0 **NVIDIA GPU**: 3060 **NVIDIA Driver Version**: **CUDA Version**: 11.1 **CUDNN Version**: 8.2.1 **Operating System**: **Python Version (if applicable)**: 3.8 **Tensorflow Version (if...
OS: Ubuntu 18.04 JetPack Version: 4.6.1 Cuda: 10.2 Cudnn: 8.2.1 Tensorrt: 8.0.1 Running `/usr/local/bin/cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=$(pwd)/out` followed by `make nvinfer_plugin -j$(nproc)` fails with `/home/kennethg01/TensorRT/plugin/efficientNMSPlugin/efficientNMSInference.cu:18:10: fatal error: cub/cub.cuh:...`
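One plausible cause, assuming nothing else in the toolchain is broken: the CUDA 10.2 toolkit on JetPack 4.x does not bundle the CUB headers (they ship with CUDA 11+), so `cub/cub.cuh` must come from the TensorRT OSS checkout or a separate clone. A hedged sketch of the workaround (paths are illustrative):

```shell
# Inside the TensorRT OSS checkout: fetch third-party dependencies,
# in case the submodules were never initialized.
git submodule update --init --recursive

# If cub/cub.cuh is still not found, clone CUB and point the CUDA
# compiler at it (the /tmp/cub path here is an assumption):
#   git clone https://github.com/NVIDIA/cub.git /tmp/cub
#   ... and add -DCMAKE_CUDA_FLAGS="-I/tmp/cub" to the cmake line.

cd build
/usr/local/bin/cmake .. -DGPU_ARCHS=53 \
    -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
    -DCMAKE_C_COMPILER=/usr/bin/gcc \
    -DTRT_BIN_DIR=$(pwd)/out   # $(pwd) avoids the nested-backtick problem
make nvinfer_plugin -j$(nproc)
```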
Using the tutorial Jupyter notebook https://github.com/NVIDIA/TensorRT/blob/main/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb with only two modifications: 1. In block 1: `resnet50 = models.resnet50(pretrained=True).eval()` -> `resnet50 = vit_b_16 = timm.create_model('vit_base_patch16_224', pretrained=True).eval()` 2. In block 6: `resnet50_gpu =...`