TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

Results 599 TensorRT issues

Signed-off-by: Cheng Hang # Description According to the current design, every module is by default compiled to TensorRT except those modules included in torch_executed_modules. However, in some cases, for example,...

component: lowering
component: core
WIP
component: api [Python]
No Activity
cla signed
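
For context, a minimal sketch of the current default behavior the PR description refers to, where everything is compiled to TensorRT except the module types listed in `torch_executed_modules`; the ResNet50 model and the `Linear` fallback below are illustrative assumptions, not taken from the PR:

```python
import torch
import torch_tensorrt
import torchvision.models as models

# Every module is compiled to TensorRT except the types listed in
# torch_executed_modules, which fall back to Torch execution.
model = models.resnet50(pretrained=True).eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
    torch_executed_modules=["torch.nn.modules.linear.Linear"],  # illustrative choice
)
```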

**Is your feature request related to a problem? Please describe.** Notebooks right now have a few issues including: - Notebooks must be run AOT to avoid triggering execution during documentation...

documentation
feature request

## Bug Description I'm using torch_tensorrt to try and quantize a pretrained ResNet50 model (roughly following the steps [here](https://nvidia.github.io/Torch-TensorRT/_notebooks/vgg-qat.html)), but I am getting a segmentation fault. I've tried running the...

bug
component: quantization
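
For reference, the int8 compile step from the QAT workflow the report follows looks roughly like the sketch below. It only shows the shape of the call: a plain ResNet50 stands in for the QAT-trained model, and on its own such a model would additionally need a PTQ calibrator for int8.

```python
import torch
import torch_tensorrt
import torchvision.models as models

# Stand-in for the QAT-trained ResNet50 from the report; shapes are assumptions.
model = models.resnet50(pretrained=True).eval().cuda()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224, device="cuda"))

trt_model = torch_tensorrt.compile(
    traced,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.int8},  # int8 kernels, as in the QAT notebook
)
```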

## ❓ Question I am getting a linking error when using `torch_tensorrt::ptq::make_int8_calibrator`. I am using the Windows build based on CMake, so I'm not sure if it's a problem with...

question
component: quantization
channel: windows

## Bug Description I tried to convert a scripted module with torch-tensorrt, but the module includes None as a constant, and it seems that is not supported. ## To Reproduce...

bug
component: lowering
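
A hypothetical minimal reproducer for this kind of graph, assuming the None constant comes from an optional attribute that is never set; this is an illustration, not the reporter's module:

```python
from typing import Optional

import torch
import torch_tensorrt

class WithNoneConst(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Scripting turns this unset optional into a prim::Constant with value None.
        self.bias: Optional[torch.Tensor] = None
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        y = self.linear(x)
        if self.bias is not None:
            y = y + self.bias
        return y

scripted = torch.jit.script(WithNoneConst().eval().cuda())
trt_model = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 8))],
)
```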

In order to fully support RNNs, we want to be able to convert recurrent subgraphs to TRT. To do so we must expand the capability of the compiler to recognize...

feature request
component: conversion
component: core
component: converters

## Bug Description The L1 loss between the torch fp32 model and the compiled torch_tensorrt model is too large. (base) root@VM-121-213-centos:/apdcephfs/share_1041553/kyikiwang/BasketDetect# python resnet_trt.py Using cache found in /root/.cache/torch/hub/pytorch_vision_v0.10.0 WARNING: [Torch-TensorRT] - Dilation not used...

bug
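
A sketch of the kind of fp32-vs-TensorRT comparison being reported, assuming the torchvision ResNet50 that the hub cache path in the log points at; the input shape and fp32 precision are assumptions:

```python
import torch
import torch_tensorrt

# Same hub tag as the cache path in the reported log.
model = torch.hub.load("pytorch/vision:v0.10.0", "resnet50", pretrained=True).eval().cuda()

x = torch.randn(1, 3, 224, 224, device="cuda")
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)

with torch.no_grad():
    ref = model(x)
    out = trt_model(x)
print("L1 loss:", torch.nn.functional.l1_loss(out, ref).item())
```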

## ❓ Question I'm trying to optimize hugging face's BERT Base uncased model using Torch-TensorRT, the code works after disabling full compilation (`require_full_compilation=False`), and the avg latency is ~10ms on...

question
performance
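
A sketch of the partial-compilation setup the question describes, assuming `bert-base-uncased` from `transformers` and a fixed 1x128 input; the fp16 precision and the `truncate_long_and_double` flag are assumptions added for illustration:

```python
import torch
import torch_tensorrt
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased", torchscript=True).eval().cuda()
input_ids = torch.randint(0, 30522, (1, 128), dtype=torch.int64, device="cuda")
attention_mask = torch.ones((1, 128), dtype=torch.int64, device="cuda")
traced = torch.jit.trace(model, (input_ids, attention_mask))

trt_model = torch_tensorrt.compile(
    traced,
    inputs=[
        torch_tensorrt.Input((1, 128), dtype=torch.int32),
        torch_tensorrt.Input((1, 128), dtype=torch.int32),
    ],
    enabled_precisions={torch.half},   # precision is an assumption
    truncate_long_and_double=True,     # int64 token ids are truncated to int32
    require_full_compilation=False,    # unsupported ops fall back to Torch
)
```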

## ❓ Question I'm trying to run a pretrained resnet50 model from torchvision.models. enabled_precisions is set to torch.half. Each time I load the same resnet50 torchscript, using the same input (which...

question
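
A sketch of the repeat-run check the question describes: compile the same TorchScript ResNet50 twice with identical settings and compare outputs on a fixed input. The file path is a placeholder assumption.

```python
import torch
import torch_tensorrt

# Placeholder path for the saved TorchScript ResNet50 from the question.
scripted = torch.jit.load("resnet50_scripted.ts").eval().cuda()
x = torch.randn(1, 3, 224, 224, device="cuda")

def build():
    return torch_tensorrt.compile(
        scripted,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        enabled_precisions={torch.half},
    )

with torch.no_grad():
    out_a = build()(x)
    out_b = build()(x)
# TensorRT tactic selection is timing-based, so two builds are not guaranteed
# to pick identical fp16 kernels; this prints how far apart the results are.
print("max abs diff between two builds:", (out_a - out_b).abs().max().item())
```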

## To Reproduce

```python
# https://github.com/bilibili/ailab/blob/main/Real-CUGAN/VapourSynth/upcunet_v3_vs.py
import torch
import torch_tensorrt
from torch import nn as nn
from torch.nn import functional as F
import os, sys
import numpy as np
from tqdm...
```

bug