
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

Results: 599 TensorRT issues

## TL;DR torch.export has a serde API and now supports custom object serialization (in our case, `torch.classes.tensorrt.Engine`). ## Goal(s) We should be able to serialize/deserialize the output of torch-trt compilation. ##...

Story
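A minimal sketch of the workflow this issue asks for, assuming the `torch.export` serde API (`torch.export.save` / `torch.export.load`) can round-trip the `torch.classes.tensorrt.Engine` custom objects embedded in the compiled graph; the model, shapes, and file name below are placeholders, not from the issue:

```python
import torch
import torch_tensorrt

# Placeholder model; any module the dynamo frontend supports would do.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval().cuda()
inputs = (torch.randn(1, 16, device="cuda"),)

# Compile with the dynamo frontend; TRT engines are embedded in the graph
# as torch.classes.tensorrt.Engine custom objects.
trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=list(inputs))

# Round-trip the compiled module through torch.export's serde API, which is
# exactly the serialization this issue wants supported.
ep = torch.export.export(trt_gm, inputs)
torch.export.save(ep, "trt_module.ep")
reloaded = torch.export.load("trt_module.ep")
print(reloaded.module()(*inputs).shape)
```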

## ❓ Question I converted the gfpgan model (https://github.com/TencentARC/GFPGAN) with torch_tensorrt, and I found torch_tensorrt is twice as fast as torch on a 3070. But on an A10 server, torch_tensorrt and...

question
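When comparing speedups across GPUs like this, asynchronous CUDA execution can easily skew the numbers. A small, hedged benchmarking helper (names are illustrative, not from the issue) that synchronizes before reading the clock:

```python
import time
import torch

def latency_ms(module, example_inputs, warmup=10, iters=100):
    """Average per-iteration latency in milliseconds.

    torch.cuda.synchronize() ensures we measure kernel execution time
    rather than just asynchronous launch overhead.
    """
    with torch.no_grad():
        for _ in range(warmup):
            module(*example_inputs)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            module(*example_inputs)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0
```

Running the same helper on the eager model and the compiled model on each GPU makes the 3070-versus-A10 comparison apples to apples.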

- `fx_converters.rst` should be `dynamo_converters.rst`
- The file needs to be added to [`index.rst`](https://github.com/pytorch/TensorRT/blob/4cffd6e4e85bd27542fef2dca179e13aec79b5aa/docsrc/index.rst) so it is reflected on the webpage
- Check the formatting in the generated `html`

No Activity

- Additionally, rename to `_ops_converters.py`
- Rename `ops_evaluators.py` to `_ops_evaluators.py`

No Activity

I was trying to compile the Hugging Face Llama 2 model using the following code: ```python import os import torch import torch_tensorrt import torch.backends.cudnn as cudnn from transformers import AutoModelForCausalLM, AutoTokenizer...

question
No Activity
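For reference, a hedged, self-contained sketch of the kind of compile call described above; `gpt2` stands in for the gated Llama 2 checkpoint, and whether the graph compiles fully or falls back to eager PyTorch depends on the converter coverage being discussed in the issue:

```python
import torch
import torch_tensorrt
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a placeholder for the gated meta-llama/Llama-2-7b-hf checkpoint.
name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval().cuda()

input_ids = tokenizer("Hello, world", return_tensors="pt").input_ids.cuda()

# Compile with the dynamo frontend; operators without converters are
# expected to fall back to eager PyTorch rather than fail outright.
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=[input_ids])

out = trt_model(input_ids)
```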

## ❓ Question The default Python log level is WARNING. Why does importing torch_tensorrt automatically set the log level to INFO? How can I set the log level back to WARNING? ```...

question
No Activity
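Two hedged ways to get back to WARNING, assuming the native logger is controlled via `torch_tensorrt.logging` and the Python-side messages go through a standard-library logger registered under the `torch_tensorrt` name:

```python
import logging
import torch_tensorrt

# Native (C++) logger used by the TorchScript frontend.
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Warning)

# Python-side messages, assuming they are emitted through a logger
# named "torch_tensorrt".
logging.getLogger("torch_tensorrt").setLevel(logging.WARNING)
```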

## ❓ Question When compiling the latest version of Torch-TensorRT from `origin/main` (`2.2.0.dev0+76de80d0`) on JetPack 5.1 using the latest locally compiled PyTorch (`2.2.0a0+a683bc5`) (so that I can use the latest v2...

question
No Activity

# Context In TensorRT, there are certain rules and restrictions regarding tensor I/O which are not entirely in line with those in Torch. For instance, outputs from TRT engines cannot...

feature request