TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
## Bug Description If you use `torch_executed_ops` to run an op in PyTorch, the runtime fails to set up the engine. ## To Reproduce Steps to reproduce the...
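For context, a minimal sketch of the kind of call the issue describes: passing `torch_executed_ops` to `torch_tensorrt.compile` so the listed ops stay in PyTorch while the rest of the graph is lowered to TensorRT. The model, input shape, and the specific op name are illustrative assumptions, not taken from the issue; running this requires a CUDA GPU with torch_tensorrt installed.

```python
import torch
import torch_tensorrt

class Net(torch.nn.Module):
    def forward(self, x):
        # illustrative graph: one op we will force back to PyTorch
        return torch.relu(x).sum(dim=1)

model = Net().eval().cuda()
inputs = [torch.randn(1, 3, 224, 224).cuda()]

# torch_executed_ops keeps the named ops out of the TRT engine,
# partitioning the graph into TRT segments + PyTorch segments.
trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=inputs,
    torch_executed_ops={"torch.ops.aten.sum.dim_IntList"},  # assumed op for illustration
)
```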
# Description A page that describes all the debugging tools and processes that we use to understand Torch-TensorRT compilation Fixes # (issue) ## Type of change Please delete options that...
It seems none of the tests in `py/torch_tensorrt/fx/test` are run in CI. It would be good to add them to CI.
# Description https://github.com/pytorch/TensorRT/issues/3501 Fixes # (issue) ## Type of change Please delete options that are not relevant and/or add your own. - Bug fix (non-breaking change which fixes an issue)...
We no longer need the distributed extra since we have the built in downloader. This should be removed. https://github.com/pytorch/TensorRT/blob/74df7126d61509076586f2a6252508fba7a08565/pyproject.toml#L104
## ❓ Question Will SAM2 be compatible with the Dynamo backend on JetPack 6.1/6.2? Are there any workarounds for the TensorRT version mismatch? ## What you have already tried Here...
## ❓ Question Is there a way to manually annotate quantization parameters so that they are respected throughout torch_tensorrt conversion (e.g. manually adding q/dq nodes, or specifying some tensor metadata) via...
## Bug Description I am trying to quantize already trained FP16 models to INT8 precision using torch_tensorrt and accelerate inference with TensorRT engines. However, during this process, I encountered several...
Bumps [transformers](https://github.com/huggingface/transformers) from 4.48.0 to 4.50.0. Release notes Sourced from transformers's releases. Release v4.50.0 New Model Additions Model-based releases Starting with version v4.49.0, we have been doing model-based releases, additionally...
When I use torch-tensorrt 2.4.0 to convert a quantized PT2 model to TRT, I get the error below. I wonder whether this will be supported in the future? Or, am I...