TensorRT
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
## Bug Description I can't install TensorRT with pip on Ubuntu 20.04 in a virtual environment. ## To Reproduce Steps to reproduce the behavior: 1. Install tensorrt, etc. https://pytorch.org/TensorRT/getting_started/installation.html#installation...
Signed-off-by: Dheeraj Peri # Description Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required...
## Bug Description Hi all, I noticed a strange behavior when trying to convert my custom model to INT8. Whenever I try to convert it to INT8 using Post Training...
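For context, a minimal sketch of the INT8 post-training-quantization path being described, assuming the 1.x-era `torch_tensorrt.ptq` API; the model, calibration data, and shapes below are placeholders rather than the reporter's actual setup, and the exact calibrator arguments may differ between releases.

```python
import torch
import torch_tensorrt

# Stand-in for the reporter's custom model: a small conv net.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, 3, padding=1)
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        self.fc = torch.nn.Linear(16, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = torch.jit.script(TinyNet().eval().cuda())

# Synthetic calibration data; in practice use a representative sample of real inputs.
calib_dataset = [(torch.randn(3, 224, 224), 0) for _ in range(64)]
calib_dataloader = torch.utils.data.DataLoader(calib_dataset, batch_size=8)

# Entropy calibrator over the calibration set.
calibrator = torch_tensorrt.ptq.DataLoaderCalibrator(
    calib_dataloader,
    cache_file="./calibration.cache",
    use_cache=False,
    algo_type=torch_tensorrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
    device=torch.device("cuda:0"),
)

# Request INT8 kernels and pass the calibrator through the compile spec.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((8, 3, 224, 224))],
    enabled_precisions={torch.int8},
    calibrator=calibrator,
)
```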
# Description Moves master to test only against nightly, in line with the new branching strategy. Fixes # (issue) ## Type of change Please delete options that are not relevant and/or add...
Add a "no_conversion" option to torch-tensorrt which when enabled will replace the standard conversion and engine insertion with an embedded function call for each convertible segment. This allows inspection of...
## Bug Description It appears that modules with multiple outputs no longer compile when using dynamic input shapes in v1.2.0. The following example works in **v1.1.1** but fails in **v1.2.0**...
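A minimal sketch of the pattern being reported, assuming a two-output TorchScript module compiled with a dynamic-shape `Input` spec; the module body and shapes are placeholders rather than the reporter's actual example.

```python
import torch
import torch_tensorrt

# Toy module with two outputs, standing in for the reporter's model.
class TwoOutputs(torch.nn.Module):
    def forward(self, x):
        return x + 1.0, x * 2.0

model = torch.jit.script(TwoOutputs().eval().cuda())

# Dynamic input shape: batch dimension ranges from 1 to 16.
dyn_input = torch_tensorrt.Input(
    min_shape=(1, 3, 224, 224),
    opt_shape=(8, 3, 224, 224),
    max_shape=(16, 3, 224, 224),
    dtype=torch.float32,
)

# Reported to succeed on v1.1.1 and fail on v1.2.0 for multi-output modules.
trt_model = torch_tensorrt.compile(model, inputs=[dyn_input])
out1, out2 = trt_model(torch.randn(4, 3, 224, 224).cuda())
```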
## Bug Description The EfficientNet example notebook does not compile to FP16. ## To Reproduce Steps to reproduce the behavior: Just open the EfficientNet notebook and try to run all cells. It...
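For reference, the FP16 compile step the notebook exercises looks roughly like the sketch below, with a torchvision EfficientNet as a stand-in (the notebook itself may load the model differently).

```python
import torch
import torch_tensorrt
import torchvision.models as models

# Stand-in for the notebook's EfficientNet.
model = models.efficientnet_b0().eval().cuda()

# Allow FP16 kernels; this is the compile step the notebook fails on.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32, torch.half},
)

out = trt_model(torch.randn(1, 3, 224, 224).cuda())
```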
Add converter support for aten::maximum/minimum. Refactor element_wise ops to reduce repeated code. Fixes # (issue) ## Type of change Please delete options that are not relevant and/or add your own....
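A quick way to exercise the new converters from Python would be something like the sketch below, assuming the elementwise `torch.maximum`/`torch.minimum` calls lower to `aten::maximum`/`aten::minimum` in the TorchScript graph.

```python
import torch
import torch_tensorrt

# Small module whose graph contains aten::maximum and aten::minimum.
class MinMax(torch.nn.Module):
    def forward(self, x, y):
        return torch.maximum(x, y), torch.minimum(x, y)

model = torch.jit.script(MinMax().eval().cuda())

inputs = [
    torch_tensorrt.Input((4, 8), dtype=torch.float32),
    torch_tensorrt.Input((4, 8), dtype=torch.float32),
]

# With the converters in place, both ops should run inside the TensorRT engine
# rather than falling back to Torch.
trt_model = torch_tensorrt.compile(model, inputs=inputs)
a, b = torch.randn(4, 8).cuda(), torch.randn(4, 8).cuda()
print(trt_model(a, b))
```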
🐛 [Bug] TensorRT getting stuck without any debug information on the current status of the conversion
## Bug Description When trying to convert a Torch model to TensorRT, the process gets stuck without showing any debugging information about what is going on. CPU shows...
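When a compile appears to hang, raising the library's log level before calling compile at least shows which stage and layer are being processed. A minimal sketch, assuming the 1.x Python logging API (this interface may have moved in newer releases) and a torchvision model as a placeholder:

```python
import torch
import torch_tensorrt
import torchvision.models as models

# Most verbose logging: the conversion prints per-layer progress, which helps
# locate where the process stalls.
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Debug)

model = models.resnet18().eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float32},
)
```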
## Bug Description When using the latest code to test the BERT model after QAT quantization, the following error occurs and the model cannot be run (see the screenshot in the original issue). Error corresponds to...