TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

Results: 599 TensorRT issues, sorted by recently updated

For the example below, how do I save the compiled model? backend = "torch_tensorrt" tp_model = torch.compile( tp_model, backend=backend, options={ "truncate_long_and_double": True, "enabled_precisions": {torch.float32, torch.float16}, "use_python_runtime": True, "min_block_size": 1, },...

question
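
For reference, a minimal sketch of one way to end up with a saveable artifact instead of the opaque callable returned by `torch.compile(backend="torch_tensorrt")`: compile through the dynamo IR and persist the result with `torch_tensorrt.save`. This assumes a Torch-TensorRT 2.x release; the tiny stand-in model, shapes, file name, and the exact keyword names (which mirror the options quoted in the issue) may differ across versions.

```python
import torch
import torch_tensorrt

# Stand-in model; any nn.Module works here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval().cuda()
inputs = [torch.randn(1, 3, 224, 224, device="cuda")]

# Compiling through the dynamo IR returns a module that can be serialized,
# unlike the callable produced by torch.compile(backend="torch_tensorrt").
trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=inputs,
    enabled_precisions={torch.float32, torch.float16},
    truncate_long_and_double=True,
    min_block_size=1,
)

# Persist as an ExportedProgram and reload it later without recompiling.
torch_tensorrt.save(trt_model, "trt_model.ep", inputs=inputs)
reloaded = torch.export.load("trt_model.ep").module()
```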

Add support for the JetPack 6.2 build. Currently JetPack 6.2 has: CUDA 12.6, Python 3.10, TensorRT 10.3, DLFW 24.08 (PyTorch 2.6.0). Jetson now distributes wheels on https://pypi.jetson-ai-lab.dev/; JetPack 6.2 wheels: https://pypi.jetson-ai-lab.dev/jp6/cu126

documentation
component: build system
cla signed
needs-release-cherrypick

## ❓ Question How do you export a Triton kernel together with a model to a serialized engine that can be run in C++? ## What you have already tried Read through...

question
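
Not an answer to the Triton-kernel part, but for the general "run it from C++" half, a hedged sketch of the usual TorchScript route: compile with the TorchScript frontend so the result is a `ScriptModule`, save it with `torch.jit.save`, and load it from libtorch. The stand-in model, shapes, and file name below are placeholders, and this assumes the TorchScript frontend (`ir="ts"`) is available in the installed Torch-TensorRT build.

```python
import torch
import torch_tensorrt

# Stand-in model; the real model goes here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval().cuda()
inputs = [torch_tensorrt.Input((1, 3, 224, 224))]

# ir="ts" uses the TorchScript frontend, so the compiled result is a
# torch.jit.ScriptModule that libtorch can load with no Python involved.
trt_ts = torch_tensorrt.compile(
    model, ir="ts", inputs=inputs, enabled_precisions={torch.half}
)
torch.jit.save(trt_ts, "trt_model.ts")
```

On the C++ side the saved file can typically be loaded with `torch::jit::load("trt_model.ts")` after linking against libtorch and the Torch-TensorRT runtime library.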

**Is your feature request related to a problem? Please describe.** We need faster turnaround time on PRs being validated in CI. Right now builds take 20mins + 40min - 1.5hrs...

feature request
infrastructure
Story: Infrastructure Upgrades

![image](https://github.com/user-attachments/assets/7669e736-170c-4da2-a3fb-dcdf282c2607) As you can see, the word `overview` in the sidebar is a bit misleading; it would help users if it were more descriptive.

documentation
cla signed

## Bug Description The presence of a BUILD file in the released wheel breaks use of the package in Bazel with rules_python (https://github.com/bazel-contrib/rules_python/issues/2780). ## To Reproduce Steps to reproduce the behavior: - See...

bug

The example script fx/quantized_resnet_test.py in the Torch-TensorRT repository fails to execute due to its use of the deprecated EXPLICIT_PRECISION attribute in the TensorRT Python API. This attribute is no longer...

bug
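
A minimal sketch of the kind of guard that keeps such scripts running across TensorRT versions: only request `EXPLICIT_PRECISION` when the installed Python API still defines it, since the flag was deprecated and later dropped. This is a generic workaround, not the repository's fix for the example script.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# EXPLICIT_PRECISION was deprecated and later removed from the TensorRT Python
# API, so probe for it instead of referencing it unconditionally.
flags = 0
if hasattr(trt.NetworkDefinitionCreationFlag, "EXPLICIT_PRECISION"):
    flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION)
network = builder.create_network(flags)
```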

## Bug Description I trained an ssdlite320_320 mobilenetv3 large model on the WiderFace dataset for a face detection task. Here is what I received when running `torch_tensorrt.compile()`: > (capstone) jetson@jetson-desktop:~/FaceRecognitionSystem/jetson/backend/python$ python test.py...

bug

I am exporting a model that uses torch.Categorical().sample to sample from the logits. I currently have a fixed-length loop within a torch.compile graph that includes sampling from the logits to choose...

feature request
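
For context, a toy sketch of the pattern the request describes: a fixed-length generation loop under `torch.compile` with the `torch_tensorrt` backend, sampling from the logits at every step. The tiny embedding/linear "model", the shapes, and the use of `torch.multinomial` in place of `torch.distributions.Categorical(...).sample()` are all illustrative assumptions; whether this lowers cleanly today is exactly what the feature request is about.

```python
import torch
import torch_tensorrt  # makes the "torch_tensorrt" compile backend available

# Placeholder "model": maps the running token sequence to next-step logits.
emb = torch.nn.Embedding(32, 32).cuda()
head = torch.nn.Linear(32, 32).cuda()

@torch.compile(backend="torch_tensorrt", options={"min_block_size": 1})
def generate(tokens: torch.Tensor, steps: int = 8) -> torch.Tensor:
    for _ in range(steps):                      # fixed-length loop, as in the issue
        logits = head(emb(tokens)).mean(dim=1)  # (batch, vocab)
        probs = torch.softmax(logits, dim=-1)
        # Stand-in for torch.distributions.Categorical(probs=probs).sample().
        nxt = torch.multinomial(probs, num_samples=1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens

out = generate(torch.zeros(2, 1, dtype=torch.long, device="cuda"))
```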