TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

Results: 695 TensorRT issues, sorted by recently updated

According to the doc https://docs.pytorch.org/TensorRT/user_guide/mixed_precision.html, we can convert a model with this project when the parameter precisions are explicitly specified in the code. But when I train a model with torch...
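For reference, the linked mixed-precision guide compiles models whose parameter dtypes are set explicitly in the code. Below is a minimal sketch of that pattern; the module, layer sizes, and input shape are illustrative (not from the issue), and it assumes the `use_explicit_typing` option described in that guide.

```python
import torch
import torch_tensorrt

# Toy module with explicitly mixed parameter precisions (illustrative only).
class MixedPrecisionModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 10).half()   # FP16 weights
        self.fc2 = torch.nn.Linear(10, 5).float()   # FP32 weights

    def forward(self, x):
        x = self.fc1(x.half())
        return self.fc2(x.float())

model = MixedPrecisionModel().cuda().eval()
inputs = [torch.randn(1, 10, device="cuda")]

# use_explicit_typing=True asks TensorRT to respect the dtypes written in the
# model instead of autotuning a precision per layer.
trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=inputs,
    use_explicit_typing=True,
)
```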

Bug: FAILED models/test_models.py::test_resnet18_torch_exec_ops - AssertionError: ... TRT 10.13.3.9, PyTorch 2.10.0a0+b558c986e8. Error:
```
2025-10-11T05:44:58.858100Z 01O =================================== FAILURES ===================================
2025-10-11T05:44:58.858110Z 01O _________________________ test_resnet18_torch_exec_ops _________________________
2025-10-11T05:44:58.858130Z 01O
2025-10-11T05:44:58.858140Z 01O ir = 'dynamo'
2025-10-11T05:44:58.858150Z...
```

bug

```
FAILED conversion/test_index_put_aten.py::TestIndexPutConverter::test_bool_mask_test - AssertionError
FAILED conversion/test_index_aten.py::TestIndexConstantConverter::test_index_constant_bool_mask_0_mask_index_three_dim - AssertionError
FAILED conversion/test_index_aten.py::TestIndexConstantConverter::test_index_constant_bool_mask_1_mask_index_two_dim - AssertionError
FAILED conversion/test_index_aten.py::TestIndexConstantConverter::test_index_constant_bool_mask_2_mask_index_multi_axis - AssertionError
FAILED conversion/test_reshape_aten.py::TestReshapeConverter::test_reshape_0 - torch_tensorrt.dynamo.conversion._TRTInterpreter.UnsupportedOperatorException: Conversion of function torch._ops.aten.aten::view not currently supported!
```

bug

## Bug Description
https://github.com/pytorch/TensorRT/actions/runs/18405313050/job/52459046057#step:13:1777
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
## Environment
> Build information about Torch-TensorRT can be found by turning...

bug

Torch distributed data parallel accelerate GPT2 example failing
```
cd examples/distributed_inference
CUDA_VISIBLE_DEVICES=0 accelerate launch data_parallel_gpt2.py
accelerate launch data_parallel_gpt2.py
```
torch 2.9.0.dev20250821+cu129
torch_tensorrt 2.10.0.dev0+0
accelerate 1.10.1

bug

Bug:
FAILED llm/test_llm_models.py::test_llm_decoder_layer[FP16] - AssertionError
FAILED llm/test_llm_models.py::test_llm_decoder_layer[BF16] - AssertionError
TRT 10.13.3.9, PyTorch 2.10.0a0+b558c986e8. Passes on A100. Error:
```
2025-10-11T04:31:34.722267Z 01O ERROR torch_tensorrt [TensorRT Conversion Context]:logging.py:22 Error Code: 9: Skipping tactic...
```

bug

## Environment
Libtorch 2.5.0.dev (latest nightly) (built with CUDA 12.4)
CUDA 12.4
TensorRT 10.1.0.27
PyTorch 2.4.0+cu124
Torch-TensorRT 2.4.0
Python 3.12.8
Windows 10
## Code
```
import torch
import torch_tensorrt
model...
```

bug

# Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change....

component: tests
component: lowering
component: conversion
component: core
component: converters
component: build system
component: api [Python]
component: runtime
cla signed
component: dynamo

# Description
Support pre-quantized HF models and a post-training quantization (PTQ) option for [run_llm.py](https://github.com/pytorch/TensorRT/blob/main/tools/llm/run_llm.py)
Fixes # (issue)
## Type of change
- New feature (non-breaking change which adds functionality)
# Checklist:...

component: tests
cla signed
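As background for the PTQ option mentioned in the PR above, here is a minimal sketch of a post-training quantization step using NVIDIA ModelOpt, the library used in other Torch-TensorRT quantization examples; the model name, prompts, and FP8 config are placeholders, and this is not necessarily how the PR itself implements it.

```python
import torch
import modelopt.torch.quantization as mtq  # NVIDIA ModelOpt (nvidia-modelopt)
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is only a placeholder; run_llm.py targets its own set of HF models.
name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).cuda().eval()

# A small calibration pass over representative prompts (contents illustrative).
prompts = ["The quick brown fox", "Post-training quantization calibrates activation ranges"]

def forward_loop(m):
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").input_ids.cuda()
        m(ids)

# Apply FP8 post-training quantization; the quantized module could then be
# exported and compiled with torch_tensorrt as in the repo's other PTQ examples.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)
```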