demuxin

13 issue results from demuxin

## Description I customized TensorRT's Col2Im plugin, recompiled the TensorRT 8.5 source code, and generated a new nvinfer_plugin library. ## Environment **TensorRT Version**: 9.2.0.5 **NVIDIA GPU**: GeForce GTX 1080 Ti...

ONNX
triaged
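For the plugin rebuild described above, one quick check is whether the recompiled library is actually the one TensorRT picks up. A minimal Python sketch of preloading a custom nvinfer_plugin build and confirming a plugin creator is registered; the library path and the plugin name/version ("Col2Im", "1") are assumptions for illustration:

```python
import ctypes
import tensorrt as trt

# Preload the recompiled plugin library so its creators self-register.
ctypes.CDLL("/path/to/custom/libnvinfer_plugin.so", mode=ctypes.RTLD_GLOBAL)

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # register the standard plugin set

# Look up the custom creator by (name, version, namespace) -- assumed values.
registry = trt.get_plugin_registry()
creator = registry.get_plugin_creator("Col2Im", "1", "")
print("Col2Im creator found:", creator is not None)
```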

## Description I customized TensorRT's Col2Im plugin, recompiled the TensorRT 8.5 source code, and generated a new nvinfer_plugin library. This is the LayerNormalization node information in the model: ![image](https://github.com/NVIDIA/TensorRT/assets/19351259/ef132e1d-23ef-4ff0-8ec5-33fc7ee9aa4b) So...

triaged
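As a quick way to dump the same node information shown in the screenshot above, a small sketch using the onnx package; the file name "model.onnx" is a placeholder:

```python
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "LayerNormalization":
        # Print the node name plus each attribute (e.g. axis, epsilon).
        attrs = [(a.name, onnx.helper.get_attribute_value(a)) for a in node.attribute]
        print(node.name, attrs)
```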

@WongKinYiu Hi, I find that the results of the original ONNX model and the end2end ONNX model are different. Is this normal, and how can I solve this issue?
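One way to quantify "different" is to run both models on the same input with onnxruntime and compare the outputs. In this sketch the file names and input shape are placeholders, and an end2end export may append NMS, so shapes can legitimately differ as well as values:

```python
import numpy as np
import onnxruntime as ort

# Same random input for both models (shape is an assumption).
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

sess_a = ort.InferenceSession("model.onnx")
sess_b = ort.InferenceSession("model-end2end.onnx")

out_a = sess_a.run(None, {sess_a.get_inputs()[0].name: x})
out_b = sess_b.run(None, {sess_b.get_inputs()[0].name: x})

for i, (a, b) in enumerate(zip(out_a, out_b)):
    if a.shape == b.shape:
        print(i, "max abs diff:", np.abs(a - b).max())
    else:
        print(i, "shape mismatch:", a.shape, b.shape)
```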

## ❓ Question I used Torch-TensorRT to compile a TorchScript model in C++. When compiling or loading the Torch-TensorRT model, many warnings are displayed. ``` WARNING: [Torch-TensorRT] - Detected this engine...

question
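For reference, this is roughly the Python equivalent of the TorchScript compile-and-save flow above (the issue itself uses the C++ API); the model file name, input shape, and precision here are assumptions:

```python
import torch
import torch_tensorrt

# Load the scripted model and move it to the GPU.
ts_module = torch.jit.load("model.ts").eval().cuda()

# Compile via the TorchScript front end.
trt_module = torch_tensorrt.compile(
    ts_module,
    ir="torchscript",
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)

# The TorchScript path returns a ScriptModule, so it can be saved and later
# reloaded with torch.jit.load -- the point at which the engine warnings show up.
torch.jit.save(trt_module, "model_trt.ts")
```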

## ❓ Question I have a ViT model for object detection. The model's inference speed in the TensorRT 8.5 environment is 190 ms per frame. However, when I updated to TensorRT...

question

## ❓ Question When I compile the SwinTransformer model using Torch-TensorRT, an error appears: ``` terminate called after throwing an instance of 'c10::Error' what(): 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615,...

question

## Bug Description When I use the code below to compile the TorchScript model, a segmentation fault occurs. I compiled the Torch-TensorRT source code in debug mode and ran the...

bug

## Description I tried to convert my ONNX model into a TRT engine, and I got this error: ``` [06/11/2024-07:46:17] [E] [TRT] ModelImporter.cpp:882: While parsing node number 23810 [Add -> "boxes"]:...
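To get the full parser message for node 23810, one option is to repeat the parse step in Python and print every recorded error. A minimal sketch, assuming the model file is named model.onnx:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    ok = parser.parse(f.read())

# On failure, dump every parser error (node index, operator, description).
if not ok:
    for i in range(parser.num_errors):
        print(parser.get_error(i))
```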

## Description I am converting [this model](https://github.com/jozhang97/DETA) from ONNX to a TensorRT engine. When I ran a test using the `polygraphy run --trt model.onnx` command, an error appeared: ``` [E]...

triaged

## Description My model has a Split operator that is expected to divide the tensor into sub-tensors according to the specified sizes. As depicted in this setting: ![image](https://github.com/NVIDIA/TensorRT/assets/19351259/5230b6be-d967-4975-8faa-bd3abe1d5589) This is the split...

triaged
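For context on the Split setting above, a small sketch of how an opset-13+ ONNX Split node carries the per-output sizes: they arrive as a second input tensor rather than an attribute, while the axis stays an attribute. The tensor names, sizes, and axis below are made up for illustration:

```python
import onnx
from onnx import helper, TensorProto

# Explicit per-output sizes supplied as an initializer (2 + 3 + 5 = 10).
split_sizes = helper.make_tensor("split_sizes", TensorProto.INT64, dims=[3], vals=[2, 3, 5])

node = helper.make_node(
    "Split",
    inputs=["x", "split_sizes"],
    outputs=["y0", "y1", "y2"],
    axis=1,
)

graph = helper.make_graph(
    [node],
    "split_example",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 10, 4])],
    outputs=[
        helper.make_tensor_value_info("y0", TensorProto.FLOAT, [1, 2, 4]),
        helper.make_tensor_value_info("y1", TensorProto.FLOAT, [1, 3, 4]),
        helper.make_tensor_value_info("y2", TensorProto.FLOAT, [1, 5, 4]),
    ],
    initializer=[split_sizes],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
```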