Results: 11 issues by lixiaolx

**Is your feature request related to a problem? Please describe.** When using the -conv-fprop of CUTLASS to perform the conv operation, it was found that across the entire kernel, the...

feature request
inactive-30d

**Describe the bug** With timm 0.6.7, using timm's resnet50: the model is first converted to ONNX and then quantized with TensorRT's PTQ; the quantized model is validated on the val...

bug

## Bug Description Performed int8 quantization on resnet50 following the test demo ( https://github.com/pytorch/TensorRT/tree/master/tests/py/ptq ) and compared the inference result with the original FP32; the accuracy is quite different when run...

bug
component: quantization

## Description After using onnx-tensorrt to complete int8 quantization of the resnet18 model, I found that the performance was the same as fp16 (batch size = 64). I would...

## Bug Description When using the latest code to test the BERT model after QAT quantization, the following error occurs and the model cannot be run. ![image](https://user-images.githubusercontent.com/17673134/189883821-ef584b8f-f934-4d8d-93a4-6a23cf9faced.png) The error corresponds to...

bug
component: quantization

## ❓ Question When using torch-trt to test BERT's QAT-quantized model ( https://zenodo.org/record/4792496#.YxGrdRNBy3J ), I encountered many FakeTensorQuantFunction nodes in the pass, which in turn triggered many...

question
component: quantization

## Description When using resnet50 from the timm model library, accuracy degrades after PTQ int8 quantization. In the same environment, the accuracy before and after quantization on torchvision...

triaged

### Description When using FasterTransformer to run the TRT test of the ViT model, nsys traces show that the kernel used and the hardware SM...

bug

I would like to ask: how can the ssd-mobilenet model provided by MLCommons be used in PyTorch and converted into the corresponding jit model? Could you provide a demo or...
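For a question like the one above, the usual PyTorch route is `torch.jit.trace`: run the model once on an example input and record the operations into a serializable ScriptModule. A minimal sketch follows; `TinyNet` is a hypothetical stand-in for the real ssd-mobilenet (which is not reproduced here), and the file name is an assumption.

```python
# Hedged sketch: tracing a model into a TorchScript (jit) module and
# round-tripping it through save/load. TinyNet is a hypothetical
# stand-in for ssd-mobilenet, for illustration only.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
example = torch.randn(1, 3, 32, 32)       # example input drives the trace
traced = torch.jit.trace(model, example)  # record ops into a ScriptModule
traced.save("tiny_net_traced.pt")         # assumed output path
reloaded = torch.jit.load("tiny_net_traced.pt")
print(reloaded(example).shape)            # torch.Size([1, 8, 32, 32])
```

Note that tracing records only the path taken for the example input; a model with data-dependent control flow would need `torch.jit.script` instead.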

Can the onnx (from TF) and torch (from Hugging Face) versions of the corresponding model be matched under int8? The operators of the last few layers of the onnx model in the current int8 mode...