Zero Zeng
Can you provide a reproduction sample here? It would also be good to try our latest release.
Could you please check the onnx accuracy first? e.g. with onnxruntime.
Or it can be quickly checked with polygraphy: `polygraphy run model.onnx --trt --onnxrt` to see whether the accuracy matches between onnxruntime and TRT. Sometimes the mismatch is introduced during the ONNX export itself.
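A slightly fuller sketch of that polygraphy check, assuming polygraphy and TensorRT are installed and `model.onnx` is your exported model (the tolerance values here are illustrative, not required):

```shell
# Compare TensorRT output against onnxruntime on the same random inputs.
# --atol/--rtol loosen the comparison tolerances if FP32 bit-exactness is too strict.
polygraphy run model.onnx --trt --onnxrt --atol 1e-4 --rtol 1e-4

# If the comparison fails, inspect the model to find where outputs diverge:
polygraphy inspect model model.onnx --show layers
```

If onnxruntime alone already produces wrong results, the problem is in the export step rather than in TRT.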
Dropout is disabled during inference so I don't think you need it.
I don't know much about it, @pranavm-nvidia could you please kindly help :-)
Could you please try version `8.6.1.post1`?
Checking further internally.
Hi all, is it possible for you to provide minimal reproduction steps so that I can reproduce this and file an internal bug?
> [02/24/2023-13:22:21] [E] [TRT] (Unnamed Layer* 187) [Convolution]: kernel weights has count 648 but 5760 was expected
> [02/24/2023-13:22:21] [E] [TRT] (Unnamed Layer* 187) [Convolution]: count of 32640 weights in...
How about just exporting the model to ONNX and using our ONNX parser?
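The suggested path can be sketched as follows, assuming a PyTorch model and the stock `trtexec` tool; the filenames and shapes are placeholders, not from the original thread:

```shell
# Build a TensorRT engine directly from the ONNX file via the ONNX parser.
# --saveEngine writes the serialized engine; --fp16 is optional.
trtexec --onnx=model.onnx --saveEngine=model.engine

# Verbose output can help locate which layer a parse/build failure comes from:
trtexec --onnx=model.onnx --verbose
```

Going through ONNX avoids hand-writing the network with the layer API, which is where weight-count mismatches like the one quoted above often originate.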