Zero Zeng
Also make sure you feed the same input shapes to TRT and PyTorch; disabling dynamic shapes is also fairer to TRT.
> But still batch of 1 is 2.5 times quicker than regular pytorch, but a batch of 8 is not even 2 times faster than pytorch

That usually means the GPU is...
Increase the batch size or use multiple threads?
Could you please try validating the output with `polygraphy run model.onnx --trt --fp16 --onnxrt`? See https://github.com/NVIDIA/TensorRT/tree/main/tools/Polygraphy
I saw a similar issue when opening my Cisco AnyConnect client, so I searched for the error and found this. I have a workaround (WAR): run `WEBKIT_DISABLE_DMABUF_RENDERER=1 /opt/cisco/anyconnect/bin/vpnui` or `export WEBKIT_DISABLE_DMABUF_RENDERER=1`...
I guess your ONNX model only supports batch size 1. What is the model's input shape? What error do you see?
Could you please provide a reproduction for us? Maybe something is broken. cc @pranavm-nvidia
Use [IShapeLayer](https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_shape_layer.html)?
TRT's workflow is: 1. Create the network with dynamic axes. 2. Specify a dynamic-shapes optimization profile. 3. Build the engine.
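The three steps above can be sketched with the TensorRT Python API. This is a minimal sketch, not the exact code from this thread: the ONNX path, the input-tensor name `"input"`, and the `(N, 3, 224, 224)` shapes are placeholder assumptions; substitute your model's actual input name and dimensions (the dynamic axis must be `-1` in the ONNX model).

```python
# Sketch of TRT's dynamic-shape build workflow, assuming the TensorRT 8.x
# Python API. Tensor name "input" and the 3x224x224 shapes are placeholders.
def build_dynamic_engine(onnx_path):
    import tensorrt as trt  # assumes the tensorrt package is installed

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # Step 1: create the network with an explicit batch axis so the
    # batch dimension can be dynamic.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    # Step 2: specify a dynamic-shapes optimization profile covering the
    # range of batch sizes the engine must accept.
    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    profile.set_shape(
        "input",                # placeholder input-tensor name
        min=(1, 3, 224, 224),   # smallest allowed shape
        opt=(8, 3, 224, 224),   # shape TRT tunes kernels for
        max=(32, 3, 224, 224),  # largest allowed shape
    )
    config.add_optimization_profile(profile)

    # Step 3: build the (serialized) engine.
    return builder.build_serialized_network(network, config)
```

At runtime you then set the actual input shape on the execution context (e.g. `context.set_input_shape`) before each inference call.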
Just replace TRT with the above package; there is also a README on how to install it. Please uninstall the pre-installed one first.