Results 128 comments of HopeJW

Could you provide your running environment details, such as the TensorRT version, JetPack version, CUDA version, and the reported inference latency?

`TensorRT-8.6, cuda-11.4 and cudnn8.6` may be the best choice for you.

1. Is 8GB of memory enough? If not, how much is needed? -> 8GB would be enough for inference. But it would help if you considered removing some of...

Sorry, `libspconv.so` currently does not work on GPUs with a compute capability below sm_80.
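If you want to gate the SCN path at runtime, a minimal sketch of that check looks like the following. The helper name and the `(major, minor)` tuple convention are assumptions for illustration; in practice you could obtain the tuple from something like `torch.cuda.get_device_capability()`. Only the sm_80 threshold comes from the comment above.

```python
# Hypothetical helper: libspconv.so requires sm_80 or newer (threshold from
# the comment above; how you query the capability is up to your stack).
def supports_libspconv(major: int, minor: int, min_cc: tuple = (8, 0)) -> bool:
    """Return True if the GPU compute capability (major, minor) meets min_cc."""
    return (major, minor) >= min_cc

# Example: A100 is sm_80 (supported), Jetson Xavier is sm_72 (not supported).
print(supports_libspconv(8, 0))  # True
print(supports_libspconv(7, 2))  # False
```

Tuple comparison in Python is lexicographic, so `(8, 6) >= (8, 0)` and `(7, 2) < (8, 0)` behave as expected without any extra arithmetic.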

> @hopef When I export-scn from a "non ptq model" and try to load it using `load_engine_from_onnx` in https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/87fb0cc6fcf38d0cf998bf0cdcbd039e6732d928/CUDA-BEVFusion/src/bevfusion/lidar-scn.cpp#L38C1-L39C1 > > I get the error > > ```shell > [libprotobuf...

Hi sandeepnmenon, I can't see a bias on the SparseConvolution layer in your ONNX model. This may be the root cause.

First, libspconv supports SparseConvolution layers without a bias. Second, the bias error is introduced by the ONNX parser, so you can handle it in code. Is the libspconv library tied...
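To spot this case before loading the engine, you could scan the exported graph for SparseConvolution nodes that lack a bias input. The sketch below operates on `(name, op_type, inputs)` triples as you would read them off `onnx_model.graph.node` with the `onnx` package; the 3-input convention (features, weight, bias) is an assumption about how the exporter lays out SparseConvolution inputs, and all node names are made up for the example.

```python
# Hypothetical checker for the missing-bias case discussed above.
def nodes_missing_bias(nodes):
    """Return names of SparseConvolution nodes that have no bias input.

    `nodes` is a list of (name, op_type, inputs) triples, mirroring the
    fields of onnx NodeProto entries.
    """
    missing = []
    for name, op_type, inputs in nodes:
        # Assumed input order: (features, weight, bias); fewer than 3
        # inputs means the bias tensor was not exported.
        if op_type == "SparseConvolution" and len(inputs) < 3:
            missing.append(name)
    return missing

# Example graph dump: the second conv was exported without a bias tensor.
graph = [
    ("conv0", "SparseConvolution", ["x", "conv0.weight", "conv0.bias"]),
    ("conv1", "SparseConvolution", ["x1", "conv1.weight"]),
    ("relu0", "Relu", ["x2"]),
]
print(nodes_missing_bias(graph))  # ['conv1']
```

If the check reports any nodes, you can either re-export with a zero bias attached or patch the parsed model in code, as noted above.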

The latest version (v1.1.0) can handle the bias error.

@sangjinpark97 Due to the interface update in version 1.1, a few code changes were required to adapt to the new version. You can take a look at the test code...