Results: 128 comments of HopeJW

Sorry, the root cause is a layernorm bug in TensorRT when building head.bbox.onnx. I have posted a new solution [here](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/master/CUDA-BEVFusion/src/plugins/custom_layernorm.cu) and [here](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/3e5a6add43dbf6ccf974717bda539687381d7860/CUDA-BEVFusion/qat/export-transfuser.py#L291).
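For context on what the plugin has to compute, here is a minimal NumPy sketch of the layernorm operation itself (normalization over the last dimension with a small epsilon). The function name, shapes, and `eps` value are illustrative, not taken from the plugin source.

```python
import numpy as np

def layernorm(x, gamma, beta, eps=1e-5):
    # Normalize over the last dimension, then scale by gamma and shift by beta.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(2, 4, 8).astype(np.float32)
y = layernorm(x, np.ones(8, np.float32), np.zeros(8, np.float32))
print(y.shape)  # (2, 4, 8)
```

A custom plugin like `custom_layernorm.cu` implements this same math as a fused CUDA kernel so the exported ONNX graph does not rely on TensorRT's built-in decomposition.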

I have updated libspconv.so so that it is decoupled from libprotobuf. Please take a look [here](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/tree/master/libraries/3DSparseConvolution).

1. Download the nuscenes-mini dataset, then run inference over it:
   ```
   for each sample:
       core->update(sample.matrices)
       results = core->forward(sample)
       visualize(results, result/samplen.jpg)
   ```
2. Convert result/*.jpg to a GIF using Python code.
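Step 2 can be sketched as follows. This is a minimal example assuming Pillow is installed; it generates a few dummy frames in a temporary directory (standing in for the JPEGs written by the visualize() step) and then stitches them into an animated GIF.

```python
import glob
import os
import tempfile

from PIL import Image  # assumes Pillow is available

workdir = tempfile.mkdtemp()

# Dummy frames standing in for the result/*.jpg files produced above.
for i in range(5):
    Image.new("RGB", (64, 64), (i * 40, 0, 0)).save(os.path.join(workdir, f"{i:05d}.jpg"))

# Collect the frames in order and write one animated GIF.
frames = [Image.open(p) for p in sorted(glob.glob(os.path.join(workdir, "*.jpg")))]
frames[0].save(
    os.path.join(workdir, "result.gif"),
    save_all=True,
    append_images=frames[1:],
    duration=100,  # milliseconds per frame
    loop=0,        # 0 = loop forever
)
```

Sorting the glob result matters: zero-padded filenames keep the frames in playback order.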

How did you free the `images` pointer?

Could you add me on WeChat (woshixiwanga) so we can co-debug this? I know that some of the implementation in `camera-geometry.cu` may be hard to understand and needs careful handling. I'll update this part later.

https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/tree/master/libraries/3DSparseConvolution/workspace/centerpoint

Only the first 10 numbers are printed here; this is determined by the [tensor.print](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/ae4929b194de662318eb665f95fef91a591c6365/CUDA-BEVFusion/src/common/tensor.hpp#L121C10-L121C10) function. The shape is actually still 1x6x4x4. You can change the print parameter to control how many elements are shown.
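To illustrate the behavior (not the actual C++ implementation), here is a small NumPy sketch of a truncated tensor dump: the tensor keeps its full 1x6x4x4 shape, but only the first `num` flattened values are shown.

```python
import numpy as np

def format_tensor(t, num=10):
    # Show the shape plus only the first `num` flattened values,
    # the way a truncated tensor dump behaves.
    flat = t.reshape(-1)
    shown = ", ".join(f"{v:.4f}" for v in flat[:num])
    return f"shape={t.shape}: [{shown}, ...]"

t = np.arange(96, dtype=np.float32).reshape(1, 6, 4, 4)
print(format_tensor(t))       # only 10 of the 96 values appear
print(format_tensor(t, 96))   # raise the limit to dump everything
```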

I think you can calculate the TOPS yourself; there is no tool to do it for you.
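As a back-of-envelope sketch of such a calculation (my own convention, not from this repo), assuming 1 multiply-accumulate counts as 2 operations:

```python
def tops(macs_per_frame, fps):
    # 1 MAC = 2 ops; TOPS = tera-operations per second.
    return 2 * macs_per_frame * fps / 1e12

# e.g. a hypothetical model with 250 GMACs per frame running at 30 FPS:
print(tops(250e9, 30))  # 15.0
```

The MACs-per-frame figure would come from a profiler or an analytical count of the model's layers.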

Actually, this corresponds to resize(0.48) + translate(-32, -176).
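This composition can be written as a single 3x3 affine matrix. A minimal sketch, assuming the resize is applied first and then the translation (per the order given above); the sample pixel coordinates are illustrative.

```python
import numpy as np

scale, tx, ty = 0.48, -32.0, -176.0

# resize(0.48) followed by translate(-32, -176), composed into one matrix
resize = np.array([[scale, 0.0, 0.0],
                   [0.0, scale, 0.0],
                   [0.0, 0.0, 1.0]])
translate = np.array([[1.0, 0.0, tx],
                      [0.0, 1.0, ty],
                      [0.0, 0.0, 1.0]])
M = translate @ resize  # applied right-to-left: scale, then shift

# Map a source pixel (u, v) into the augmented image:
u, v = 800.0, 450.0
x, y, _ = M @ np.array([u, v, 1.0])
print(x, y)  # approximately (352.0, 40.0)
```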