HopeJW
Yes, this configuration is a PointPillars-based CenterPoint model. It will be supported by TensorRT.
No, you can't use export-scn.py. This [code](https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars/blob/main/tool/export_onnx.py) may be helpful to you.
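For reference, the core of that export tool is a standard `torch.onnx.export` call. The sketch below is illustrative only; the placeholder module, shapes, and file names are assumptions, not the actual CUDA-PointPillars code.

```python
# Minimal sketch of the torch.onnx.export pattern used for exporting a
# pillar-based detector. PlaceholderPillarNet is a stand-in so the call runs;
# the real tool wires up the actual network and its checkpoint.
import torch
import torch.nn as nn

class PlaceholderPillarNet(nn.Module):
    """Stand-in for the exportable part of a PointPillars/CenterPoint model."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 64)

    def forward(self, pillar_features):
        # A real model scatters pillars to a BEV grid and runs conv heads;
        # here we only return a dummy tensor so the export call executes.
        return self.linear(pillar_features)

model = PlaceholderPillarNet().eval()

# Dummy input: (batch, max_pillars, points_per_pillar, feature_dim) -- example values.
dummy_features = torch.zeros(1, 10000, 32, 10)

torch.onnx.export(
    model,
    (dummy_features,),
    "pointpillars.onnx",
    input_names=["pillar_features"],
    output_names=["head_output"],
    opset_version=13,
)
```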
The FPS is computed by CUDA-BEVFusion itself, running on Orin:
https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/a8461f0a024477dbcf8746a3ea8a0f5e3aa14540/CUDA-BEVFusion/src/main.cpp#L258C6-L258C6
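The linked main.cpp does the timing in C++; conceptually, FPS is just the number of frames divided by the elapsed wall-clock time around the forward pass. An illustrative sketch (not the repo's code; `infer()` below is a placeholder):

```python
# Illustrative FPS measurement: run the forward pass repeatedly and divide
# the frame count by the elapsed wall-clock time.
import time

def infer(frame):
    # Placeholder standing in for the CUDA-BEVFusion forward pass.
    time.sleep(0.04)  # pretend ~40 ms per frame

num_frames = 50
start = time.perf_counter()
for i in range(num_frames):
    infer(frame=i)
elapsed = time.perf_counter() - start
print(f"FPS: {num_frames / elapsed:.1f}")
```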
I'll finish it as soon as possible, thanks.
Sorry for the delay. I have pushed the CUDA 12 version of libspconv: [link](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/tree/master/libraries/3DSparseConvolution/libspconv_cuda12)
**however the results are off:** These issues may be caused by the LayerNorm layer in TensorRT. I have pushed the LayerNorm plugin [here](https://github.com/NVIDIA-AI-IOT/Lidar_AI_Solution/blob/master/CUDA-BEVFusion/src/plugins/custom_layernorm.cu).
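If you want to check whether LayerNorm is the source of the mismatch, one way is to compare the engine's output against a reference LayerNorm. A minimal sketch, assuming normalization over the last dimension and the common eps of 1e-5:

```python
# Reference LayerNorm in NumPy, useful for sanity-checking that the TensorRT
# plugin (or the native layer) matches the original PyTorch layer.
import numpy as np

def layernorm_ref(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

# Example: compare against a tensor dumped from the engine.
x = np.random.randn(2, 8, 256).astype(np.float32)
gamma = np.ones(256, dtype=np.float32)
beta = np.zeros(256, dtype=np.float32)
y_ref = layernorm_ref(x, gamma, beta)
# np.testing.assert_allclose(y_trt, y_ref, rtol=1e-2, atol=1e-3)  # y_trt: output captured from TensorRT
```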
Sorry, I will update the CenterPoint implementation.
1. Just to correct you, we only achieve 25 FPS, not 28 FPS.
2. Inputs include an image tensor (1x6x3x256x704) and a lidar points tensor.
3. We test on DRIVE Orin 64GB only...
Yes, we simply process the 6 images in parallel, because they can be packed into a single batch and fed into the TensorRT engine.
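For illustration, here is a minimal sketch of packing the 6 preprocessed camera frames into the (1, 6, 3, 256, 704) tensor mentioned above; the preprocessing itself (resize/normalize) is omitted and the random frames are placeholders:

```python
# Pack 6 camera views into one (1, 6, 3, 256, 704) tensor so a single
# engine execution handles all views at once.
import numpy as np

num_cams, channels, height, width = 6, 3, 256, 704

# Pretend these are the 6 preprocessed camera frames in CHW layout.
frames = [np.random.rand(channels, height, width).astype(np.float32) for _ in range(num_cams)]

# Stack into (6, 3, 256, 704), then add the batch dimension -> (1, 6, 3, 256, 704).
image_tensor = np.stack(frames, axis=0)[None, ...]
assert image_tensor.shape == (1, num_cams, channels, height, width)
# This single contiguous buffer is what gets copied to the GPU and bound to the image input.
```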