Different results when running inference with the TRT engine on the exported ONNX file vs. inference using the .pth file
I am able to train and validate on a custom dataset (a set of .npy files and corresponding annotations) using the PointPillars model. The results look accurate on the point cloud when I visualize the .pth predictions with demo.py.
For my eval example 000000.npy, I get the following results using demo.py:

pred_dicts[0]['pred_scores']
tensor([0.8475, 0.4954, 0.4725, 0.4603], device='cuda:0')

pred_dicts[0]['pred_boxes']
tensor([[ 9.6669,   1.1732,  2.1426,  0.2856,  0.5018,  3.1349,  6.2874],
        [ 9.8581, -10.6740,  2.0632,  0.4447,  0.4504,  2.5857,  6.2749],
        [24.9824, -10.4977,  3.1227,  0.2673,  0.4696,  3.1857,  6.2983],
        [24.8274,   1.3483,  2.7095,  0.2326,  0.4953,  3.1487,  6.3119]], device='cuda:0')
However, when I run inference with the TRT engine on the generated ONNX file, I get the following:

50.8117   -6.56379  1.9568   0.256044  0.501621  2.79036  6.28261  0  0.860124
48.4368  -14.8604   2.06768  0.442814  0.450666  2.58379  6.27543  0  0.499151
42.9857  -14.6852   3.1227   0.267265  0.469636  3.18565  6.29831  0  0.472531
45.4026   -6.38253  2.70948  0.232621  0.495276  3.14874  6.31186  0  0.460299
After analyzing the numbers, I see that the confidence scores are almost the same, the box dimensions match across both, and even the z coordinates match. However, there is a huge disparity in the x and y coordinates. Can someone please help me out?
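One way to narrow this down (a sketch, not something from this thread): put both sets of boxes side by side and check whether the x/y discrepancy is a constant translation. A constant offset would point at a shifted coordinate origin (e.g. a mismatched point-cloud range between the training config and the TRT preprocessing); a varying offset would suggest the two pipelines differ in voxelization/grid parameters. The arrays below are copied from the two outputs above, rows matched by score.

```python
import numpy as np

# Boxes from the two outputs above; columns: x, y, z, dx, dy, dz, heading.
pth_boxes = np.array([
    [ 9.6669,   1.1732,  2.1426,  0.2856,   0.5018,   3.1349,  6.2874],
    [ 9.8581, -10.6740,  2.0632,  0.4447,   0.4504,   2.5857,  6.2749],
    [24.9824, -10.4977,  3.1227,  0.2673,   0.4696,   3.1857,  6.2983],
    [24.8274,   1.3483,  2.7095,  0.2326,   0.4953,   3.1487,  6.3119],
])
trt_boxes = np.array([
    [50.8117,  -6.56379, 1.9568,  0.256044, 0.501621, 2.79036, 6.28261],
    [48.4368, -14.8604,  2.06768, 0.442814, 0.450666, 2.58379, 6.27543],
    [42.9857, -14.6852,  3.1227,  0.267265, 0.469636, 3.18565, 6.29831],
    [45.4026,  -6.38253, 2.70948, 0.232621, 0.495276, 3.14874, 6.31186],
])

# Per-box x/y translation between the two runs.
delta = trt_boxes[:, :2] - pth_boxes[:, :2]
print("per-box (dx, dy):")
print(delta)
print("constant offset?", np.allclose(delta, delta[0], atol=0.5))
```

For these particular numbers the per-box deltas are not constant, which hints at more than a simple origin shift; comparing the POINT_CLOUD_RANGE and voxel-size settings used on each side would be the next thing to check.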
I have attached another comparison as well:

Hey, can someone please help me out with this? I am using .npy files for TRT inference instead of .bin files; I'm not sure whether that is causing the issue. When I compared the .npy data loaded in Python vs. the data loaded in main.cpp, I saw 32 additional bytes when main.cpp loads the same .npy file. I accounted for this, but that did not fix my issue, unfortunately!
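For what it's worth, extra leading bytes are consistent with the .npy container format: every .npy file starts with a magic string, a version, and a text header describing dtype/shape before the raw payload, and the header size varies with the array metadata (so a hard-coded 32-byte skip may be wrong for some files). A minimal sketch for measuring the header and producing a headerless raw file (the filenames are hypothetical, just for the demo):

```python
import numpy as np

# Write a small example array, then measure how many bytes precede the payload.
arr = np.random.rand(8, 4).astype(np.float32)
np.save("sample.npy", arr)  # hypothetical file, just for this demo

with open("sample.npy", "rb") as f:
    np.lib.format.read_magic(f)             # consumes magic string + version
    np.lib.format.read_array_header_1_0(f)  # consumes dtype/shape/order header
    header_size = f.tell()                  # bytes before the raw data

print("npy header size:", header_size)

# If main.cpp should read raw floats only, strip the header entirely:
arr.tofile("sample.bin")  # raw float32 values, no header at all
```

Reading the header programmatically (or writing a headerless .bin with tofile) avoids guessing the offset.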
@byte-deve , I would really appreciate your input here.
Can I ask about inference with the model you trained on your custom dataset?
I also tried to train the model, but when I run inference it throws an error.
Here is the command and the error:
./pointpillar ../data/custom_bin/ ../data/custom_bin/ --time
CUDA Runtime error nms_launch(bndbox_num_, bndbox_, param_.nms_thresh, h_mask_, _stream)
# invalid configuration argument, code = cudaErrorInvalidConfiguration [ 9 ]
in file /home/firo/Documents/workspace/CUDA-PointPillars/src/pointpillar/lidar-postprocess.cu:470
Aborted (core dumped)
I'm wondering if it is because of the input data format: I trained the model on .npy point cloud files, but the inference code expects .bin files, so I converted the .npy files into .bin files.
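A sketch of the npy-to-bin conversion, assuming the .npy files already hold an N x 4 (or wider) float point array with x, y, z, intensity in the first four columns; the function name and paths are made up for illustration:

```python
import numpy as np

def npy_to_bin(npy_path: str, bin_path: str) -> None:
    """Convert a .npy point cloud to a headerless .bin file
    (assumed layout: N x 4 float32 -> x, y, z, intensity)."""
    points = np.load(npy_path)
    if points.ndim != 2 or points.shape[1] < 4:
        raise ValueError(f"expected an N x 4 array, got shape {points.shape}")
    # Keep only the first four channels and force float32 before writing raw bytes.
    points[:, :4].astype(np.float32).tofile(bin_path)

# hypothetical paths; adjust to your dataset layout
# npy_to_bin("data/custom/000000.npy", "data/custom_bin/000000.bin")
```

If the converted file has the wrong dtype or an unexpected number of channels, the C++ side will read garbage point counts, which can surface downstream as launch errors like the one above.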
Can you please tell me how to solve this problem? I saw that you got inference results, and I'm very curious how you did it.
Thank you very much!