fast_gicp
NDT_CUDA gets NaN results.
My system info:
PCL 1.12.1, Eigen 3.4.0, CUDA 11.6, cuDNN 8.3.2, GCC 11.2.0
My CMake arguments:
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=/opt/ros/noetic -DBUILD_VGICP_CUDA=ON
Hi, I am currently using fast_gicp for some prior-map localization work. However, I found that when using NDT_CUDA to register my point clouds, I get NaN results:
target:67804[pts] source:14093[pts]
--- pcl_gicp ---
single:485.159[msec]
0.999971 0.000591522 0.0076433 -0.085595
-0.00058857 1 -0.000388423 0.0937259
-0.00764352 0.000383913 0.999971 2.30524
0 0 0 1
--- pcl_ndt ---
single:61.4179[msec]
0.999999 0.00120411 4.85903e-05 -0.0260552
-0.00120416 0.999999 0.00107856 0.0946873
-4.72916e-05 -0.00107862 0.999999 -0.0187839
0 0 0 1
--- fgicp_st ---
single:348.614[msec]
0.999985 -0.00105683 0.00540941 -0.0871596
0.00105919 0.999999 -0.000433943 -0.0925524
-0.00540895 0.000439666 0.999985 2.18176
0 0 0 1
--- fgicp_mt ---
single:372.079[msec]
0.999985 -0.00105867 0.00540852 -0.087245
0.00106108 0.999999 -0.000443048 -0.0925688
-0.00540805 0.00044878 0.999985 2.1817
0 0 0 1
--- vgicp_st ---
single:448.121[msec]
0.99999 -0.0014508 0.00434923 -0.0833737
0.00145268 0.999999 -0.000428907 -0.0780743
-0.0043486 0.000435221 0.99999 2.12142
0 0 0 1
--- vgicp_mt ---
single:549.662[msec]
0.99999 -0.0014508 0.00434923 -0.0833737
0.00145268 0.999999 -0.000428907 -0.0780743
-0.0043486 0.000435221 0.99999 2.12142
0 0 0 1
--- ndt_cuda (P2D) ---
single:49.4802[msec]
nan nan nan nan
nan nan nan nan
nan nan nan nan
0 0 0 1
--- ndt_cuda (D2D) ---
single:51.1368[msec]
nan nan nan nan
nan nan nan nan
nan nan nan nan
0 0 0 1
--- vgicp_cuda (parallel_kdtree) ---
single:96.7676[msec]
0.99999 -0.00145048 0.00434936 -0.0834129
0.00145235 0.999999 -0.000428305 -0.078238
-0.00434873 0.000434617 0.99999 2.12144
0 0 0 1
--- vgicp_cuda (gpu_bruteforce) ---
single:157.454[msec]
0.999938 -0.00771049 0.00800458 0.0143744
0.00770667 0.99997 0.00050875 -3.1288
-0.00800827 -0.000447029 0.999968 2.43394
0 0 0 1
--- vgicp_cuda (gpu_rbf_kernel) ---
single:62.088[msec]
0.999989 -0.00153041 0.00450023 -0.119624
0.00153244 0.999999 -0.000448908 -0.0212703
-0.00449954 0.000455799 0.99999 2.12726
0 0 0 1
All registration methods give me non-NaN results except NDT_CUDA. With the point clouds provided by this repo, the NDT_CUDA algorithm still produces a non-NaN result.
The two point clouds I used: pcd.zip
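For reference, this is roughly how the benchmark above invokes NDT_CUDA (a minimal sketch following the repo's align example; the resolution, distance mode, and file names are placeholders, not the exact values used):

```cpp
// Minimal NDT_CUDA invocation, modeled on fast_gicp's align example.
// Parameter values and file names below are placeholders.
#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <fast_gicp/ndt/ndt_cuda.hpp>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile("target.pcd", *target) || pcl::io::loadPCDFile("source.pcd", *source)) {
    return 1;
  }

  fast_gicp::NDTCuda<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);
  ndt.setDistanceMode(fast_gicp::NDTDistanceMode::P2D);  // D2D for the second variant
  ndt.setInputTarget(target);
  ndt.setInputSource(source);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned);
  std::cout << ndt.getFinalTransformation() << std::endl;  // all-NaN on my data
  return 0;
}
```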
It seems the source is too far from the target, and the optimization breaks because there is no overlap at all. I'm not sure why the other methods don't get corrupted, but I think any method can break in this setting.
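One quick way to test the no-overlap hypothesis is to compare the centroid distance of the two clouds with their extents (a sketch using PCL's compute3DCentroid; what counts as "too far" depends on the data and the NDT resolution):

```cpp
// Prints the distance between the cloud centroids; if it is much larger than
// the cloud extents, NDT finds no populated voxels near the source points and
// the optimization can collapse into NaNs.
#include <iostream>
#include <pcl/common/centroid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void print_centroid_distance(const pcl::PointCloud<pcl::PointXYZ>& target,
                             const pcl::PointCloud<pcl::PointXYZ>& source) {
  Eigen::Vector4f ct, cs;
  pcl::compute3DCentroid(target, ct);
  pcl::compute3DCentroid(source, cs);
  std::cout << "centroid distance: " << (ct - cs).head<3>().norm() << " [m]" << std::endl;
}
```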
However, I found that aligning the point clouds provided by this repo also produces the same result. I don't know whether this is related to the PCL version, because I found that the current code in the repo is somewhat incompatible with PCL 1.12.
Have you solved this problem? I suspect it is related to the versions of Eigen and CUDA.
Yes, I found that PCL 1.12 is not compatible with this library. I downgraded PCL to 1.10 and everything works well. I think there might be some bugs in PCL 1.12. Besides, I am using Arch Linux.
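If downgrading is not possible, the problematic PCL version can at least be flagged at build time (a sketch; it assumes PCL_VERSION from pcl/pcl_config.h packs the version as major*100000 + minor*100 + patch, and #warning requires GCC/Clang):

```cpp
// Emits a build-time warning on the PCL versions reported as problematic in
// this thread (1.12.x and newer). 1.12.0 is assumed to encode as 101200.
#include <pcl/pcl_config.h>

#if PCL_VERSION >= 101200
#warning "PCL >= 1.12 was reported to break NDT_CUDA in this issue; consider PCL 1.10/1.11"
#endif
```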
Thanks. I am using an Orin (aarch64) with CUDA 11.4 and PCL 1.10.0, and I still get NaN results (NDT_CUDA)...
Hi @whuzs. Did you solve the problem? I am currently using PCL 1.11.1 and JetPack r35.1, and I have no problems using this library.
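For anyone else landing here: even while the root cause is unresolved, the NaN pose can be caught before it propagates, since Eigen matrices expose allFinite() (a minimal sketch; pose_is_valid is a hypothetical helper name):

```cpp
#include <Eigen/Core>
#include <iostream>
#include <limits>

// Rejects poses containing NaN/Inf so the caller can fall back to an identity
// or previous pose instead of propagating the corrupted estimate.
bool pose_is_valid(const Eigen::Matrix4f& pose) {
  return pose.allFinite();
}

int main() {
  // Mimic the output above: a NaN block with a valid homogeneous last row.
  Eigen::Matrix4f bad = Eigen::Matrix4f::Constant(std::numeric_limits<float>::quiet_NaN());
  bad.row(3) << 0.f, 0.f, 0.f, 1.f;
  std::cout << std::boolalpha << pose_is_valid(bad) << std::endl;  // prints false
  return 0;
}
```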