lidar_camera_calibration

Is it Possible/How to Use MATLAB Scripts to Show Transformations of Aligned Points Using Graphs

Open Bsting118 opened this issue 4 years ago • 27 comments

I was wondering whether there is a way to use the X Y Z translations and the RPY angles from (maybe) extrinsics.npz to show a graph of aligned points with average transformations. I was thinking this could be done with MATLAB, but I am not sure how exactly. I am trying to do this so I can verify the results and show the accuracy of the calibration.

I am also curious as to what the Rotation Matrix is. Are those numbers in the Rotation Matrix the average rotations or the final rotations? If the rotation matrix consists of the average rotations, then how can I get the final rotations (I believe I need the initial rotation from lidar and camera * the rotation average)? Also, are the X Y Z translations from T.npy the average translations?

Is there also a way to get the RMSE, or the root mean squared error, between the 3D points viewed by the camera and lidar after applying the transformations?

Bsting118 avatar Oct 23 '20 15:10 Bsting118

Additionally, is there a way to display, show, or calculate the possible errors of calibration like the RMSE?

Bsting118 avatar Oct 23 '20 20:10 Bsting118

You can compute the RMSE at this point. Here's a rough snippet to get started (untested):

import numpy as np
import cv2

# Reproject the 3D points into the image using the estimated extrinsics
points2D_reproj = cv2.projectPoints(points3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs)[0].squeeze(1)
assert points2D_reproj.shape == points2D.shape

# Per-point pixel residuals and overall RMSE
error = points2D_reproj - points2D
rmse = np.sqrt(np.mean(error[:, 0] ** 2 + error[:, 1] ** 2))

I am not sure what you mean by "average transformations" or "average rotations". The rotation matrix is the matrix corresponding to the Euler angles (they represent the same rotation in different forms). The translation vector and Euler angles are the final transform and the only transform. Check the README to visualize the projection and update this line with X Y Z Y P R values.
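Since the Euler angles and the rotation matrix represent the same rotation, the relationship can be sketched with plain NumPy. This is an illustration only, not the repository's code, and it assumes the common ZYX (yaw-pitch-roll) convention; the ordering in euler.npy should be verified against the script that writes it.

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Build a rotation matrix from yaw (Z), pitch (Y), roll (X) in radians.

    Assumes the ZYX intrinsic convention; check it matches euler.npy's ordering.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Zero angles give the identity; a 90-degree yaw maps the x-axis onto the y-axis
R = euler_zyx_to_matrix(np.pi / 2, 0.0, 0.0)
```

Under this convention, R.npy should agree (up to numerical noise) with the matrix built from euler.npy's angles.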

heethesh avatar Oct 23 '20 21:10 heethesh

Ah, OK. For the average transformations and rotations, I was just wondering whether they were final or averaged; clearly they are final. By the way, when updating the line with X Y Z Y P R values in the display launch script, do we also have to modify or add something for the rotation matrix or R.npy? For the X Y Z Y P R, I only used T.npy for the X Y Z and euler.npy for the Y P R (RPY but reversed). Please confirm whether I have to use R.npy or the rotation matrix at all; it seems I only need the T.npy and euler.npy data/values.

Bsting118 avatar Oct 28 '20 13:10 Bsting118

Yes, you are right; you only need the Euler angles. The rotation matrix is a more consistent way of representing rotations, does not have the ambiguities associated with Euler angles, and is only maintained for debugging.

heethesh avatar Oct 29 '20 22:10 heethesh

Which values in the lidar_camera_calibration data are the RGB 2D values/output? Are the pcl_corners.npy and img_corners.npy the RGB 2D values? If not, where can I find the RGB 2D outputs from calibration?

Bsting118 avatar Nov 03 '20 17:11 Bsting118

RGB values themselves are not stored anywhere; the image 2D points are stored in img_corners.npy and the corresponding 3D points in pcl_corners.npy.
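For reference, the stored correspondences can be inspected with NumPy. This is a sketch with made-up values; in the repository, img_corners.npy (2D image points) and pcl_corners.npy (3D lidar points) are written by the calibration script, and N is however many corners you picked.

```python
import numpy as np

# Hypothetical data standing in for the files the calibration script writes
img_corners = np.array([[100.0, 200.0], [150.0, 220.0]])      # (N, 2) pixel coords
pcl_corners = np.array([[1.0, 0.2, 3.5], [1.1, 0.3, 3.6]])    # (N, 3) lidar coords
np.save('img_corners.npy', img_corners)
np.save('pcl_corners.npy', pcl_corners)

# Loading them back shows the shapes: one 2D point per 3D point
loaded_2d = np.load('img_corners.npy')
loaded_3d = np.load('pcl_corners.npy')
```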

heethesh avatar Nov 04 '20 03:11 heethesh

Is there any way to calculate the RGB values manually from the points stored?

Bsting118 avatar Nov 04 '20 03:11 Bsting118

Just store the RGB values here along with the image coordinates

heethesh avatar Nov 04 '20 03:11 heethesh

Do I need just an assignment operator and variable or do I need a compound operator for that segment of code?

Bsting118 avatar Nov 04 '20 03:11 Bsting118

Maybe these OpenCV tutorials and reading up on working with NumPy arrays will help you get started. You can just use a Python list to store the RGB values accessed from the image.
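As a sketch of what storing the colour values alongside the image coordinates could look like (the image and coordinates here are invented for illustration; note that OpenCV loads images with channels in BGR order, so you may want to reverse them for true RGB):

```python
import numpy as np

# Dummy 4x4 3-channel image standing in for the camera frame
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[2, 1] = (255, 0, 0)  # coloured pixel at row 2, column 1

# Clicked image coordinates are usually (x, y) = (column, row)
corners = [(1, 2), (0, 0)]

# Index the array as image[y, x] and keep the values in a plain Python list
colour_values = [tuple(int(c) for c in image[y, x]) for (x, y) in corners]
```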

heethesh avatar Nov 04 '20 03:11 heethesh

Okay, thank you for the help!

Bsting118 avatar Nov 04 '20 03:11 Bsting118

You can compute the RMSE at this point. Here's a rough snippet to get started (untested):

points2D_reproj = cv2.projectPoints(points3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs)[0].squeeze(1)
assert points2D_reproj.shape == points2D.shape
error = points2D_reproj - points2D
rmse = np.sqrt(np.mean(error[:, 0] ** 2 + error[:, 1] ** 2))

I am not sure what you mean by "average transformations" or "average rotations". The rotation matrix is the matrix corresponding to the Euler angles (they represent the same rotation in different forms). The translation vector and Euler angles are the final transform and the only transform. Check the README to visualize the projection and update this line with X Y Z Y P R values.

Hi, I used this code to check the RMSE, but the result was not acceptable; for example, the first result was 9 and the second was 27.

yulan0215 avatar Feb 08 '21 14:02 yulan0215

Hi, I used this code to check the RMSE, but the result was not acceptable; for example, the first result was 9 and the second was 27.

RMSE computed over how many points? Is this with or without the LM refinement step (OpenCV > 4.1)?

heethesh avatar Feb 08 '21 19:02 heethesh

Hi, I used this code to check the RMSE, but the result was not acceptable; for example, the first result was 9 and the second was 27.

RMSE computed over how many points? Is this with or without the LM refinement step (OpenCV > 4.1)?

I used OpenCV 4.2. I first computed 6 pairs of point-cloud points and pixels, and then got the result of 9... Besides, the code I added is shown in the attached screenshot.

Can you give me some ideas about it? Thank you very much and I am looking forward to your reply!

yulan0215 avatar Feb 08 '21 20:02 yulan0215

A 9-pixel RMSE is reasonable for 6 correspondences; I would recommend using more than 30-40 correspondences. Note that this evaluation computes the reprojection error over the outlier points from solvePnPRansac as well, which might increase the RMSE even though the transform estimates are better. Try to use only the inlier points, and also run the LM optimization only on the inliers from solvePnPRansac. The fourth return argument of solvePnPRansac is the inlier mask; I will probably update the script to use only inliers for the LM refinement step. You can go ahead and try this out.
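The effect of restricting the RMSE to inliers can be sketched with synthetic numbers (the residuals and mask below are invented; in the actual script the indices would come from the fourth return value of cv2.solvePnPRansac):

```python
import numpy as np

# Synthetic per-point reprojection residuals (dx, dy in pixels) for 6 correspondences
error = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0],
                  [0.0, 0.0], [30.0, 40.0], [2.0, 0.0]])

# Inlier indices, as solvePnPRansac's mask would identify them; point 4 is an outlier
inliers = np.array([0, 1, 2, 3, 5])

def rmse(err):
    # Root mean square of the per-point Euclidean pixel errors
    return float(np.sqrt(np.mean(err[:, 0] ** 2 + err[:, 1] ** 2)))

rmse_all = rmse(error)               # inflated by the single outlier
rmse_inliers = rmse(error[inliers])  # closer to the true calibration quality
```

A single gross outlier dominates the mean of squared errors, which is why the all-points RMSE can look bad even when the estimated transform is good.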

heethesh avatar Feb 09 '21 01:02 heethesh

A 9-pixel RMSE is reasonable for 6 correspondences; I would recommend using more than 30-40 correspondences. Note that this evaluation computes the reprojection error over the outlier points from solvePnPRansac as well, which might increase the RMSE even though the transform estimates are better. Try to use only the inlier points, and also run the LM optimization only on the inliers from solvePnPRansac. The fourth return argument of solvePnPRansac is the inlier mask; I will probably update the script to use only inliers for the LM refinement step. You can go ahead and try this out.

Thank you for your reply; I am looking forward to your update. One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when they used functions like cv2.solvePnPRansac, they installed Eigen for the matrix computation.

yulan0215 avatar Feb 09 '21 09:02 yulan0215

Thank you for your reply and I am looking forward to your update.

Try this patch from the branch inliers-reprojection-error and let me know if this works; I'll merge it in.

One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when they used functions like cv2.solvePnPRansac, they installed Eigen for the matrix computation.

Your OpenCV Python library probably already dynamically links to the Eigen libraries on your system, without which you would not be able to use that function.

heethesh avatar Feb 09 '21 13:02 heethesh

Thank you for your reply and I am looking forward to your update.

Try this patch from the branch inliers-reprojection-error and let me know if this works, I'll merge it in.

One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when they used functions like cv2.solvePnPRansac, they installed Eigen for the matrix computation.

Your OpenCV Python library probably already dynamically links to the Eigen libraries on your system, without which you would not be able to use that function.

Hi, the code you provided worked... but the RMSE was very high (the calibration terminal is shown in the attached screenshot). When I selected 7 pairs of pixels and point-cloud points in one frame, it gave me the warning "Initial estimation unsuccessful, skipping refinement", and only the reprojection error before LM refinement was shown in the terminal. Thank you very much and I am looking forward to your reply!

yulan0215 avatar Feb 09 '21 15:02 yulan0215

Thank you for your reply and I am looking forward to your update.

Try this patch from the branch inliers-reprojection-error and let me know if this works, I'll merge it in.

One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when they used functions like cv2.solvePnPRansac, they installed Eigen for the matrix computation.

Your OpenCV Python library probably already dynamically links to the Eigen libraries on your system, without which you would not be able to use that function.

Hi, the code you provided worked... but the RMSE was very high (the calibration terminal is shown in the attached screenshot). When I selected 7 pairs of pixels and point-cloud points in one frame, it gave me the warning "Initial estimation unsuccessful, skipping refinement", and only the reprojection error before LM refinement was shown in the terminal. Thank you very much and I am looking forward to your reply!

Sorry, I have one more question about the solvePnPRansac function: I do not know whether the UPnP or EPnP algorithm is used in this part of the calibration. If you know the principle, could you please tell me? Thank you very much!

yulan0215 avatar Feb 09 '21 16:02 yulan0215

Please see the documentation for more details here. If the initial estimation is bad, it probably means you had poor or insufficient correspondences; again, see the documentation for why it might have failed. Yes, if LM refinement doesn't run, it won't compute the RMSE there. This method from OpenCV also returns the reprojection error.

heethesh avatar Feb 09 '21 19:02 heethesh

Please see the documentation for more details here. If the initial estimation is bad, it probably means you had poor or insufficient correspondences; again, see the documentation for why it might have failed. Yes, if LM refinement doesn't run, it won't compute the RMSE there. This method from OpenCV also returns the reprojection error.

I will check it later, thx!

yulan0215 avatar Feb 09 '21 20:02 yulan0215

Please see the documentation for more details here. If the initial estimation is bad, it probably means you had poor or insufficient correspondences; again, see the documentation for why it might have failed. Yes, if LM refinement doesn't run, it won't compute the RMSE there. This method from OpenCV also returns the reprojection error.

Hi. Thank you for your suggestion; I solved that problem. However, when I ran the calibration, another error occurred (see the attached screenshot). I followed the code you provided yesterday, but this problem occurred.

yulan0215 avatar Feb 10 '21 21:02 yulan0215

Can you print the error array? What is its shape (error.shape)?

heethesh avatar Feb 10 '21 21:02 heethesh

Can you print the error array? What is its shape (error.shape)?

The error.shape and the error array are shown in the attached screenshot, thanks!

yulan0215 avatar Feb 12 '21 15:02 yulan0215

Can you print the error array? What is its shape (error.shape)?

Sorry, do you have any literature related to this lidar-camera spatial calibration? Thanks!

yulan0215 avatar Feb 12 '21 15:02 yulan0215

The error.shape and the error array are shown in the attached screenshot, thanks!

Your error array has an extra dimension. I did have a .squeeze(1) applied on axis 1. It seems like you were previously able to compute the RMSE without any issues. Is your code/implementation different from master now?
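This shape mismatch is a common one: cv2.projectPoints returns points with shape (N, 1, 2), and the spurious middle axis must be squeezed away before subtracting. A NumPy-only sketch of the problem and the fix (the arrays here are dummies standing in for the real reprojection output):

```python
import numpy as np

# projectPoints-style output: N points with a spurious middle axis of size 1
points2D_reproj = np.zeros((5, 1, 2))
points2D = np.zeros((5, 2))

# Subtracting directly would broadcast (5, 1, 2) against (5, 2) into (5, 5, 2);
# squeezing axis 1 first yields the intended (5, 2) residual array
error = points2D_reproj.squeeze(1) - points2D
```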

heethesh avatar Feb 16 '21 22:02 heethesh

The error.shape and error was shown above, thx!

Your error array has an extra dimension. I did have a .squeeze(1) applied on axis 1. It seems like you were previously able to compute the RMSE without any issues. Is your code/implementation different from master now?

No, I used the code you updated; I just modified the FOV of the point cloud. I have another question: do you know how to change the size of the point cloud when it is reprojected onto the image via display_camera_lidar_calibration.launch? The points were very small. Thanks!

yulan0215 avatar Feb 21 '21 10:02 yulan0215