librealsense
rs2_deproject_pixel_to_point error
Required Info | Details |
---|---|
Camera Model | L515 |
Firmware Version | 01.05.08.01 |
Operating System & Version | Ubuntu |
Kernel Version (Linux Only) | 5.4.0-122-generic |
Platform | PC |
SDK Version | 2.50.0 |
Language | C++ |
Segment | Robot |
Issue Description
I used RS2_STREAM_COLOR to align depth to color and then called rs2_deproject_pixel_to_point to get a 3D point. But when I print X, it is about 2.47 (it should actually be about 0.7), while Z is normal. I then learned from https://github.com/IntelRealSense/librealsense/issues/4315#issuecomment-506428553 that using the aligned intrinsics should give the right results, but even so it prints 1.73. Here is my code:
```cpp
// Wait for frames from the camera to settle
for (int i = 0; i < 5; i++) {
    // Drop several frames for auto-exposure
    frameSet = pipe.wait_for_frames();
}
auto process = align_to_color.process(frameSet);
rs2::video_frame colorFrame = process.get_color_frame();
rs2::depth_frame alignedDepthFrame = process.get_depth_frame();
rs2::stream_profile dprofile = alignedDepthFrame.get_profile();
rs2::stream_profile cprofile = colorFrame.get_profile();
rs2::video_stream_profile cvsprofile(cprofile);
color_intrin = cvsprofile.get_intrinsics();
rs2::video_stream_profile dvsprofile(dprofile);
depth_intrin = dvsprofile.get_intrinsics();
depth2color_extrin = dprofile.get_extrinsics_to(cprofile);
color2depth_extrin = cprofile.get_extrinsics_to(dprofile);

float realPoint[3];
float depthPixel[2];
depthPixel[0] = col;
depthPixel[1] = row;
float depth = depthImage.ptr<ushort>(row)[col] / 4000.f;
rs2_deproject_pixel_to_point(realPoint, &depth_intrin, depthPixel, depth);
```
Is there something wrong in what I did?
BTW, can I read 3D values in the Viewer to check whether my results are correct?
Hi @FeiSong123 I do not see a problem in your code. Does the error in X increase as the coordinate you are checking moves further from the center of the image? That would resemble the L515 case of XY error at https://github.com/IntelRealSense/librealsense/issues/9749, which also used rs2_deproject_pixel_to_point.
You can check XYZ coordinates in the RealSense Viewer by going into the 3D pointcloud mode and moving the mouse cursor over points on the pointcloud. The real-world XYZ values will be displayed in text at the bottom of the center panel of the Viewer.
I changed col to 320 (my image is 640x480) and X prints 0.595112, so it does seem that the error in X increases as the pixel moves further from the center of the image.
Sorry to say that my Viewer doesn't seem to show real-world XYZ values when I move the mouse cursor over points on the pointcloud.
Have you zoomed into the pointcloud with the mouse scroll wheel? That can cause the XYZ readout to disappear once you zoom in past a certain magnification and it reappears when zooming out with the scroll wheel.
The XYZ readout will also disappear if the cursor is moved over a part of the image where there are no depth values.
I turned it on via realsense-viewer -> depth sensor -> color sensor -> moved the mouse over the pointcloud, but no text appears like in 2D mode.
It is unusual that the 3D pointcloud mode of the Viewer is not displaying coordinates. Let's focus on the X coordinate issue instead.
I would usually suggest using aligned intrinsics to improve accuracy, and you mention that yourself at the start of this case. The alternative is to use the intrinsics of the stream that you are aligning to. So if you are performing depth to color alignment, use the color frame intrinsics.
I have tried, as you suggested, using intrinsics with rs2_deproject_pixel_to_point on a depth image aligned to the color image. What differs is whether the depth intrinsics are taken before or after alignment. With the color intrinsics, aligned or not, the 3D point is correct. With the depth intrinsics, the 3D point is correct only if they come from the aligned frame; otherwise it is incorrect. In my view, which intrinsics to use with rs2_deproject_pixel_to_point depends on whether the object's pixel coordinates are consistent between the two images (i.e. whether alignment has been performed). If yes, use the color intrinsics; if not, use the unaligned depth intrinsics after obtaining depthPixel from colorPixel via rs2_project_color_pixel_to_depth_pixel. I think alignment also maps the intrinsics from one stream to the other. Is there a problem with my understanding?
As for the previous accuracy error, one reason is that I used the unaligned depth intrinsics; the other is some low-level mistakes I made. Sorry for taking up your time.
It's no trouble at all. Thanks very much for your patience!
If using depth to color alignment, use the color intrinsics.
If using color to depth alignment, use the depth intrinsics.
When performing alignment with the align_to instruction, the RealSense SDK's align processing block automatically adjusts for differences in the field of view (FOV) size and resolution of the depth and color sensors. More information about this can be found in the opening paragraph of the documentation for the SDK's C++ rs-align example program.
https://github.com/IntelRealSense/librealsense/tree/master/examples/align#overview
OK, I'll learn more from the example. I have another question. I also learned from https://github.com/IntelRealSense/librealsense/issues/4536#issuecomment-516985209 that "depth aligned to color" is rendered from the position of the RGB sensor. Does this mean that the coordinate origin moves from the depth sensor to the RGB sensor, and in particular that Z still carries the '-4.5 mm from the front glass' offset?
Intel's white-paper guide about projection and texture mapping states that when using deprojection and mapping RGB to a point cloud, the color sensor becomes the new origin.
https://dev.intelrealsense.com/docs/projection-texture-mapping-and-occlusion-with-intel-realsense-depth-cameras#32-texture-mapping
On cased RealSense models with front glass, depth measurement starts at the front glass instead of from the sensor lens. If you want to find the ground truth measurement of an L515, you can add -4.5 mm to the depth measurement.
That is, X and Y are converted from the IR transmitter to the RGB camera origin, but -4.5 mm still needs to be added to Z if I use depth-to-color alignment or the pointcloud and want to find the ground truth measurement?
Yes, add -4.5 millimeters to the depth measurement if you want to find the ground truth measurement. The majority of RealSense users just use the measurement from the front glass though.
OK, thanks for your patience and assistance !!!