Alignment of color and depth
Thank you very much for explaining how the dataset was made, but I'm having some problems creating the data myself.
I take the color and depth images from the BEHAVE data, together with the color-camera intrinsics from the calibration (cx, cy, fx, fy, k1~k6, p1, p2), back-project the depth into a point cloud, and color it with the color image. The results are shown below; as you can see, they are very good.
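For reference, here is roughly how I do the back-projection (a minimal sketch that ignores the distortion coefficients; the file names and intrinsic values are placeholders, not the real BEHAVE ones):

```python
import cv2
import numpy as np
import open3d as o3d

# Placeholder inputs: a color image and a depth map already registered to
# the color camera (depth in millimeters, same resolution as the color).
color = cv2.cvtColor(cv2.imread("color.jpg"), cv2.COLOR_BGR2RGB)
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Color-camera intrinsics from the calibration (placeholder values).
fx, fy, cx, cy = 976.5, 976.4, 1017.8, 787.9

h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth / 1000.0            # mm -> m
x = (u - cx) * z / fx         # pinhole back-projection
y = (v - cy) * z / fy

points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
colors = color.reshape(-1, 3) / 255.0
valid = points[:, 2] > 0      # drop invalid (zero-depth) pixels

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[valid])
pcd.colors = o3d.utility.Vector3dVector(colors[valid])
o3d.visualization.draw_geometries([pcd])
```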
But when I record Kinect data on my own, I get the results below.
You can see that part of the human body's color lands on the wall, which means the depth corresponding to that part of the color image is wrong; in other words, the color and depth are misaligned.
My color (get_color_image()) and depth (get_transformed_depth_image()) images are read with the Kinect's built-in functions, and all the parameters I use also come from the Kinect SDK (get_calibration()).
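Concretely, my capture code looks roughly like this (a minimal sketch assuming the pykinect_azure wrapper, which is where these function names come from; the configuration values are placeholders, not necessarily my actual recording settings):

```python
import pykinect_azure as pykinect

pykinect.initialize_libraries()

# Placeholder configuration; my real recording settings may differ.
device_config = pykinect.default_configuration
device_config.color_resolution = pykinect.K4A_COLOR_RESOLUTION_720P
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_NFOV_UNBINNED

device = pykinect.start_device(config=device_config)

capture = device.update()
ret_color, color = capture.get_color_image()              # color frame
ret_depth, depth = capture.get_transformed_depth_image()  # depth warped into the color camera
calibration = device.get_calibration(device_config.depth_mode,
                                     device_config.color_resolution)
```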
So I want to ask a few questions:
- Are the color and depth in your data read directly from the Kinect (with functions similar to the ones I use)? Or do you align color and depth yourself using the parameters (the color, colortodepth, and depth entries in calibration.json)? See the sketch after this list for what I mean by the latter.
- Do you use any additional calibration method, or do all the matrix parameters come from the Kinect SDK? If there is additional calibration, how is it done?
- The issue of color-depth registration is also discussed here; as of 2021 it remains unanswered. Does your device have this kind of problem? If not, do you have any suggestions for solving the problem I am facing?
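To clarify the first question, this is the kind of manual alignment I mean (a minimal sketch; the JSON keys, the direction of the colortodepth transform, and the helper itself are my assumptions, not something I read from your code):

```python
import json
import numpy as np

# Assumed calibration.json layout: "color"/"depth" hold intrinsics, and
# "colortodepth" holds a rigid transform (rotation R, translation t) that
# maps color-camera coordinates into depth-camera coordinates.
calib = json.load(open("calibration.json"))
R = np.asarray(calib["colortodepth"]["rotation"]).reshape(3, 3)
t = np.asarray(calib["colortodepth"]["translation"]).reshape(3)
fx, fy = calib["color"]["fx"], calib["color"]["fy"]
cx, cy = calib["color"]["cx"], calib["color"]["cy"]

def color_pixels_for_depth_points(points_depth):
    """Project depth-camera points into the color image.

    points_depth: (N, 3) points in the depth camera frame (meters),
    e.g. back-projected from the raw depth map. Inverting the assumed
    color->depth transform takes them into the color frame first.
    """
    pts_color = (points_depth - t) @ R   # R^T @ (p - t), row-vector form
    u = fx * pts_color[:, 0] / pts_color[:, 2] + cx
    v = fy * pts_color[:, 1] / pts_color[:, 2] + cy
    return u, v                          # sample the color image at (u, v)
```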
Looking forward to your reply