
Questions: how to capture/store the depth image for extrinsic calibration

ghost opened this issue 5 years ago · 1 comment

Hello,

Currently we are trying to compute the transformation matrix between the camera frame and a given fixed frame. I took a look at this issue (here); the author shared his repo, where his depth images are stored as .npy files.

  1. When I load the image with `img = np.load('image.npy')` and display it with `plt.imshow(img, cmap='gray')` (note: `plt.imshow`, not `plt.show`, takes the array and colormap), I get the image attached below. My first question is: how can this depth image have dimensions of (3072, 4096)? The author used the WFOV unbinned mode for depth, and I believe that resolution is (1024, 1024). [depth image attached]

  2. I think my second question is related to the third. It looks like the depth image has been undistorted, and I wonder why. I thought we need to re-project the RGB image into the depth frame, so I don't understand why the depth image itself needs to be undistorted.

  3. If I want to perform extrinsic calibration, I am under the impression that I should use the rgb_to_depth images together with the depth images. Why is the author using the rgb and depth images instead?
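The loading step in question 1 can be sketched as follows. This is a minimal, self-contained example: the file name and pixel values are made up here (the real .npy files live in the linked repo), using a synthetic (1024, 1024) frame to stand in for a WFOV-unbinned depth image.

```python
import numpy as np

# Synthetic stand-in for a WFOV-unbinned Azure Kinect depth frame:
# 1024x1024, uint16, depth in millimetres (values here are random).
depth = np.random.randint(0, 5000, size=(1024, 1024), dtype=np.uint16)
np.save("depth_frame.npy", depth)

# Loading and inspecting, as in question 1:
img = np.load("depth_frame.npy")
print(img.shape)  # (1024, 1024) for WFOV unbinned
print(img.dtype)  # uint16

# To visualise with matplotlib:
#   plt.imshow(img, cmap='gray'); plt.show()
# (plt.imshow takes the array and cmap; plt.show() just renders the figure)
```

If `np.load` reports a shape like (3072, 4096), the array matches a colour resolution (e.g. the 4096x3072 RGB mode), not any of the depth modes, which suggests the stored image is not a raw depth frame.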

Thank you.

ghost avatar Feb 18 '20 00:02 ghost

If you are just interested in the intrinsic calibration without storing the colour and depth images, then https://github.com/microsoft/Azure_Kinect_ROS_Driver/pull/200 might help you.
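For reference, the intrinsics the driver publishes in a sensor_msgs/CameraInfo message can be reshaped into the usual 3x3 matrix and used to back-project a depth pixel into the camera frame. A minimal sketch with made-up values (these are illustrative numbers, not real Azure Kinect calibration):

```python
import numpy as np

# CameraInfo.K is a flat, row-major 3x3 intrinsic matrix.
# The values below are invented for illustration only.
K_flat = [504.0,   0.0, 512.0,
            0.0, 504.0, 512.0,
            0.0,   0.0,   1.0]
K = np.array(K_flat).reshape(3, 3)
fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]

# Back-project one depth pixel (u, v) with depth z (mm) into the camera frame.
u, v, z = 600.0, 400.0, 1500.0
X = (u - cx) * z / fx
Y = (v - cy) * z / fy
point = np.array([X, Y, z])  # 3D point in camera coordinates, mm
print(point)
```

This pinhole back-projection assumes the image has already been undistorted, which is one reason undistorted depth images are convenient for extrinsic calibration.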

christian-rauch avatar Jul 14 '21 09:07 christian-rauch