Transforming pixel points into the world frame gives a tilted point cloud
Hi, I want to transform all points in a depth map from the camera frame to the world frame. I use the function `get_camera_info` in `dataset_states_to_obs.py` to get the intrinsic and extrinsic matrices of the cameras `agentview` and `robot0_eye_in_hand`, and I generate the depth maps with this script as well. Then I wrote the function below to do the transformation:
```python
import numpy as np

def pixels_to_world(K, Q, depth_map):
    height, width = depth_map.shape
    # generate a mesh grid of (u, v) pixel coordinates
    y_coords, x_coords = np.meshgrid(np.arange(height), np.arange(width), indexing='ij')
    pixel_coords = np.stack((x_coords, y_coords), axis=-1)
    # homogeneous pixel coordinates -> normalized camera rays via inv(K)
    homogeneous = np.append(pixel_coords, np.ones((height, width, 1)), axis=-1)
    normalized_coords = np.dot(np.linalg.inv(K), homogeneous.reshape(-1, 3).T)
    # scale the rays by depth to get 3D points in the camera frame
    camera_coords = normalized_coords[:2, :].T * depth_map.flatten()[:, np.newaxis]
    camera_coords = np.hstack((camera_coords, depth_map.flatten()[:, np.newaxis], np.ones((height * width, 1))))
    # camera frame -> world frame (note: inv(Q) is applied, so Q is treated as a world-to-camera transform)
    world_coords = np.dot(np.linalg.inv(Q), camera_coords.T)
    return world_coords[:3, :].T.reshape(height, width, 3)
```
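For reference, this is roughly how I get the matrices and call the function (a sketch, assuming direct access to the simulator through `env.sim`; the depth map is assumed to already be converted to metric units, e.g. via `camera_utils.get_real_depth_map`):

```python
import robosuite.utils.camera_utils as CU

# matrices for the agentview camera, as get_camera_info collects them
K = CU.get_camera_intrinsic_matrix(env.sim, "agentview", camera_height=84, camera_width=84)
R = CU.get_camera_extrinsic_matrix(env.sim, "agentview")  # 4x4 camera pose in the world frame

points = pixels_to_world(K, R, depth)  # depth: (H, W) metric depth map -> (H, W, 3) points
```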
I replaced the function `transform_from_pixels_to_world` in the script `test_camera_transforms.py` (both are in robosuite) with my function, and the test result was good, so I believe the function works correctly.
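For the test, the substitution was wired up roughly like this (a sketch from memory; variable names such as `obj_pixels` are illustrative, not the exact ones in the script):

```python
import numpy as np
import robosuite.utils.camera_utils as CU

# in place of CU.transform_from_pixels_to_world(pixels, depth_map, camera_to_world_transform)
K = CU.get_camera_intrinsic_matrix(env.sim, camera_name, camera_height, camera_width)
Q = np.linalg.inv(CU.get_camera_extrinsic_matrix(env.sim, camera_name))
world_map = pixels_to_world(K, Q, depth_map)                 # (H, W, 3)
estimated_obj_pos = world_map[obj_pixels[0], obj_pixels[1]]  # illustrative indexing
```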
However, when I perform the same transformation in robomimic and visualize the result, it looks weird.
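The visualization itself is just a 3D scatter of the returned points, along these lines (a minimal sketch; `points` is the output of `pixels_to_world` above):

```python
import matplotlib.pyplot as plt

pts = points.reshape(-1, 3)  # flatten (H, W, 3) world-frame points
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```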
Corresponding observation of `agentview`:
As you can see, the point cloud is tilted; the `agentview` cloud (left in the picture) is tilted by about 45 degrees. It seems the points are transformed into the camera frame but not all the way into the world frame. Otherwise it is strange, since the world frame is centered at the robot's base and should not be inclined. I don't know why this happens. Could you help me? Thanks!