Converting a depth map into a point cloud in the world reference frame
Hi, I am having some trouble converting the depth map produced by pyrender into a point cloud in world coordinates. I create the camera and renderer via:
camera = pyrender.PerspectiveCamera(yfov=60. / 180.0 * np.pi, znear=0.0001, zfar=10.)
r = pyrender.OffscreenRenderer(120, 120)
and I set the camera's pose via:
pose = np.eye(4)
pose[:3, 3] = position
pose[:3, :3] = orientation
camera.matrix = pose
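For context, my understanding is that pyrender follows the OpenGL camera convention, so in the camera's local frame +x points right, +y points up, and the camera looks along -z; the pose above should then map camera coordinates into world coordinates. A minimal sanity check I use (this is just my assumption about the convention, not something I have verified in the docs):
# assuming the OpenGL convention, the world-space viewing direction is the
# negated third column of the rotation part of the camera-to-world pose
view_dir_world = -pose[:3, 2]
cam_origin_world = pose[:3, 3]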
where position is a 3D coordinate and orientation is a 3x3 rotation matrix. I then render the scene to produce a color image and a depth map:
color, depth = r.render(scene)
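If it helps, the depth array comes back with the same 120x120 shape as the viewport; as far as I can tell it holds linear depth along the camera's viewing direction, with 0 at pixels where no geometry was hit, so I mask those out before converting (this is my assumption about pyrender's output, not something I have verified):
print(depth.shape)  # (120, 120), matching the OffscreenRenderer viewport
valid = depth > 0   # background pixels appear to come back as exactly 0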
How do I convert this depth map into a point cloud in the world reference frame (i.e. the points should lie exactly on the surface of whatever object each camera ray hits in the pyrender scene)? This is what I have so far:
# create a grid of pixel offsets for the x and y coordinates
ys = np.arange(0, 121)
ys = np.tile(ys, (121, 1)) - 60
ys = np.delete(ys, 60, axis=0)  # drop the zero row/column to leave a 120x120 grid
ys = np.delete(ys, 60, axis=1)
xs = ys.transpose()
fov = 60. / 180.0 * np.pi
# back-project each pixel using its depth
point_cloud = np.zeros((120, 120, 3))
angle = np.arctan((np.abs(xs) / 60) * np.tan(fov / 2))  # view angle for a given x coordinate
point_cloud[:, :, 0] = -depth * np.tan(angle) * np.sign(xs)
angle = np.arctan((np.abs(ys) / 60) * np.tan(fov / 2))  # view angle for a given y coordinate
point_cloud[:, :, 1] = -depth * np.tan(angle) * np.sign(ys)
point_cloud[:, :, 2] = -depth  # negative because the camera looks down -z
# rotate into the world frame and translate by the camera position
point_cloud = point_cloud.reshape((-1, 3))
point_cloud = orientation.dot(point_cloud.T).T + position
The issue is that when I view the resulting point cloud in Blender alongside the original objects in the scene, they do not line up. Can anyone show me why this isn't working? I think there is both a scale issue coming from the FOV and an issue with how I recover the x and y coordinates from a given depth. Any help would be much appreciated.
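For reference, the pinhole-style unprojection that I believe should be equivalent, using a focal length in pixels derived from yfov (f = (H / 2) / tan(yfov / 2)), would look roughly like the sketch below. I am not confident about the pixel-centre offsets or the sign conventions, so treat it as my best guess rather than working code (it reuses depth, fov, orientation, and position from above):
H = W = 120
f = (H / 2) / np.tan(fov / 2)                   # focal length in pixels, from the vertical FOV
u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel indices, u to the right, v downward
x = (u - (W - 1) / 2) / f                       # normalized image-plane x
y = -((v - (H - 1) / 2) / f)                    # flip v so +y points up in camera coordinates
points_cam = np.stack([x * depth, y * depth, -depth], axis=-1)  # camera looks along -z
points_world = points_cam.reshape(-1, 3) @ orientation.T + position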