pyrender
Extract point cloud (potential feature request)
Is there a way to get a point cloud of a scene?
If not, I think this would be a useful feature.
It's perfectly possible to do this by simply deprojecting the depth image into 3D points. I'll mock up a bit of code that does this and put it in a gist.
That would be great. Why not make it a function in the Python API (instead of a gist)?
I'll do that and add it to the camera classes. Will try to write it up tonight.
This is the code I wrote for generating a point cloud of the scene. It assumes the aspect ratio is 1; you can modify it to work with any aspect ratio:
import numpy as np

def pointcloud(depth, fov):
    # Normalized focal lengths; assumes an aspect ratio of 1.
    fy = fx = 0.5 / np.tan(fov * 0.5)
    height = depth.shape[0]
    width = depth.shape[1]

    # Pixel coordinates of all valid (non-zero) depth values.
    mask = np.where(depth > 0)
    x = mask[1]
    y = mask[0]

    # Normalize pixel coordinates to [-0.5, 0.5].
    normalized_x = (x.astype(np.float32) - width * 0.5) / width
    normalized_y = (y.astype(np.float32) - height * 0.5) / height

    # Back-project into camera-frame 3D points.
    world_x = normalized_x * depth[y, x] / fx
    world_y = normalized_y * depth[y, x] / fy
    world_z = depth[y, x]
    ones = np.ones(world_z.shape[0], dtype=np.float32)

    # Return homogeneous (N, 4) points.
    return np.vstack((world_x, world_y, world_z, ones)).T
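For reference, a minimal usage sketch (not from the thread): render a depth map offscreen and feed it to pointcloud(). The scene and yfov below are assumed to come from your own setup.

import numpy as np
import pyrender

# `scene` is assumed to be an existing pyrender.Scene that already contains a
# PerspectiveCamera whose vertical field of view (in radians) is `yfov`.
yfov = np.pi / 3.0
r = pyrender.OffscreenRenderer(viewport_width=640, viewport_height=640)
color, depth = r.render(scene)           # depth: (H, W) float array in camera units
points_h = pointcloud(depth, fov=yfov)   # (N, 4) homogeneous camera-frame points
r.delete()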
Hi, I haven't found any code in the camera class that deprojects a point from the image plane to the 3D world. In addition, none of the code I found uses the [R|t] extrinsic calibration matrix. Shouldn't we use that as well?
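For reference, a minimal sketch of that extrinsic step: if you want the points in world coordinates rather than the camera frame, apply the camera-to-world pose of the camera node. Here camera_pose is assumed to be the 4x4 matrix you passed to scene.add(camera, pose=camera_pose), and points_h is the (N, 4) homogeneous output of pointcloud().

import numpy as np

def camera_to_world(points_h, camera_pose):
    # camera_pose is the 4x4 camera-to-world [R|t] matrix the camera node was added with.
    return (camera_pose @ points_h.T).T[:, :3]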
Hi everyone!
It's true that this isn't easy to see; however, you can access the point cloud from the node in the scene. For example:
# Setting up the scene
scene = pyrender.Scene(bg_color=[0, 0, 0], ambient_light=np.array([0, 0, 0]))
nc = pyrender.Node(camera=camera)
scene.add_node(nc)
nm = pyrender.Node(mesh=mesh)
scene.add_node(nm)

# Extracting the point cloud
points = scene.get_nodes(node=nm)
points = next(iter(points))
points = points.mesh.primitives[0].positions
If you have a rotation applied to the mesh, you just need to apply the same rotation to the point cloud: create a new scene, build a new mesh from the extracted points, and set_pose (see the sketch below).
Hope it helps!
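As a sketch of that last step, assuming pose is the 4x4 matrix you applied with scene.set_pose(nm, pose) and points is the (N, 3) array extracted above:

import numpy as np

# Apply the node's rotation and translation directly to the extracted vertex positions.
points_world = (pose[:3, :3] @ points.T).T + pose[:3, 3]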
To whoever might need this: @arsalan-mousavian's answer is excellent, but to use it properly you might need to pay attention to:
(1) The y-axis of the numpy array and of the 3D coordinate frame are reversed. In my case I do normalized_y = -normalized_y
(2) The z-axis points in the negative direction by default. In my case I do world_z = -depth[y, x]
(3) If the rendered image is not a square shape like 640*640, you have to incorporate the aspect ratio. In my case I do
fy = 0.5 / np.tan(fov * 0.5)
fx = fy / aspect_ratio
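Putting those three adjustments together, a sketch of the modified function (only the aspect-ratio handling and the two axis flips change relative to the original; aspect_ratio is width / height):

import numpy as np

def pointcloud_adjusted(depth, fov, aspect_ratio):
    # fy comes from the vertical field of view; fx is scaled by aspect_ratio = width / height.
    fy = 0.5 / np.tan(fov * 0.5)
    fx = fy / aspect_ratio
    height, width = depth.shape

    mask = np.where(depth > 0)
    x = mask[1]
    y = mask[0]

    normalized_x = (x.astype(np.float32) - width * 0.5) / width
    normalized_y = -(y.astype(np.float32) - height * 0.5) / height  # (1) flip the image y-axis

    world_x = normalized_x * depth[y, x] / fx
    world_y = normalized_y * depth[y, x] / fy
    world_z = -depth[y, x]  # (2) the camera looks down -z in the OpenGL convention
    return np.vstack((world_x, world_y, world_z)).T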
This code is great, but how can I view those point clouds? How can I project them onto the screen from the image plane? Using matplotlib.imshow I still get pixel values, not points.
Thank you in advance!
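One way to visualize them is to build a point mesh with pyrender.Mesh.from_points and open it in the interactive viewer; a sketch, where points is assumed to be the (N, 3) array produced above:

import pyrender

# Render the points as a point primitive; point_size controls how large each point is drawn.
cloud_scene = pyrender.Scene()
cloud_scene.add(pyrender.Mesh.from_points(points))
pyrender.Viewer(cloud_scene, use_raymond_lighting=True, point_size=2)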
Hi! Is there a way to generalize this to any camera, perhaps using the projection matrix from the camera.get_projection_matrix() function? Thanks!
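One possible generalization, as a sketch: recover the pixel focal lengths from the 4x4 matrix returned by camera.get_projection_matrix(width, height) and deproject with a standard pinhole model. This assumes the principal point sits at the image center; for an IntrinsicsCamera with an off-center principal point, P[0, 2] and P[1, 2] would need to be taken into account as well.

import numpy as np

def pointcloud_from_projection(depth, camera, width, height):
    # For pyrender's perspective-style cameras, P[0, 0] = 2 * fx / width and
    # P[1, 1] = 2 * fy / height, with fx and fy in pixels.
    P = camera.get_projection_matrix(width=width, height=height)
    fx = P[0, 0] * width * 0.5
    fy = P[1, 1] * height * 0.5

    y, x = np.where(depth > 0)
    z = depth[y, x]
    world_x = (x - width * 0.5) * z / fx
    world_y = -(y - height * 0.5) * z / fy  # flip the image y-axis
    world_z = -z                            # camera looks down -z
    return np.vstack((world_x, world_y, world_z)).T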
For the sake of completeness, I have included all the elements in a basic point cloud rendering script in the following gist for future wanderers to this issue. Hope it helps!
https://gist.github.com/kuldeepbrd1/ccb2b3f8e8ee6ff16698749c4450a823