
Depth sensor readings, depth sensor range, and point cloud

Open yhymason opened this issue 2 years ago • 3 comments

Hi there, I have a few questions regarding the depth camera in Habitat-lab:

1. I am using depth images and Open3D's interfaces as follows to create point cloud objects:

```python
depth_img = o3d.geometry.Image(depth_as_np)
fx = camera_info_k[0][0]
fy = camera_info_k[1][1]
cx = camera_info_k[0][2]
cy = camera_info_k[1][2]
camera_intrinsic = o3d.camera.PinholeCameraIntrinsic(
    camera_info.width, camera_info.height, fx, fy, cx, cy
)
pcd = o3d.geometry.PointCloud().create_from_depth_image(
    depth=depth_img,
    intrinsic=camera_intrinsic,
    extrinsic=np.identity(4),
    depth_scale=1000.0,
    depth_trunc=1000.0,
    stride=1,
)
```

However, the resulting point cloud looks too deterministic and different from what we normally expect. See the images linked below (first: result, second: aligncolortodepth_ex1a). The first image is a point cloud generated in Habitat's test scene and the second is a Matlab point cloud example. As you can see, the Matlab example has a lot more randomness in the point coordinates. I wonder why the point cloud generated from Habitat's depth image looks so different.

2. I also wonder whether there's a way to limit the depth camera's FOV in its depth range and along the vertical axis, since I am not interested in objects that are located on the ceiling or far away from the robot itself.

yhymason avatar Oct 19 '21 05:10 yhymason

I'm not familiar with Open3D, but depth_scale=1000.0 seems odd. Habitat-lab outputs depth in [0,1] by default where a value of 1 represents something 10m away, so maybe depth_scale=10.0?

1. Not sure where that source image is from, but it looks like it's from a real camera. A simulated camera (without a noise model) will be perfect, so the points will be very regular.
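To make the scale point concrete, here is a minimal sketch (not an official recipe) of un-normalizing Habitat's depth before handing it to Open3D. It assumes the default MIN_DEPTH=0, MAX_DEPTH=10 with NORMALIZE_DEPTH enabled, and uses hypothetical placeholder values for the intrinsics and the depth array:

```python
import numpy as np
import open3d as o3d

# Hypothetical example values; in practice depth_as_np comes from the Habitat
# depth sensor observation and the intrinsics from your camera configuration.
width, height = 640, 480
fx = fy = 320.0
cx, cy = width / 2.0, height / 2.0
depth_as_np = np.random.uniform(0.0, 1.0, (height, width)).astype(np.float32)

MIN_DEPTH, MAX_DEPTH = 0.0, 10.0  # assumed habitat-lab defaults (NORMALIZE_DEPTH=True)

# Undo the [0, 1] normalization to get metric depth in meters.
depth_m = MIN_DEPTH + depth_as_np * (MAX_DEPTH - MIN_DEPTH)

depth_img = o3d.geometry.Image(depth_m)
intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)

pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth=depth_img,
    intrinsic=intrinsic,
    extrinsic=np.identity(4),
    depth_scale=1.0,        # depth is already in meters, so no further division
    depth_trunc=MAX_DEPTH,  # discard points beyond the sensor's max range
)
```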

erikwijmans avatar Oct 20 '21 01:10 erikwijmans

@erikwijmans thanks. According to this doc, the depth image is scaled first and truncated afterwards, so leaving the default values keeps the image unchanged. I intend to keep the points in Habitat's world coordinate system, so I guess I should not scale or truncate the depth values. Good point about the simulated camera; I take it the depth camera sensor in Habitat is not a noisy camera by default. So maybe I should add a small Gaussian noise vector to each point in the point cloud to make it look like real camera data?

Also, https://aihabitat.org/docs/habitat-sim/habitat_sim.sensor.CameraSensor.html shows there are properties like [far, near, far_plane_dist, near_plane_dist] and a function [zoom]. I wonder how I can use these to achieve the FOV I want. Additionally, I found that habitat-lab's defaults.py defines constants like [MAX_DEPTH, MIN_DEPTH] for cameras; I wonder how they are used and what they do.
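If you do go the route of hand-added noise rather than a sensor noise model, a minimal sketch of jittering the point coordinates might look like the following. The function name and the standard deviation are arbitrary choices for illustration, and a real depth camera's noise grows with distance, which this simple version ignores:

```python
import numpy as np
import open3d as o3d

def add_gaussian_noise(pcd: o3d.geometry.PointCloud, sigma: float = 0.005) -> o3d.geometry.PointCloud:
    """Perturb every point independently with zero-mean Gaussian noise (sigma in meters)."""
    pts = np.asarray(pcd.points)
    pts = pts + np.random.normal(0.0, sigma, size=pts.shape)
    pcd.points = o3d.utility.Vector3dVector(pts)
    return pcd
```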

yhymason avatar Oct 20 '21 01:10 yhymason

We currently support the Redwood depth noise model (http://redwood-data.org/indoor/dataset.html). That can be added by config.SIMULATOR.DEPTH_SENSOR.NOISE_MODEL = "RedwoodDepthNoiseModel".
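For reference, a minimal sketch of applying that setting with the YACS-style config API habitat-lab used at the time; the task config path here is hypothetical, so substitute whichever task config you already load:

```python
import habitat

# Hypothetical task config path; use your own task config.
config = habitat.get_config("configs/tasks/pointnav.yaml")
config.defrost()
config.SIMULATOR.DEPTH_SENSOR.NOISE_MODEL = "RedwoodDepthNoiseModel"
config.freeze()

env = habitat.Env(config=config)
```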

The easiest way to achieve the FOV you want is to set the FOV, i.e. config.SIMULATOR.DEPTH_SENSOR.HFOV = <horizontal field of view in degrees>.
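For context on how HFOV relates to the pinhole intrinsics used in the question, the focal length in pixels follows from the horizontal field of view and the image width, so narrowing the FOV is equivalent to lengthening the focal length. A small sketch of that standard pinhole relation (not a habitat-lab API):

```python
import math

def focal_from_hfov(hfov_deg: float, width: int) -> float:
    """Focal length in pixels for a pinhole camera with the given horizontal FOV."""
    return (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

# Example: a 90-degree HFOV on a 640-pixel-wide image gives fx = 320.0 pixels.
fx = focal_from_hfov(90.0, 640)
```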

MAX_DEPTH and MIN_DEPTH do what their names imply. They clip depth values to be in [min, max]. If NORMALIZE_DEPTH is true, that interval is normalized to [0,1]. You'll need to undo that normalization to recover world coordinates.

erikwijmans avatar Oct 20 '21 16:10 erikwijmans

@yhymason Feel free to re-open the issue, if you still have questions.

rpartsey avatar Sep 12 '22 13:09 rpartsey