
Using Point Cloud Data

Open RMichaelSwan opened this issue 2 years ago • 5 comments

The pipeline seems to support RGB-D cameras quite well. That being said, how difficult would it be to adapt it to ingest point clouds (which may or may not contain RGB information) instead? I'm thinking something similar to what is possible with voxblox.

RMichaelSwan avatar Apr 26 '22 14:04 RMichaelSwan

Hey @RMichaelSwan, great question. :) The catch is how we GPU-accelerate the integration: we rely on getting the depth data as a 2D image and exploit adjacency in pixel space, so it's not as trivial as it was in voxblox (which ingests full pointclouds and doesn't exploit the structure of the data at all). Ingesting 3D pointclouds originating from LIDAR as 2D range images is definitely on our "future features" list (example: https://www.mdpi.com/2079-9292/10/11/1224/pdf), so you can expect this feature sometime in the future.
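The spherical projection that turns a LIDAR pointcloud into a 2D range image can be sketched roughly as below. This is a minimal illustration, not nvblox code; the function name, default sizes, and the closest-return tie-breaking are assumptions:

```python
import numpy as np

def pointcloud_to_range_image(points, width=1800, height=16,
                              vertical_fov_rad=np.radians(30.0)):
    """Project an (N, 3) XYZ point cloud into a (height, width) range image
    by binning points on azimuth (columns) and elevation (rows)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ranges = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)  # [-pi, pi)
    elevation = np.arcsin(np.clip(z / np.maximum(ranges, 1e-9), -1.0, 1.0))

    # Map angles to pixel indices.
    cols = ((azimuth + np.pi) / (2.0 * np.pi) * width).astype(int) % width
    rows = ((elevation + vertical_fov_rad / 2.0)
            / vertical_fov_rad * (height - 1)).round().astype(int)
    valid = (rows >= 0) & (rows < height) & (ranges > 0)

    image = np.full((height, width), np.nan, dtype=np.float32)
    # Assign farthest points first so the closest return wins per pixel.
    order = np.argsort(-ranges[valid])
    image[rows[valid][order], cols[valid][order]] = ranges[valid][order]
    return image
```

Pixels with no return stay NaN; a downstream consumer would typically treat those as invalid depth.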

There's also been some discussion on how to do at least the LIDAR -> range image part in ROS already: https://discourse.ros.org/t/from-360-degree-sparse-point-cloud-lidar-pointcloud2-msg-to-depth-image/3318/2 — so it might not be too difficult to try the current state if you can convert your LIDAR data into depth images. If you try this approach, please let us know how it goes, and tell us if you hit blockers! I expect there are one or two things with the camera models we didn't anticipate.

Regarding color: it is totally optional. If you set the color option to False in the settings, you should get the mesh colored by normals instead (same as in voxblox).

helenol avatar Apr 26 '22 15:04 helenol

Thank you, this is very helpful! I will try the LiDAR -> range image conversion soon and let you know here how it goes. Might be especially interesting as I have a dataset which uses multiple LiDARs.

RMichaelSwan avatar Apr 26 '22 16:04 RMichaelSwan

@helenol Status update: I was able to convert my LiDAR output (from a Velodyne VLP16 set to output a structured pointcloud) to a disparity image using pcl_ros. However, that conversion depends on an rgb field existing in the point cloud data (I'm guessing the pcl_ros code was written for converting RGB-D-generated pointclouds to images). For the moment I just relabeled my intensity channel as "rgb" to see what it would do. It seems to work fine, though it would likely be better to change the code to ignore rgb data or to rewrite the array to provide a black or white rgb channel. I might try this other repo if I have time.
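For anyone trying the same trick: rather than relabeling the intensity channel, the values can be packed into the float32 `rgb` layout PCL expects (a `uint32` laid out as `0x00RRGGBB`, reinterpreted bit-for-bit as a `float32`). A small sketch, with the function name assumed:

```python
import numpy as np

def intensity_to_packed_rgb(intensity):
    """Convert normalized intensity values in [0, 1] into PCL's packed
    float32 'rgb' field. Using the same byte for R, G, and B yields a
    grayscale color per point."""
    gray = np.clip(np.asarray(intensity) * 255.0, 0, 255).astype(np.uint32)
    packed = (gray << np.uint32(16)) | (gray << np.uint32(8)) | gray
    # Reinterpret the uint32 bit pattern as float32 (no numeric conversion).
    return packed.view(np.float32)
```

The resulting array can be written into the `rgb` field of a `PointCloud2` so downstream RGB-expecting code sees a grayscale cloud instead of garbage colors.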

In any case, the other issue I'm having is that the LiDAR's resolution doesn't seem to be used well by nvblox. The LiDAR reports 1812x32 dimensions (height x width, though those should probably be swapped), and the points don't seem to produce any output on the nvblox side in terms of mesh or pointcloud updates. A sample of the generated disparity is shown below. I suspect the rgb channel is causing issues.

[Image: lidar_to_disparity]

Two other asides:

  1. When launching carter_sim.launch.py, is it supposed to regularly produce a costmap, or is that something that must be requested through the service? I am not using Isaac Sim, just running the stack over ROS 2 with my own rosbag.
  2. What is the expected costmap production time? I was also able to run some tests using a stereo camera with disparity in a large outdoor environment (hundreds of meters of travel), but the latency steadily increases the longer it runs (TSDF integration taking multiple seconds, according to the nvblox Timer output, after a few minutes). I don't want to use this camera for map representation in my use case, as the LiDAR range is much longer (up to 200 m).

RMichaelSwan avatar May 04 '22 23:05 RMichaelSwan

I was able to convert the LiDAR to disparity more correctly per this. Attached is the LiDAR output in case you're curious (the sensor is sitting on a vehicle outdoors). Note, however, that nvblox requires a camera_info topic carrying camera intrinsics for calibration. I used some dummy values to verify it was working and got output from nvblox, though the output is not sensible (I probably used incorrect distortion numbers for the LiDAR; I don't think camera intrinsics should be required for a LiDAR anyway?).
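For reference, the dummy values I mean are pinhole intrinsics of the kind that fill a sensor_msgs/CameraInfo K matrix; they can be derived from the image size and an assumed field of view. A sketch (function name assumed; a pinhole model is a poor fit for a spinning LiDAR's spherical projection, so these really are only smoke-test values):

```python
import math

def dummy_pinhole_intrinsics(width, height, hfov_rad, vfov_rad):
    """Placeholder pinhole intrinsics (fx, fy, cx, cy) for a given image
    size and field of view. fx/fy come from the standard pinhole relation
    focal = size / (2 * tan(fov / 2)); the principal point is centered."""
    fx = width / (2.0 * math.tan(hfov_rad / 2.0))
    fy = height / (2.0 * math.tan(vfov_rad / 2.0))
    return fx, fy, width / 2.0, height / 2.0
```

These would be written into K as `[fx, 0, cx, 0, fy, cy, 0, 0, 1]`, with distortion left at zero.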

[Image: Screenshot from 2022-05-18 10-17-27]

In general I think more work is needed to ingest LiDAR data properly, and I'm not happy with the TSDF timing (I need it under 1 second), which keeps growing once a run continues past a minute or so (I probably need a local-only costmap with a rolling buffer for my outdoor mapping use case). For these reasons I'm going to drop this for now.

RMichaelSwan avatar May 18 '22 17:05 RMichaelSwan

Hi, does this new feature have a clear release date? Thank you. @alexmillane

Yeah2333 avatar Aug 19 '22 07:08 Yeah2333

Apologies for the delay. This feature is now released. The node subscribes to the pointcloud topic. The best way to try it out is to launch the carter_sim example, which runs in Isaac Sim.

When you want to try it on real data, one thing to watch out for is that the parameters lidar_width, lidar_height, and lidar_vertical_fov_rad have to be set correctly for your sensor.
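For example, a parameter file for a VLP-16-class sensor might look roughly like this. The parameter names are the ones above; the node name, namespace layout, and the exact values are assumptions you should adapt to your own sensor and launch files:

```yaml
# Hypothetical nvblox parameter file for a 16-channel spinning LiDAR.
nvblox_node:
  ros__parameters:
    lidar_width: 1800              # horizontal samples per revolution
    lidar_height: 16               # number of laser channels
    lidar_vertical_fov_rad: 0.5236 # 30 degrees total (+/- 15 deg)
```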

Hope this works for you :)

alexmillane avatar Oct 25 '22 19:10 alexmillane