
RGBD mapper and integrator parameters for the Intel RealSense cameras

Open arainbilal opened this issue 2 years ago • 5 comments

Hi @helenol, from your experience using Intel RealSense cameras in indoor environments, I am wondering if you could help elaborate on the following RGBD mapper parameters, which I am trying to tune to get the best possible results with the D455/D435 cameras, to compare against the Sun3D dataset.

| Parameter | Value | Remarks |
| --- | --- | --- |
| voxel_size | 0.05 | Size of the TSDF voxels, in meters |
| max_integration_distance | 4.0 | Max integration distance in the depth frame, in meters; defined in projective_integrator_base.h |
| truncation_distance_vox | 4.0 | Truncation distance for the TSDF, in voxels (in the paper it is set to 4*v) |
| integrator max_weight | 1e4 | Upper limit for the weight assigned to a voxel |
| integrator min_weight | 1e-4 | Lower limit for the weight assigned to a voxel |
| weld_vertices | false | Vertex welding disabled |

In addition to the integrator parameters, I have used the following:

  • ESDF is set to false, and I am only updating the mesh using converter_.meshMessageFromMeshBlocks(); therefore the slice height and min/max heights are not used.
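
Put together, a minimal sketch of this configuration as a ROS 2 launch file is below. The package and executable names come from this repo, but the parameter names are only illustrative guesses mirroring the table above; the exact names in a given nvblox_ros release may differ.

```python
# Illustrative sketch only: the parameter names below mirror the table above
# and are assumptions; check the nvblox_ros docs for the exact names in your release.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    nvblox_node = Node(
        package='nvblox_ros',
        executable='nvblox_node',
        parameters=[{
            'voxel_size': 0.05,                # TSDF voxel size (m)
            'max_integration_distance': 4.0,   # max depth to integrate (m)
            'truncation_distance_vox': 4.0,    # TSDF truncation, in voxels
            'max_weight': 1.0e4,               # upper voxel weight limit
            'min_weight': 1.0e-4,              # lower voxel weight limit
            'weld_vertices': False,            # vertex welding off
            'compute_esdf': False,             # ESDF off, mesh updates only
        }],
    )
    return LaunchDescription([nvblox_node])
```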

Where possible, I have looked into the source code and the README files, but another look or more explanation would be greatly appreciated. The example provided with the Sun3D dataset, captured with an Asus Xtion Pro, seems to produce better results than the RealSense, and I am not sure why that is. Note that the depth image encoding from the RealSense sensor is 16UC1, so the converter performs the millimeter-to-meter division. Thanks in advance.
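
On the encoding note: per the ROS depth image convention, 16UC1 depth is in millimeters and 32FC1 is in meters, so the division is by 1000. A minimal sketch of that conversion (the function and variable names are mine, just for illustration):

```python
# Minimal sketch: convert a RealSense 16UC1 depth image (mm) to 32FC1 (m).
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()


def depth_mm_to_m(depth_msg):
    depth_mm = bridge.imgmsg_to_cv2(depth_msg, desired_encoding='16UC1')
    depth_m = depth_mm.astype(np.float32) / 1000.0  # 0 stays 0 ("no depth")
    out = bridge.cv2_to_imgmsg(depth_m, encoding='32FC1')
    out.header = depth_msg.header  # keep the original timestamp and frame_id
    return out
```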

arainbilal commented on Aug 16, 2022

The quality of the map depends on the quality of the input depth maps. Do you have the projector (IR emitter) enabled on your RealSense?
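
As a quick way to verify, you can query and force the emitter state with pyrealsense2; a small sketch (the default stream configuration here is incidental):

```python
# Quick check/force of the IR projector (emitter) state via pyrealsense2.
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.emitter_enabled):
    print('emitter_enabled =', depth_sensor.get_option(rs.option.emitter_enabled))
    depth_sensor.set_option(rs.option.emitter_enabled, 1.0)  # 1.0 = projector on
pipeline.stop()
```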

We will release example launch files and code for building reconstructions from a RealSense camera in the next couple of weeks. We're in the final stages of preparing this.

Having to wait a couple of weeks isn't ideal, but once we release this I think it will be easier to talk about an example application we're both running.

alexmillane commented on Aug 25, 2022

Our RealSense example is now live. Let us know if you find it useful.

alexmillane commented on Sep 2, 2022

Thanks, @alexmillane. A question about the realsense_splitter node: the emitter is enabled for the depth image (https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/2c6686676a758b16677b7a143a018a4e29630db3/nvblox_examples/realsense_splitter/src/realsense_splitter_node.cpp#L136), but the emitter is off for localisation (https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/2c6686676a758b16677b7a143a018a4e29630db3/nvblox_examples/realsense_splitter/src/realsense_splitter_node.cpp#L120). If the emitter is enabled, will infra1/2 from the splitter not publish? That would mean the pose is not available (https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/2c6686676a758b16677b7a143a018a4e29630db3/nvblox_examples/nvblox_examples_bringup/launch/nvblox_vslam_realsense.launch.py#L91).

Question: Just wondering if both nodes (realsense_splitter_node and visual_slam_node) can run simultaneously using the current implementation. Maybe I have missed something. Thanks in advance for your help.

arainbilal commented on Sep 15, 2022

@alexmillane or @helenol do you have any updates on this?

swapnesh-wani-nvidia commented on Oct 10, 2022

Our example configures the RealSense to turn the emitter on and off on alternating frames. Half of the frames go to VSLAM (emitter off) and half go to nvblox (emitter on). With this setup, nvblox and VSLAM are able to run simultaneously :).
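
For reference, the underlying librealsense mechanism is the emitter_on_off option, which toggles the projector every frame, plus per-frame laser-power metadata to tell the two streams apart. A rough pyrealsense2 sketch of the idea (the actual routing in the splitter node is more involved than this):

```python
# Rough sketch of the alternating-emitter idea using pyrealsense2.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 60)
profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.emitter_enabled, 1.0)
if depth_sensor.supports(rs.option.emitter_on_off):
    depth_sensor.set_option(rs.option.emitter_on_off, 1.0)  # toggle every frame

for _ in range(60):
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth and depth.supports_frame_metadata(
            rs.frame_metadata_value.frame_laser_power_mode):
        emitter_on = depth.get_frame_metadata(
            rs.frame_metadata_value.frame_laser_power_mode) == 1
        # emitter_on  -> route this depth frame to nvblox (integration)
        # emitter off -> route the matching infra1/2 frames to VSLAM
pipeline.stop()
```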

alexmillane commented on Oct 11, 2022

Please close this issue. Thanks.

arainbilal commented on Oct 26, 2022