Nvblox with realsense or any depth camera on Jetson Xavier AGX

Aki1608 opened this issue 3 years ago · 12 comments

This is more a question than an issue. I just wanted to know whether the nvblox algorithm has been tested on an AGX (or any Xavier) with a RealSense or any other camera to reconstruct a 3D environment in real life, or whether it has been tested in simulation only. I ask because when I try to reconstruct a 3D environment, the reconstruction is not very accurate (compared to the Isaac Sim result). Also, if you have tested it in real life, I will try playing with the different parameters to get better output.

Aki1608 avatar Jul 05 '22 11:07 Aki1608

We've tested it in real life on exactly that setup. :) Unfortunately, real sensors aren't as good as Isaac Sim, so data quality is quite limited there. Some settings in nvblox that might help:

    tsdf_integrator_max_integration_distance_m: 4.0
    tsdf_integrator_max_weight: 20.0

Set the max integration distance shorter (for stereo cameras like the RealSense, depth error grows quickly at larger distances from the camera), and use a lower max weight to better handle non-static scenes.
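For concreteness, a minimal sketch of passing those parameters through a ROS 2 Python launch file (the nvblox_ros package and nvblox_node executable names are assumptions based on this repo's layout and may differ between releases):

    # Hypothetical launch sketch: tune nvblox for a real RealSense.
    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        nvblox_node = Node(
            package='nvblox_ros',      # assumed package name
            executable='nvblox_node',  # assumed executable name
            parameters=[{
                # Shorter range: stereo depth error grows with distance.
                'tsdf_integrator_max_integration_distance_m': 4.0,
                # Lower max weight: adapts faster to non-static scenes.
                'tsdf_integrator_max_weight': 20.0,
            }],
        )
        return LaunchDescription([nvblox_node])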

Another thing that's quite important is that the projector should be on for the RealSense; this greatly increases the quality of the depth inputs. The quality of your poses also matters. Hope that helps!

helenol avatar Jul 05 '22 12:07 helenol

@helenol Do you have any comparative study or visuals showing how much the 3D reconstruction deviates/degrades with physical cameras compared to the simulated reconstruction?

Also, which manufacturers' cameras have you tested, and which worked reasonably well?

naitiknakrani avatar Jul 05 '22 13:07 naitiknakrani

@naitiknakrani It depends on the camera used, the quality of the depth, the structure of the scene, and a million other factors. The sim input data is perfect, so it's an upper bound on real-life performance.

We use the RealSense D455, which we quite like for its wide FoV. The D435 also works well. We've also used the ZED 2 camera but found that its low-texture performance (e.g., on flat white walls) wasn't as good as the RealSense's, partly due to the lack of a texture projector.

helenol avatar Jul 05 '22 14:07 helenol

Thanks for the update.

naitiknakrani avatar Jul 06 '22 05:07 naitiknakrani

@helenol Thanks, that was indeed helpful.

I have one question, though. You said that the projector should be on:

    Another thing that's quite important is that the projector should be on for the RealSense, which should greatly increase the quality of the depth cam inputs.

But when we run nvblox, it seems the projector is not on. Can you tell us which parameter we have to use to turn it on, and where to add that parameter?

Aki1608 avatar Jul 06 '22 06:07 Aki1608

That depends on how you run the RealSense. Which command do you use to launch it, and which version of ROS 2 are you running?

alexmillane avatar Jul 06 '22 17:07 alexmillane

@alexmillane We are facing a few challenges working with the RealSense. We are running on ROS 2 Foxy and invoking the realsense2_camera node by adding it to the nvblox_nav2 launch files (https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/tree/main/nvblox_nav2/launch). We have modified carter_sim.launch.py with the necessary parameters.

We have applied one parameter to turn the projector on, depth_module.emitter_enabled: true; however, that ruins the on-chip calibration done before launching the RealSense camera node. After calibration, the 3D reconstruction becomes messy, and odometry degrades heavily. A rough sketch of how the parameter can be passed is shown below.
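This is a sketch only, not the exact launch file we use; note that depending on the realsense2_camera release, the emitter option is a boolean or an integer (0 = off, 1 = on, 2 = auto):

    # Hypothetical sketch: enabling the emitter when launching realsense2_camera.
    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        realsense_node = Node(
            package='realsense2_camera',
            executable='realsense2_camera_node',
            parameters=[{
                # Turn the IR projector on (value type depends on the
                # wrapper version: boolean, or integer 0/1/2).
                'depth_module.emitter_enabled': 1,
            }],
        )
        return LaunchDescription([realsense_node])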

naitiknakrani avatar Jul 07 '22 05:07 naitiknakrani

Hi @helenol,

was the new version of the Docker image (ROS 2 Humble) also tested with the RealSense camera? Were you able to access the RealSense camera inside Docker? We were able to install the librealsense SDK and run realsense-viewer, but it can't find any device. When we open realsense-viewer, it shows these errors:

    (handle-libusb.h:51) failed to open usb interface: 0, error: RS2_USB_STATUS_NO_DEVICE
    (sensor.cpp:572) acquire_power failed: failed to set power state
    (rs.cpp:310) null pointer passed for argument "device"
    (rs.cpp:2691) Couldn't refresh devices - failed to set power state

I also tried copying 99-realsense-libusb.rules to /etc/udev/rules.d and running sudo udevadm control --reload-rules, but it reports: running in chroot, ignoring request.

Aki1608 avatar Jul 21 '22 05:07 Aki1608

Hi @helenol, we solved the issue with the RealSense camera inside the Humble container. We just ran sudo udevadm control --reload-rules outside the container, and now the camera works inside Docker as well.

Aki1608 avatar Jul 21 '22 08:07 Aki1608

Hi @Aki1608 and @naitiknakrani. Thank you for the updates. I can confirm that we are using the RealSense with quite a bit of success. We have a release coming in about a month, which will include examples and documentation on how to get it going. Waiting a month isn't optimal, but I hope it will be helpful once we're able to release it.

alexmillane avatar Jul 25 '22 07:07 alexmillane

Thanks @alexmillane and @helenol for the response. We will be happy to see your test results with the RealSense.

naitiknakrani avatar Jul 25 '22 12:07 naitiknakrani

@helenol @hemalshahNV So is there any solution for getting input from the ZED visualized with color information on the Orin devkit? Thanks

AndreV84 avatar Aug 12 '22 13:08 AndreV84

@alexmillane Hi, can you please share your results and findings for nvblox with the RealSense? We are doing rigorous testing with the RealSense, so we would like to compare our results against your benchmarks.

There is also one important point I want to ask about. For nvblox, a pose estimate (pose) is an input; however, nothing publishes it, and nvblox works even without the pose input (we have tested this in Isaac Sim). So what is the intent of the pose input to the nvblox node? If it is important, which source should it come from (e.g., odometry, VSLAM, IMU)?

Please share your thoughts on it.

naitiknakrani avatar Aug 18 '22 14:08 naitiknakrani

Please see the example combining the RealSense, VSLAM, and nvblox, which is now available.
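In outline, it wires the three pieces together in one launch file; a rough sketch only (the package/executable names here are assumptions, check the released example for the authoritative wiring and remappings):

    # Rough sketch of combining the three pieces; names are illustrative only.
    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        camera = Node(package='realsense2_camera',
                      executable='realsense2_camera_node')
        # VSLAM consumes the infra images and publishes the camera pose/TF.
        vslam = Node(package='isaac_ros_visual_slam',  # assumed name
                     executable='visual_slam_node')    # assumed name
        # nvblox consumes depth plus the pose (via TF) to build the 3D map.
        nvblox = Node(package='nvblox_ros',            # assumed name
                      executable='nvblox_node')        # assumed name
        return LaunchDescription([camera, vslam, nvblox])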

alexmillane avatar Sep 02 '22 11:09 alexmillane

@alexmillane Thanks for the update. I have one small question. While using the Intel RealSense, did you perform its on-chip calibration? We found that every time the RealSense camera is plugged in, on-chip calibration is required, or else the performance is very poor. Was it the same on your end?

naitiknakrani avatar Sep 05 '22 12:09 naitiknakrani

What is the purpose of creating a RealSense splitter node?

naitiknakrani avatar Sep 06 '22 06:09 naitiknakrani

We did not have the same experience with the calibration. We've never had to re-calibrate it from the factory settings. That seems quite strange.

Regarding the splitter. We configure the realsense to trigger the projector on/off on alternating frames (the projector is on, off, on, off, etc). Frames with the projector off are required for vslam, while the depth frames with the projector on are required for nvblox. The splitter node subscribes to the raw on/off image streams and splits the images appropriately: infra topics have projector off, and the depth topic has the projector on.
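To make the routing concrete, here is a toy rclpy sketch of the idea. This is not the actual splitter implementation: the real node reads the emitter state from the RealSense per-frame metadata, whereas this sketch simply assumes strict on/off alternation, and the topic names are hypothetical.

    # Toy splitter sketch: route frames by per-frame emitter state.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class ToySplitter(Node):
        def __init__(self):
            super().__init__('toy_splitter')
            # Assumed: the emitter alternates every frame, starting on.
            self.emitter_on = True
            self.sub = self.create_subscription(
                Image, 'depth/image_raw', self.on_depth, 10)
            # Projector-on depth frames go to nvblox.
            self.pub_on = self.create_publisher(Image, 'depth/emitter_on', 10)
            # Projector-off frames would go to vslam (infra handled analogously).
            self.pub_off = self.create_publisher(Image, 'depth/emitter_off', 10)

        def on_depth(self, msg):
            (self.pub_on if self.emitter_on else self.pub_off).publish(msg)
            self.emitter_on = not self.emitter_on  # alternating-frame assumption

    def main():
        rclpy.init()
        rclpy.spin(ToySplitter())

    if __name__ == '__main__':
        main()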

I hope that's helpful.

alexmillane avatar Sep 06 '22 20:09 alexmillane

@alexmillane Thank you very much for all the information. This splitter-node functionality is essential, because librealsense SDK v2.50 and greater shuts the projector off while doing calibration; that's an official response from Intel. Please refer to https://github.com/IntelRealSense/librealsense/issues/10638 for the details of that challenge.

But anyway, thanks for the help.

naitiknakrani avatar Sep 07 '22 07:09 naitiknakrani