
Document and support installation of software that depends on librealsense

traversaro opened this issue · 13 comments

The Intel realsense devices are widely used in the robotics world, and there are plans to include them as an add-on on the iCub robot. However, the necessary software dependencies are not available from the standard repos on Debian/Ubuntu, so it could make sense to have a dedicated profile of optional dependencies (see https://github.com/robotology/robotology-superbuild/blob/master/doc/profiles.md) for them.

traversaro · Dec 17 '20

The situation seems to be easy on Windows and macOS: librealsense is available both in vcpkg and in Homebrew:

  • https://github.com/microsoft/vcpkg/tree/master/ports/realsense2
  • https://formulae.brew.sh/formula/librealsense

The situation is more tricky on Linux:

  • No official librealsense package is in the upstream Debian/Ubuntu repos
  • Intel provides official .deb package repos in https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages , and some of those packages do things that are not easy to replicate when building from source, such as installing udev rules and setting up dkms
  • Intel also provides .deb packages of librealsense as part of the ROS repositories (https://index.ros.org/p/librealsense2/), and those are not compatible with the ones provided in the non-ROS repo

So, before deciding on the best strategy, I have two questions for realsense users:

  • Q1: Which strategies are you currently using to install librealsense on Ubuntu? From source, Intel official non-ROS repo or ROS repo?
  • Q2: Do you know whether the udev and dkms integrations are actually needed, or could we also compile librealsense from source and not use them?

@Nicogene @xEnVrE @prashanthr05 @lnobile @vvasco (or any other realsense user) I think that if you could answer the questions Q1 & Q2 it would be quite useful!

traversaro · Dec 17 '20

Hi @traversaro,

I'll try to answer Q1 according to my experience.

First of all, I tend to compile the library from source because it seems that the default build options (hence, I expect, the same ones used for the provided deb packages) require patches to the driver itself in order to support the Linux kernel, and tend to produce unreliable behavior: e.g. you run the yarpdev for the camera and after a while it shuts down, reporting, librealsense-wise, that frames were not available for a predefined maximum number of seconds. In that case you end up disconnecting and reconnecting the camera until it works. I want to stress that this is not an issue on the yarpdev side.

Instead, what I found to work more reliably is to compile from source specifying the option FORCE_RSUSB_BACKEND=ON in CMake. If I am not wrong, the so-called RealSense USB Backend is Intel's attempt to support Linux kernels without any patch to the driver (if I am not wrong, it basically avoids relying on the Video4Linux2 API kernel-side and uses libusb + libuvc in user space instead). In the past, I used this compile option also to solve other annoying bugs, e.g. images freezing every so often while using the camera. With this option I also avoided the aforementioned issue where the driver basically does not receive frames for more than a predefined time and then closes itself.
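
For reference, a minimal sketch of how such a from-source build could look; FORCE_RSUSB_BACKEND is the option mentioned above, while the install prefix and the rest of the configuration are placeholders to adapt:

```
# Sketch: build librealsense from source with the RSUSB backend.
# Install prefix and build type are placeholders, adjust as needed.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
cmake -S . -B build \
      -DCMAKE_BUILD_TYPE=Release \
      -DFORCE_RSUSB_BACKEND:BOOL=ON \
      -DCMAKE_INSTALL_PREFIX=$HOME/librealsense-install
cmake --build build --target install
```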

Some warnings:

  • in the past compiling w/ RSUSB caused high CPU consumption. Two related issues are https://github.com/robotology/yarp-device-realsense2/issues/1 and https://github.com/IntelRealSense/librealsense/issues/5310. It seems that this has been solved starting from release 2.34.0. I also ran some tests (results are in https://github.com/robotology/yarp-device-realsense2/issues/1);
  • if I am not wrong, using RSUSB caused some problems with the handling of the IMU timestamps (e.g. on D435i models). If I manage to find the associated issue in the librealsense repository I will report back.

xEnVrE · Dec 17 '20

I wanted to add that compiling from source also enables us to decide two other important things. Since we typically use RGB + Depth (and we need to align them), and since the alignment process can be CPU consuming, the librealsense driver allows using parallelized versions of the alignment process:

  • one uses OpenMP CPU-side (and can be enabled with a specific option in CMake)
  • one uses CUDA GPU-side (and that too can be enabled with a specific option in CMake)

Honestly, I don't know what the default build options adopted in the provided deb packages are. Since not everybody is using CUDA, and since the OpenMP implementation can lead to very good performance (at the expense of really high CPU usage), I think that both are OFF by default. Being able to decide which one to use from the superbuild would be awesome!
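
For reference, a sketch of how the two variants could be toggled at configure time; if I recall correctly the corresponding CMake flags are BUILD_WITH_OPENMP and BUILD_WITH_CUDA, but please double-check against the librealsense documentation:

```
# Sketch: enable the OpenMP-parallelized alignment (CPU-side) ...
cmake -S . -B build -DBUILD_WITH_OPENMP:BOOL=ON -DBUILD_WITH_CUDA:BOOL=OFF
# ... or the CUDA-parallelized one (GPU-side), assuming a CUDA-capable
# GPU and the CUDA toolkit are available:
cmake -S . -B build -DBUILD_WITH_OPENMP:BOOL=OFF -DBUILD_WITH_CUDA:BOOL=ON
```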

xEnVrE · Dec 17 '20

Q1: Which strategies are you currently using to install librealsense on Ubuntu? From source, Intel official non-ROS repo or ROS repo?

On my local machine, I installed it from source. In this case, in order to set up the udev rules, I was prompted to run the script ./scripts/setup_udev_rules.sh when trying to use the device with the realsense-viewer. I don't know if I did the dkms part. Checking my /etc/apt/sources.list.d, it looks like I have not added the realsense server to the list of repositories.
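
For reference, this is roughly what that step looks like, assuming a librealsense source checkout (the script name is the one I was prompted with; it should take care of copying and reloading the rules):

```
# Sketch: install the udev rules that grant non-root access to the camera,
# running the helper script shipped in the librealsense source tree.
cd librealsense
./scripts/setup_udev_rules.sh
```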

prashanthr05 · Dec 17 '20

Q1 In general, I follow these instructions: https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages . Only when there were special configurations or issues did I compile the SDK from source (same as https://github.com/robotology/robotology-superbuild/issues/564#issuecomment-747525076).
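
For reference, a sketch of those instructions as I remember them; the server key, repository URL and package names are taken from memory of the linked document, which remains the authoritative source:

```
# Sketch based on the linked distribution_linux.md (details may be outdated):
# register Intel's server key, add the repository, install the packages.
sudo apt-key adv --keyserver keyserver.ubuntu.com \
     --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
sudo add-apt-repository \
     "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main"
sudo apt-get update
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev
```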

Q2 I think they are mandatory for using the physical devices, but I have never tried to use the devices without installing them.

Nicogene · Dec 17 '20

Q1

If I'm not mistaken, we did follow https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages. Is that correct @vvasco?

Q2

Never been concerned with this. Perhaps @vvasco has some insights on this.

pattacini · Dec 17 '20

I wanted to add that compiling from source also enables us to decide two other important things. Since we typically use RGB + Depth (and we need to align them), and since the alignment process can be CPU consuming, the librealsense driver allows using parallelized versions of the alignment process:

  • one uses OpenMP CPU-side (and can be enabled with a specific option in CMake)
  • one uses CUDA GPU-side (and that too can be enabled with a specific option in CMake)

Honestly, I don't know what the default build options adopted in the provided deb packages are. Since not everybody is using CUDA, and since the OpenMP implementation can lead to very good performance (at the expense of really high CPU usage), I think that both are OFF by default. Being able to decide which one to use from the superbuild would be awesome!

In your experience, would it be possible to run the expensive computation in a machine different from the one to which the sensor is physically attached? In other words, is it possible to stream the raw USB output to another machine in the network?

S-Dafarra · Dec 17 '20

In your experience, would it be possible to run the expensive computation in a machine different from the one to which the sensor is physically attached? In other words, is it possible to stream the raw USB output to another machine in the network?

I don't know if that is possible, but I think it is not (with the standard pipeline). Anyway, please consider that if you only need depth imaging, you can disable the alignment process (it can be easily done from the configuration file of the associated yarpdev).

xEnVrE · Dec 17 '20

@S-Dafarra starting from 2.34.0 there is also support for the so-called RealSense Device over Ethernet (see https://github.com/IntelRealSense/librealsense/wiki/Release-Notes#release-2340). I think it allows you to compress data on the machine where the camera is physically attached (and where an rs-server is running) and to start the rs2 pipeline on another machine on the same network. In this case there is no support from the yarpdev device though; you will need to write your own code. But if you don't need special things, the realsense yarpdev is of course the right solution for streaming rgb and depth data from the camera over the network via YARP ports.
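
For context, a sketch of how I understand that setup; the CMake flag name is from memory (I believe it is BUILD_NETWORK_DEVICE), so please verify against the librealsense documentation:

```
# Sketch, on the machine with the camera attached: build librealsense
# with networking support (flag name from memory, to be verified) ...
cmake -S . -B build -DBUILD_NETWORK_DEVICE:BOOL=ON
cmake --build build
# ... then run the server that streams the camera over the network;
# the client machine then opens the remote camera by IP address in its
# own rs2 pipeline instead of a locally attached device.
./build/tools/rs-server/rs-server
```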

xEnVrE · Dec 17 '20

Thanks a lot to everyone!

I will try to summarize what I learned, also w.r.t. Q2.

On Linux librealsense has two backends:

  • The Video4Linux backend, which is the officially supported one, but requires kernel patches to the v4l module; patching is handled via DKMS by the official .deb packages. This is the backend used by the official .deb packages.
  • The USB video device class backend, which is not "officially" supported and does not support the use case of multiple synchronized cameras, but can work fine also on non-Linux OSes and on Linux distributions for which there is no support for patching the kernel. This is the backend enabled by the FORCE_RSUSB_BACKEND option, and used in the ROS binary packages.

An in-depth description of these two options can be found in https://github.com/IntelRealSense/librealsense/issues/5212#issuecomment-552184604 .

Regarding udev rules, I am still not sure what they are needed for and when. However, even when librealsense is installed from source, the udev rules can be installed separately (and we could document that, once we understand how to do it properly). Related to that, it seems that for now udev rules are not included in the ROS packages of librealsense, see https://github.com/IntelRealSense/realsense-ros/issues/1426 .
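
As a starting point for that documentation, a sketch of a manual installation from a librealsense source checkout; the rules file name is the one shipped in the repo's config directory, to be verified:

```
# Sketch: install the udev rules manually from the source tree and reload.
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
```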

Based on this and on the feedback from @xEnVrE, my personal inclination is the following:

  • Add an option named ROBOTOLOGY_USES_REALSENSE that by default compiles librealsense from source with the FORCE_RSUSB_BACKEND option, for maximum compatibility and to avoid the need to patch the kernel
  • Document how to disable the librealsense compilation from source and use the .deb packages instead, using the standard options of the superbuild
  • Document that the YCM_EP_ADDITIONAL_CMAKE_ARGS option can be used to pass compilation options to librealsense (see https://github.com/robotology/robotology-superbuild#how-do-i-pass-cmake-options-to-the-projects-built-by-the-robotology-superbuild- and the sketch after this list)
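
For concreteness, a sketch of how a superbuild configure line could look under this proposal; note that ROBOTOLOGY_USES_REALSENSE does not exist yet, and the librealsense flag being forwarded is just an example:

```
# Hypothetical superbuild configuration: enable the proposed realsense
# option and forward an extra CMake flag to the librealsense build.
cmake -DROBOTOLOGY_USES_REALSENSE:BOOL=ON \
      "-DYCM_EP_ADDITIONAL_CMAKE_ARGS=-DBUILD_WITH_OPENMP:BOOL=ON" \
      ..
```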

Let me know if you think this makes sense or if we should do something else, thanks!

traversaro · Dec 20 '20

On Linux librealsense has two backends:

  • The Video4Linux backend, which is the officially supported one, but requires kernel patches to the v4l module; patching is handled via DKMS by the official .deb packages. This is the backend used by the official .deb packages.
  • The USB video device class backend, which is not "officially" supported and does not support the use case of multiple synchronized cameras, but can work fine also on non-Linux OSes and on Linux distributions for which there is no support for patching the kernel. This is the backend enabled by the FORCE_RSUSB_BACKEND option, and used in the ROS binary packages.

Note that we recently discussed the possible use of multiple realsense cameras attached to the same machine in the context of the ergoCub project, so in the future we may need to consider supporting the use of the Video4Linux backend. fyi @randaz81 @DatSpace @DanielePucci @pattacini @xEnVrE

traversaro · Jun 23 '21

Tagging @triccyx who had worked on the same backend for the UltraPython.

pattacini · Jun 23 '21

To wrap up the issue, I guess the basic point is that users may have many different ways of installing librealsense, so the easy thing is to just document one way in the docs, and avoid building librealsense in the superbuild itself.

traversaro · Jan 19 '24