direct_visual_lidar_calibration
Realsense support
Hi Kenji,
I am having a hard time with the PointCloud2 format of the Realsense, since it publishes XYZRGB points while the project expects the XYZI format.
Do you have an idea how to convert this and make it work in the project? Any alternatives are welcome if you have encountered this problem and overcome it with a particular sensor. I saw on your project page that there is an Azure depth camera.
It would be very helpful to at least have a converter, so that I can convert my Realsense rosbag2 recordings to the XYZI format.
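For reference, the core of such a converter could look like the sketch below. It assumes the XYZRGB points have already been unpacked from the PointCloud2 message into a plain NumPy array (the actual field layout and rosbag2 I/O are not shown), and it fakes an intensity channel from the color via the standard BT.601 luma weights; this is only a stand-in for real LiDAR intensity:

```python
import numpy as np

def xyzrgb_to_xyzi(points_xyzrgb: np.ndarray) -> np.ndarray:
    """Convert an (N, 6) array of [x, y, z, r, g, b] points into an
    (N, 4) array of [x, y, z, intensity].

    Sketch only: unpacking the packed `rgb` field of a
    sensor_msgs/PointCloud2 and rewriting the bag are handled elsewhere.
    """
    xyz = points_xyzrgb[:, :3]
    rgb = points_xyzrgb[:, 3:6].astype(np.float64)
    # BT.601 luma as a pseudo-intensity (weights sum to 1.0)
    intensity = 0.299 * rgb[:, 0] + 0.587 * rgb[:, 1] + 0.114 * rgb[:, 2]
    return np.hstack([xyz, intensity[:, None]])

# Example: a single white point keeps its full brightness as intensity
pts = np.array([[1.0, 2.0, 3.0, 255.0, 255.0, 255.0]])
out = xyzrgb_to_xyzi(pts)
```

Whether a grayscale pseudo-intensity carries enough texture for the calibration to converge is a separate question, so treat this as an experiment rather than a drop-in fix.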
Best
I am experiencing a similar issue.
Since Azure Kinect (Depth camera) is listed as a supported model, it seems like it should be possible to calibrate the RGB and point cloud of an RGBD camera.
However, as RGBD cameras typically do not provide intensity, it feels like there is insufficient data for calibration.
Could it be that the reference to Azure Kinect in the paper is actually referring to its RGB camera, and the calibration is between the LiDAR and the RGB of Azure Kinect?
You can use the infrared information instead; infrared images share the same optical frame as the depth images. The NID metric should work as long as there is mutual information between the images (at least in theory).
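To make the last point concrete, here is a minimal histogram-based sketch of the NID (normalized information distance) between two grayscale images, NID = (H(A,B) − I(A;B)) / H(A,B). It is an illustration of the metric itself, not the project's actual implementation, which evaluates it over projected point intensities:

```python
import numpy as np

def nid(img_a: np.ndarray, img_b: np.ndarray, bins: int = 16) -> float:
    """Normalized Information Distance between two images, in [0, 1].

    0 means the images are deterministically related (e.g. identical);
    values near 1 mean they share almost no mutual information.
    """
    # Joint intensity histogram -> joint probability distribution
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    pa = p.sum(axis=1)  # marginal of image A
    pb = p.sum(axis=0)  # marginal of image B

    h_ab = -np.sum(p[p > 0] * np.log(p[p > 0]))      # joint entropy H(A,B)
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))   # H(A)
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))   # H(B)
    mi = h_a + h_b - h_ab                            # mutual information I(A;B)
    return (h_ab - mi) / h_ab if h_ab > 0 else 0.0
```

An IR image and an RGB (or grayscale) image of the same scene should give an NID well below that of two unrelated images, which is what the registration objective exploits.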