[Feature-Request] Multi-cam calibration and multi-pointcloud alignment
Start with the why:
For folks that would want to use our cameras for 3D object scanning, using multiple cameras would be crucial.
Move to the what:
Create a script that will let you calibrate multiple cameras looking at the same scene. This will provide extrinsics of cameras relative to each other, which would allow aligning multiple point clouds (produced from different cameras). We could then do some additional filtering of this combined pointcloud.
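The alignment step above can be sketched in a few lines: given the extrinsics of camera B relative to camera A as a 4x4 homogeneous transform, bring B's point cloud into A's frame and concatenate. This is a minimal numpy sketch with illustrative names, not the actual script; a real pipeline would use something like Open3D for downsampling and outlier filtering of the combined cloud.

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

def fuse_point_clouds(pcd_a, pcd_b, T_b_to_a):
    """Bring camera B's points into camera A's frame and concatenate.

    T_b_to_a is the extrinsic transform of camera B relative to camera A,
    i.e. the output of the multi-cam calibration step.
    """
    return np.vstack([pcd_a, transform_points(pcd_b, T_b_to_a)])
```

The "additional filtering" mentioned above would then run on the fused array, e.g. voxel downsampling to merge overlapping points and statistical outlier removal.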
Move to the how:
1. Calibrating
- BioSense has already done this (YT video); no source code is available.
- Bart (VR laser tag at home) has a video for calibration here.
2. Alignment
- Use something like MeshLab to align multiple point clouds.
- Open3D has a 3D reconstruction pipeline as well; video here.
- We could also look into getting our cameras supported by EF-EVE, an app that allows volumetric capture with multiple depth cameras.
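One common way to do the calibration step in item 1 is to have every camera observe the same checkerboard (or ChArUco board): each camera's board pose can be recovered with e.g. `cv2.solvePnP`, and the relative extrinsics follow by composing the two poses. A hedged numpy sketch of that composition, assuming the per-camera board poses are already available (function and variable names are illustrative):

```python
import numpy as np

def pose_to_matrix(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_extrinsics(T_board_in_a, T_board_in_b):
    """Transform mapping camera B coordinates into camera A coordinates.

    Each input is the pose of the shared board in that camera's frame
    (e.g. built from the rvec/tvec returned by cv2.solvePnP).
    """
    return T_board_in_a @ np.linalg.inv(T_board_in_b)
```

The resulting transform is exactly what the fusion step needs to map camera B's point cloud into camera A's frame; ICP (e.g. Open3D's registration module) could then refine the alignment.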
Azure Kinect also has this, but it only works up to 6 meters. If Luxonis supports longer distances, it will win.
Sweet! Yes, this would be great!
This would be a very useful feature. Between the IMU data of each camera and the ability to define the relative orientations of the cameras, the required information is already there, but an example of the appropriate way to do this would be very useful.
plus one!
This would be a great feature for us as well.
Progress: Multi-cam calibration & spatial detection fusion
Initial demo here: https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices/rgbd-pointcloud-fusion cc @MarekKot1 @LyceanEM @mhdtahawi @andrewwetzel23
From: https://discuss.luxonis.com/d/1746-multi-camera-calibration-oak-d-pro-w-faulty-translation-matrix
When I try to run the multi-cam calibration script, it keeps getting stuck in the still-image capture loop. Nothing is moving in the camera frame; is there anything that could be going wrong?
CC: @MaticTonin