depthai
[Feature-Request] Spatial-detection-fusion for non-overlayed devices
Start with the why:
For many use cases it is necessary to use several, or even dozens of, DepthAI devices to cover a large area. To build a coherent understanding of the covered scene, the relative positions of these sensors must be known so their data can be fused. Currently the only way to do this is to take stills from the devices, record a video of the area that "links" all the devices through shared visual features, export everything to COLMAP to reconstruct the scene, and then find and export the relative positions from the reconstruction. This process is cumbersome and prone to failure.
Move to the what:
A simple program, similar to the calibration app, that finds the relative positions of Luxonis sensors and exports them as extrinsics data with a common origin.
Move to the how:
Extend spatial detection fusion so it applies not only to sensors that share a single overlapping image, but to chains of overlapping images that can "link" otherwise disparate sensors.
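The chaining idea can be sketched with homogeneous transforms: if calibration recovers the relative pose between A and B from one overlap, and between B and C from another, the pose between A and C follows by composition even though A and C never see each other. This is a minimal illustration with made-up poses, not depthai API code; the naming convention (T_ab maps points from B's frame into A's frame) is an assumption for the example.

```python
# Hypothetical sketch: composing pairwise extrinsics into a common origin.
# Convention assumed here: T_ab is a 4x4 homogeneous transform mapping
# points expressed in sensor B's frame into sensor A's frame.
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Suppose calibration recovered A->B and B->C from two separate overlaps.
T_ab = make_transform(np.eye(3), [1.0, 0.0, 0.0])  # B sits 1 m along A's x-axis
T_bc = make_transform(np.eye(3), [0.0, 0.5, 0.0])  # C sits 0.5 m along B's y-axis

# Chaining the links yields A->C even though A and C do not overlap directly.
T_ac = T_ab @ T_bc
print(T_ac[:3, 3])  # C's position in A's frame: [1.0, 0.5, 0.0]
```

With sensor A chosen as the common origin, every other sensor's extrinsics are obtained by composing transforms along any chain of overlaps that reaches it.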
The program would know which sensors are currently available, then report which are already "linked" through an overlapping image and which still need to be "linked" to the rest of the group.
Every "link" would be a group and the goal on the user is to make sure there is enough overlap to link all disparate groups into a global group.
This would be similar to merging graph structures into a global graph.
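The group-merging step described above can be modeled as a union-find (disjoint-set) structure over device IDs, where each detected overlap merges two groups; a sketch, with made-up device IDs:

```python
# Illustrative sketch of the group-merging step, assuming each detected
# overlap arrives as a pair of device IDs (the IDs here are invented).
class LinkGroups:
    """Union-find over device IDs: each "link" merges two groups."""
    def __init__(self, devices):
        self.parent = {d: d for d in devices}

    def find(self, d):
        while self.parent[d] != d:
            self.parent[d] = self.parent[self.parent[d]]  # path halving
            d = self.parent[d]
        return d

    def link(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def groups(self):
        out = {}
        for d in self.parent:
            out.setdefault(self.find(d), set()).add(d)
        return list(out.values())

devices = ["cam0", "cam1", "cam2", "cam3"]
g = LinkGroups(devices)
g.link("cam0", "cam1")  # overlap detected between cam0 and cam1
g.link("cam2", "cam3")  # overlap detected between cam2 and cam3
print(len(g.groups()))  # 2 groups: the user still needs a bridging overlap
g.link("cam1", "cam2")  # a new overlap bridges the two groups
print(len(g.groups()))  # 1 global group covering all devices
```

The UI could surface exactly this information: how many groups remain, and which devices still need an overlapping view to join the global group.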
Link to discussion in forums: https://discuss.luxonis.com/d/1491-has-any-work-been-done-on-spatial-detection-fusion-for-non-overlayed-devices/5