depthai
Introduce Sync Mode/Node
Start with the why:
Once the sequence number offset is fixed (https://github.com/luxonis/depthai/issues/211), synchronizing frames on the host should not be hard, but it will still require code to match the frames based on these sequence numbers.
What could be very useful is a data structure in which all the contained data is synchronized, so the end user does not need to search for matches on the host but can simply pull whatever is required/of-interest out of this structure, with the understanding that all of this data is time-synchronized to the same physical time, to the best of the DepthAI device's ability.
Move to the how:
This is up in the air currently. What would seem useful is if all the frames (and metadata) corresponding to a single measurement were packed together into one data structure according to that time, and then made easily accessible to the user.
Move to the what:
Support a sync mode where all frames/metadata of the same physical time (or rather what DepthAI can actually implement) reside in the same data structure for easy synced access.
Issue https://github.com/luxonis/depthai/issues/211 should eventually be closed by PR https://github.com/luxonis/depthai/pull/240. The feature requested here will probably be easier to implement in Gen2.
Ah yes, thanks for the correction. Thanks! And agreed on doing this issue in Gen2.
For context, would this be a filter/match on the device side? That would seem to be the most efficient approach.
Yes that's right @eric-schleicher . Sorry I missed this.
Relevant underlying updates that will be leveraged to make this functionality:
- https://github.com/luxonis/depthai-experiments/pull/159
- https://github.com/luxonis/depthai-python/pull/310
- https://github.com/luxonis/depthai-core/pull/174
Hi,
Sorry if I'm off topic, but I'm a little lost with the different issues (#211) and PRs (#240) related to sequence numbers.
I'm using the depthai-core C++ SDK v2.15.5 with an OAK-D, and I'm encountering sequence number offsets (usually between 5 and 10) between color and depth frames, with all queue sizes set to 1 and retrieval behavior set to non-blocking.
Is this behaviour supposed to be fixed in the latest SDK releases, or am I supposed to implement a synchronizer similar to this Python one, but in C++?
Thanks for your help
@blackccpie v2.15.5 is supposed to have the sequence number sync fixed, but you could nevertheless also try the latest at the moment, v2.17.3.
However, if the pipeline has some slower components on the device side (e.g. an NN with a model that can't run at the camera's configured FPS), frame queuing can happen on the device side. The same can happen if the USB/Ethernet link operates at a lower speed that can't sustain the required bandwidth.
If you could provide an MRE (https://docs.luxonis.com/en/latest/pages/support/?creating-minimal-reproducible-example), we can have a look at it.
Hi @alex-luxonis,
Sorry for the late reply, I made some tests with the latest SDK, v2.7.4.
I think, as you stated before, that queuing does happen on the device side. In fact, I was configuring my color and depth streams at 60 fps, but my application's retrieval rate reached a maximum of about 45 fps. If I reduce both frame rates to 30 fps, then I have no more sync issues. Still, I don't really know whether it's a pipeline or USB bandwidth issue.
Regards,
Albert
PS: here is an MRE to reproduce the out-of-sync issue:
#include <iostream>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
// OpenCV is needed for the visualization calls below
#include <opencv2/opencv.hpp>

// Closer-in minimum depth, disparity range is doubled (from 95 to 190):
static std::atomic<bool> extended_disparity{false};
// Better accuracy for longer distance, fractional disparity 32-levels:
static std::atomic<bool> subpixel{true};
// Better handling for occlusions:
static std::atomic<bool> lr_check{true};

static int global_fps = 60;

int main() {
    // Create pipeline
    dai::Pipeline pipeline;

    // Define sources and outputs
    auto rgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto depth = pipeline.create<dai::node::StereoDepth>();
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
    xoutRgb->setStreamName("rgb");
    xoutDepth->setStreamName("depth");

    // RGB properties
    rgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    rgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    rgb->setFps(global_fps);
    rgb->setPreviewSize(960, 540);
    rgb->setIspScale(1, 2);

    // Mono properties
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoLeft->setFps(global_fps);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setFps(global_fps);

    // Create a node that will produce the depth map
    depth->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
    depth->setLeftRightCheck(lr_check);
    depth->setExtendedDisparity(extended_disparity);
    depth->setSubpixel(subpixel);
    depth->setDepthAlign(dai::CameraBoardSocket::RGB);

    // Linking
    rgb->preview.link(xoutRgb->input);
    monoLeft->out.link(depth->left);
    monoRight->out.link(depth->right);
    depth->depth.link(xoutDepth->input);

    // Connect to device and start pipeline
    dai::Device device(pipeline);

    // Output queues will be used to get the frames from the outputs
    // defined above (queue size 1, non-blocking)
    auto qRgb = device.getOutputQueue("rgb", 1, false);
    auto qDepth = device.getOutputQueue("depth", 1, false);

    while(true) {
        auto inRgb = qRgb->get<dai::ImgFrame>();
        auto inDepth = qDepth->get<dai::ImgFrame>();

        auto frameRgb = inRgb->getCvFrame();
        auto frameDepth = inDepth->getFrame();
        cv::normalize(frameDepth, frameDepth, 0, 255, cv::NORM_MINMAX, CV_8UC1);
        cv::applyColorMap(frameDepth, frameDepth, cv::COLORMAP_JET);

        cv::imshow("rgb", frameRgb);
        cv::imshow("depth_color", frameDepth);

        int key = cv::waitKey(1);
        if(key == 'q' || key == 'Q') {
            return 0;
        }
    }
    return 0;
}
I was also able to reproduce the stated issue using the above code, and verified that the sequence numbers were offset by 5-10 by including some print statements of the depth and rgb ImgFrame objects' sequence numbers.