[BUG] Color camera out of synch
Describe the bug
When synchronizing the 3 cameras using the sequence number, the color camera is out of sync. Specifically, the left and right cameras are always synchronized, but the color camera is ahead when the sensors start and, after a few minutes, falls behind (and the delay keeps increasing).
To Reproduce
I used the code provided here and modified it by adding the color camera.
```python
#!/usr/bin/env python3
import cv2
import depthai as dai
import numpy as np

# Start defining a pipeline
pipeline = dai.Pipeline()

# Define a source - two mono (grayscale) cameras
cam_left = pipeline.createMonoCamera()
cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
cam_left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

cam_right = pipeline.createMonoCamera()
cam_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
cam_right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

cam_rgb = pipeline.createColorCamera()
cam_rgb.setPreviewSize(1280, 720)
cam_rgb.setBoardSocket(dai.CameraBoardSocket.RGB)
cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam_rgb.setInterleaved(False)
cam_rgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

# Create outputs
xout_left = pipeline.createXLinkOut()
xout_left.setStreamName('left')
cam_left.out.link(xout_left.input)

xout_right = pipeline.createXLinkOut()
xout_right.setStreamName('right')
cam_right.out.link(xout_right.input)

xout_rgb = pipeline.createXLinkOut()
xout_rgb.setStreamName("rgb")
cam_rgb.preview.link(xout_rgb.input)


def seq(packet):
    return packet.getSequenceNum()


# https://stackoverflow.com/a/10995203/5494277
def has_keys(obj, keys):
    return all(stream in obj for stream in keys)


class PairingSystem:
    allowed_instances = [0, 1, 2]  # Center (0) & Left (1) & Right (2)

    def __init__(self):
        self.seq_packets = {}
        self.last_paired_seq = None

    def add_packet(self, packet):
        if packet is not None and packet.getInstanceNum() in self.allowed_instances:
            seq_key = seq(packet)
            self.seq_packets[seq_key] = {
                **self.seq_packets.get(seq_key, {}),
                packet.getInstanceNum(): packet
            }

    def get_pairs(self):
        results = []
        for key in list(self.seq_packets.keys()):
            if has_keys(self.seq_packets[key], self.allowed_instances):
                results.append(self.seq_packets[key])
                self.last_paired_seq = key
        if len(results) > 0:
            self.collect_garbage()
        return results

    def collect_garbage(self):
        for key in list(self.seq_packets.keys()):
            if key <= self.last_paired_seq:
                del self.seq_packets[key]


# Pipeline defined, now the device is assigned and the pipeline is started
with dai.Device(pipeline) as device:
    device.startPipeline()

    idx = 0
    # Output queues will be used to get the frames from the outputs defined above
    q_left = device.getOutputQueue(name="left", maxSize=4, blocking=False)
    q_right = device.getOutputQueue(name="right", maxSize=4, blocking=False)
    q_rgb = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)

    ps = PairingSystem()

    while True:
        # Instead of get (blocking), tryGet (non-blocking) is used; it returns the available data or None otherwise
        ps.add_packet(q_left.tryGet())
        ps.add_packet(q_right.tryGet())
        ps.add_packet(q_rgb.tryGet())

        for synced in ps.get_pairs():
            raw_left = synced[1]
            raw_right = synced[2]
            raw_rgb = synced[0]
            frame_left = raw_left.getCvFrame()
            frame_left = cv2.cvtColor(frame_left, cv2.COLOR_GRAY2BGR)
            frame_right = raw_right.getCvFrame()
            frame_right = cv2.cvtColor(frame_right, cv2.COLOR_GRAY2BGR)
            frame_rgb = raw_rgb.getCvFrame()
            frame_final = np.concatenate([frame_left, frame_rgb, frame_right], axis=1)
            h, w = frame_final.shape[:2]
            frame_final = cv2.resize(frame_final, (w // 2, h // 2))
            cv2.imshow('final', frame_final)

        c = cv2.waitKey(1)
        if c == ord('s'):
            cv2.imwrite(f'{idx:05}_image.png', frame_final)
            idx += 1
        if c == ord('q'):
            break
```
Expected behavior
The three cameras should be synchronized, and the delay between cameras (if any) shouldn't vary over time.
Screenshots
I saved a frame of a phone timer every minute to approximately measure the delay (left image is the left camera, center image is the color camera, right image is the right camera).

This behavior is currently expected: the cameras are started in sequence (RGB starting first), and only Left and Right are synchronized. The sequence numbers are incremented independently for each camera. As for the drift over time, even though the FPS is configured as 30 for all cameras by default, there is a small frame-time difference between the IMX378 and the OV9282 due to different clock settings / sensor configuration. It amounts to about 1 frame every 80 seconds.
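To put a rough number on that drift rate (derived only from the figures above, assuming the nominal 30 FPS): the RGB/mono offset grows by about 0.4 ms per second, i.e. the two frame periods differ by only around 14 µs, so the streams slide a full frame apart in roughly 80 seconds.

```python
# Back-of-the-envelope check of the reported drift rate.
# Assumes the nominal 30 FPS mentioned above; all numbers are illustrative.
fps = 30.0
frame_period_s = 1.0 / fps          # ~33.3 ms
drift_window_s = 80.0               # observed: ~1 frame of drift per 80 s

drift_per_second = frame_period_s / drift_window_s  # offset gained per second (~0.42 ms)
drift_per_frame = drift_per_second / fps            # frame-period mismatch (~14 us)

print(f"offset gained per second: {drift_per_second * 1e3:.2f} ms")
print(f"frame-period mismatch:    {drift_per_frame * 1e6:.1f} us")
```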
We'll soon backport the Gen1 sync-related changes from these PRs to fix the issue: https://github.com/luxonis/depthai/pull/157 https://github.com/luxonis/depthai/pull/240
For now, the frame timestamps can be used for syncing.
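For illustration, here is a minimal sketch of what timestamp-based pairing could look like in the script above. The `TimestampPairing` class and the ~17 ms tolerance (about half a frame period at 30 FPS) are assumptions for this example, not the official depthai sync implementation; it relies on `ImgFrame.getTimestamp()`, which returns a `datetime.timedelta`.

```python
from collections import deque

TOLERANCE_S = 0.017  # assumed: ~half a frame period at 30 FPS

class TimestampPairing:
    """Illustrative sketch: pair packets by nearest timestamp instead of sequence number."""

    def __init__(self, streams=(0, 1, 2)):
        self.queues = {s: deque() for s in streams}  # per-stream FIFO of packets

    def add_packet(self, packet):
        if packet is not None and packet.getInstanceNum() in self.queues:
            self.queues[packet.getInstanceNum()].append(packet)

    def get_pair(self):
        # Wait until every stream has at least one buffered packet
        if any(len(q) == 0 for q in self.queues.values()):
            return None
        heads = {s: q[0] for s, q in self.queues.items()}
        times = {s: p.getTimestamp().total_seconds() for s, p in heads.items()}
        if max(times.values()) - min(times.values()) <= TOLERANCE_S:
            # All head packets were captured close enough together: emit them
            for q in self.queues.values():
                q.popleft()
            return heads
        # Otherwise drop the oldest head packet and retry on the next call
        oldest = min(times, key=times.get)
        self.queues[oldest].popleft()
        return None
```

In the reproduction script, `PairingSystem` would be swapped for this class and the main loop would call `get_pair()` (which returns a single `{instance: packet}` dict or `None`) instead of `get_pairs()`.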
Thank you for the reply, I will try to use the timestamp.
@domef Should be fixed now, see here: https://github.com/luxonis/depthai-experiments/pull/159 https://github.com/luxonis/depthai-python/pull/310