[BUG] Network Passthrough & Hanging
Hi @dexter-8, could you convert v-test.cpp to a Python script?
I'm not that good at Python, but I could try to provide a full Visual Studio project to build/run it if that helps.
I am also experiencing the same (or a very similar) issue in Python. The network runs fine, but if I try to link the network's passthrough and visualize it, the pipeline freezes. Python 3.10, depthai.__version__ '2.16.0.0', Ubuntu 22.04.
Possibly related: if I log the timestamp or sequence number of the network's output, it is stuck at zero, even though I am clearly getting new outputs from the network.
@jjd9 please provide a full MRE, thanks.
Thanks for the fast reply @Erol444. While setting up the MRE, I actually figured out what my problem was :facepalm: I will attach it anyway for posterity. Here is the MRE. This innocuous-looking code will freeze the pipeline (although you won't be able to run it because GitHub won't let me share the blob...).
import cv2
import depthai as dai
pipeline = dai.Pipeline()
pipeline.setOpenVINOVersion(dai.OpenVINO.VERSION_2022_1)
# Define sources and outputs
monoLeft = pipeline.create(dai.node.MonoCamera)
# Properties
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
# NN to do something cool!
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("test_simplified.blob")
# Resize to NN shape
NN_SHAPE = 240, 320
manipLeft = pipeline.create(dai.node.ImageManip)
manipLeft.initialConfig.setResize(NN_SHAPE[1], NN_SHAPE[0])
manipLeft.initialConfig.setFrameType(dai.RawImgFrame.Type.GRAY8)
monoLeft.out.link(manipLeft.inputImage)
manipLeft.out.link(nn.inputs['image'])
nn.inputs['image'].setWaitForMessage(False)
# Send network output and frames to host
nn_xout = pipeline.create(dai.node.XLinkOut)
nn_xout.setStreamName("nn")
nn.out.link(nn_xout.input)
left_xout = pipeline.create(dai.node.XLinkOut)
left_xout.setStreamName("left")
nn.passthrough.link(left_xout.input) # FREEZES pipeline
# Pipeline is defined, now we can connect to the device
with dai.Device(pipeline) as device:
    qNn = device.getOutputQueue(name="nn", maxSize=4, blocking=False)
    qLeft = device.getOutputQueue(name="left", maxSize=4, blocking=False)
    while True:
        left_get = qLeft.get()
        nn_output = qNn.get()
        # Logging timestamps: you should see that the frame timestamp increases normally, whereas the NN timestamp is stuck at 0
        print(f"Frame: {left_get.getTimestamp()}, NN: {nn_output.getTimestamp()}")
This is the problem:
manipLeft.out.link(nn.inputs['image'])
nn.inputs['image'].setWaitForMessage(False)
This syntax results in the network working, but the pipeline freezes if I try to get the output.
But if I change it to
manipLeft.out.link(nn.input)
nn.input.setWaitForMessage(False)
the passthrough and the timestamps work properly. Which I guess makes sense, because nn.inputs["image"] probably isn't connected to the passthrough the way nn.input is, so nothing ever actually triggers a message callback on the passthrough link.
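For what it's worth, the NeuralNetwork node also exposes a passthroughs output map alongside the single passthrough output. Assuming passthroughs['image'] mirrors the messages arriving on inputs['image'] (I haven't verified this on 2.16.0.0), a sketch of keeping the named input while still getting the frame back would be:
# Sketch only: use the per-input passthrough when feeding a named input.
# Assumes nn.passthroughs['image'] corresponds to nn.inputs['image']; not verified on depthai 2.16.0.0.
manipLeft.out.link(nn.inputs['image'])
nn.inputs['image'].setWaitForMessage(False)

left_xout = pipeline.create(dai.node.XLinkOut)
left_xout.setStreamName("left")
nn.passthroughs['image'].link(left_xout.input)  # instead of nn.passthrough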
Just curious: why was this closed, since the original problem hasn't been solved?
@dexter-8 Because 3 months have passed without any response. Feel free to respond with an MRE and reopen the issue, or create a new one.
The first entry from January has everything in the zip file.
How do I reopen this ticket?
Here you go. As mentioned, I am not familiar with C++.
So there is no one that can debug issues supplied in C++?
I think your problem is that you are not passing an ImgFrame message to your network from the host; you are passing your image data as a dai::NNData message. That works in the sense that your network runs, but it may be causing the odd behavior you are experiencing. This does not happen when you use the camera preview as your network input, because the preview does pass an ImgFrame message into your network. Have you tried passing your input as an ImgFrame message instead, like in this example: https://docs.luxonis.com/projects/api/en/latest/samples/MobileNet/video_mobilenet/#video-mobilenetssd
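Roughly, the host-side feed in that example boils down to the sketch below. The blob path, the 320x240 GRAY8 frame, and the stream names are assumptions carried over from this thread, not taken from the linked sample:
import numpy as np
import depthai as dai

pipeline = dai.Pipeline()

# Host -> device input stream instead of a camera node
xin = pipeline.create(dai.node.XLinkIn)
xin.setStreamName("inFrame")

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("test_simplified.blob")
xin.out.link(nn.input)

nn_xout = pipeline.create(dai.node.XLinkOut)
nn_xout.setStreamName("nn")
nn.out.link(nn_xout.input)

with dai.Device(pipeline) as device:
    qIn = device.getInputQueue("inFrame")
    qNn = device.getOutputQueue("nn", maxSize=4, blocking=False)

    gray = np.zeros((240, 320), dtype=np.uint8)  # stand-in for a real image

    # Wrap the raw data in an ImgFrame message rather than an NNData message
    frame = dai.ImgFrame()
    frame.setType(dai.RawImgFrame.Type.GRAY8)
    frame.setWidth(320)
    frame.setHeight(240)
    frame.setData(gray.flatten())
    qIn.send(frame)

    result = qNn.get()
    print(result.getTimestamp(), result.getSequenceNum())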
This whole issue only surfaced because I was forced to feed the network from the host, since the camera preview wasn't getting the correct shape when fed to the network (issue 908). If I change the message type to ImgFrame, then I can't use setLayer to provide the data. Is there another way to get around this, other than using old zoo models?