[ImageManip(7)] [error] Invalid configuration or input image - skipping frame
Hi, on my Blazepose repository, I am investigating the following issue: geaxgx/depthai_blazepose#29
In summary, we get the message [ImageManip(7)] [error] Invalid configuration or input image - skipping frame under certain conditions that are still not perfectly clear. It happens from time to time when the script node sends an ImageManip config to the ImageManip node (7) (the blue arrow in the graph below):
The ImageManip configuration is a dynamically calculated setCropRotatedRect operation. The RotatedRect is defined with normalized values (normalizedCoords=True). My first hypothesis was that the config sent was invalid, but if I apply the exact same RotatedRect in a very basic example (a pipeline with a ColorCamera and an ImageManip), I don't get any error.
Actually, I wonder whether invalid values are even possible in a setCropRotatedRect operation. I have tried even absurd values without getting any error. I am aware that the output images may be too big, but in that case the error message is different ([error] Output image is bigger than maximum frame size specified in properties).
Knowing the exact conditions under which the message [error] Invalid configuration or input image happens would help me investigate. Can you help me with this?
Hi @geaxgx ,
I would first check the depthai version (on your computer, as well as on @shamus333's, who opened the issue), as we have recently added a bunch of ImageManip fixes, so it no longer throws an error in cases like the one he mentioned (ROI for ImageManipConfig out of bounds; ERROR :-0.3662109375...). And even if this did happen, I would suggest adding support for such edge cases so it doesn't freeze/crash, as ImageManip won't actually crash in such cases, it just won't output any ImgFrame (if the pipeline/host side expects an ImgFrame, e.g. a blocking .get(), that could be the problem). Thoughts?
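To illustrate the blocking-.get() point: the host side can avoid freezing on a skipped frame by polling with tryGet() and giving up after a quiet period. A minimal sketch — the generator name and timeout values are my own; any object with a tryGet() method works here, including the real device.getOutputQueue(...):

```python
import time

def poll_frames(queue, timeout_s=5.0, poll_interval_s=0.005):
    """Yield frames from a depthai-style output queue without blocking forever.

    `queue` only needs a tryGet() method that returns a frame or None, so the
    real DataOutputQueue works here, as does any stand-in for testing.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = queue.tryGet()
        if frame is not None:
            deadline = time.monotonic() + timeout_s  # reset on activity
            yield frame
        else:
            time.sleep(poll_interval_s)  # no frame yet; keep the UI responsive
```

If ImageManip silently drops a frame, this loop simply sees one fewer frame instead of hanging on a blocking get().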
Thanks, Erik
Hi @Erol444 , I am currently using depthai 2.17.3
It is true that some of the values that define the ROI are weird in the context of blazepose (they are generated from the output of the pose detection model, and I have to investigate on that side too). But these same "weird" values work well in a simpler context (a pipeline with a ColorCamera and an ImageManip), so I wonder whether the problem comes from something else. A more precise message than [error] Invalid configuration or input image would help.
I agree with you: checking the values of the ROI against thresholds makes sense to avoid edge cases, but determining these thresholds is not straightforward (for instance, for blazepose, a ROI center outside of the image is possible).
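A pre-flight check along those lines could look like the sketch below. The pixel budget is an illustrative assumption, not a value from the depthai firmware, and the function name is hypothetical; note it deliberately does not reject an out-of-image center, since that is valid for blazepose:

```python
# Hypothetical sanity check to run (on host or in the Script node) before
# sending an ImageManipConfig. max_src_pixels is an assumed budget, NOT a
# documented firmware limit.

def roi_looks_safe(w_norm, h_norm, frame_w, frame_h, max_src_pixels=4_000_000):
    """Heuristic check on a normalized RotatedRect crop.

    Rejects only degenerate sizes and crops whose source region in pixels
    exceeds a rough budget; the center is allowed to lie outside the image.
    """
    if w_norm <= 0 or h_norm <= 0:
        return False  # degenerate rectangle
    # Pixel area of the source region the warp would have to read
    src_pixels = (w_norm * frame_w) * (h_norm * frame_h)
    return src_pixels <= max_src_pixels
```

The hard part, as noted above, is picking the budget: without knowing the firmware's actual limit, any threshold is a guess.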
@Erol444 Please have a look at the following MRE:
#!/usr/bin/env python3
import cv2
import depthai as dai
# Create pipeline
pipeline = dai.Pipeline()
camRgb = pipeline.create(dai.node.ColorCamera)
# camRgb.setPreviewSize(640, 480) # Works !
camRgb.setPreviewSize(1152, 648) # -> [ImageManip(1)] [error] Invalid configuration or input image - skipping frame
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
manipRgb = pipeline.create(dai.node.ImageManip)
manipRgb.setMaxOutputFrameSize(256*256*3)
manipRgb.initialConfig.setResize(256, 256)
rgbRr = dai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = 0.4619140625, 4.06944465637207
rgbRr.size.width, rgbRr.size.height = 6.519899845123291, 11.590932846069336
rgbRr.angle = 1.952752709388733
manipRgb.initialConfig.setCropRotatedRect(rgbRr, True)
camRgb.preview.link(manipRgb.inputImage)
manipRgbOut = pipeline.create(dai.node.XLinkOut)
manipRgbOut.setStreamName("manip_rgb")
manipRgb.out.link(manipRgbOut.input)
with dai.Device(pipeline) as device:
    qRgb = device.getOutputQueue(name="manip_rgb", maxSize=8, blocking=False)
    while True:
        inRgb = qRgb.get()
        cv2.imshow('Color', inRgb.getCvFrame())
        if cv2.waitKey(1) == 27:
            break
Note the impact of the preview size:
camRgb.setPreviewSize(640, 480) # Works !
camRgb.setPreviewSize(1152, 648) # -> [ImageManip(1)] [error] Invalid configuration or input image - skipping frame
Also, I wrote yesterday that some of the values that define the ROI are weird in the context of blazepose. Actually, I was wrong: after further investigation, it turns out they all make sense. If you look, for instance, at the size of the rotated rectangle (rgbRr.size.width, rgbRr.size.height = 6.519899845123291, 11.590932846069336), it seems huge, but it happens when the camera is very close to the face and the face fills a big part of the image. In that case, the zone supposed to contain the whole body (= the expected output of the ImageManip) is much bigger than the source image.
I hope the MRE will help you find out what the problem is. Thank you!
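To put numbers on the preview-size impact, denormalizing the MRE's rect against both preview sizes shows how much more source data the warp has to touch at 1152x648 (plain arithmetic, no depthai needed; the helper name is my own):

```python
# Denormalize the MRE's rotated-rect size against both preview sizes to see
# how large a source region ImageManip has to warp in each case.

def denorm(w_norm, h_norm, frame_w, frame_h):
    """Convert a normalized rect size to pixels for a given frame size."""
    return w_norm * frame_w, h_norm * frame_h

small = denorm(6.519899845123291, 11.590932846069336, 640, 480)    # ~4173 x 5564 px
large = denorm(6.519899845123291, 11.590932846069336, 1152, 648)   # ~7511 x 7511 px
```

The same normalized config covers far more source pixels at the larger preview size, which would be consistent with some fixed-size working buffer being exceeded only in the 1152x648 case.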
Thanks @geaxgx for the MRE and sorry for the delay.
@moratom do you mind checking this one out?
I've run the new firmware with the improved logging and the MRE reports the WARP_SWCH_ERR_CACHE_TO_SMALL error here @themarpe.
How should I interpret the WARP_SWCH_ERR_CACHE_TO_SMALL error? Does it mean there is some kind of limit on the memory available to the ImageManip node for its processing? I guess that would make sense. If so, is there an easy way for the script node to know, before it sends the config to the ImageManip node, that the config will trigger this error?
Yes, it is a limit on the available memory for the ImageManip node. Unfortunately there is no easy way to know when the ImageManip will run out of cache, but we are working on the issue.
The best workaround at the moment is to use a two staged ImageManip, so each ImageManip node is doing a smaller resize (see below).
This would be rather cumbersome to implement for your (dynamic) case, but it's probably the best option we have for now.
#!/usr/bin/env python3
"""
The code is edited from docs (https://docs.luxonis.com/projects/api/en/latest/samples/Yolo/tiny_yolo/)
We add parsing from JSON files that contain configuration
"""
from pathlib import Path
import depthai as dai
# Create pipeline
#-----------------------------------------------------------------------------
pipeline = dai.Pipeline()
# Create nodes
#-----------------------------------------------------------------------------
xoutRgbSmall = pipeline.create(dai.node.XLinkOut)
colorCameraNode = pipeline.create(dai.node.ColorCamera)
shutterScript = pipeline.create(dai.node.Script)
yoloInputResizeNodeStageOne = pipeline.create(dai.node.ImageManip)
yoloInputResizeNodeStageTwo = pipeline.create(dai.node.ImageManip)
#XOUT
#-----------------------------------------------------------------------------
xoutRgbSmall.setStreamName("rgb_small")
xoutRgbSmall.input.setBlocking(False)
xoutRgbSmall.input.setQueueSize(1)
# Properties
#-----------------------------------------------------------------------------
largeWidth = 1920
largeHeight = 1080
smallWidth = 320
smallHeight = 320
fps = 10
colorCameraNode.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
colorCameraNode.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
colorCameraNode.setVideoSize(largeWidth, largeHeight)
colorCameraNode.setPreviewSize(smallWidth, smallHeight)
colorCameraNode.setInterleaved(False)
colorCameraNode.setFps(fps)
yoloInputResizeNodeStageOne.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
yoloInputResizeNodeStageOne.initialConfig.setKeepAspectRatio(False)
yoloInputResizeNodeStageOne.initialConfig.setResize(500, 500)
yoloInputResizeNodeStageTwo.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
yoloInputResizeNodeStageTwo.initialConfig.setKeepAspectRatio(False)
yoloInputResizeNodeStageTwo.initialConfig.setResize(320, 320)
shutterScript.setScript("""
import time
ctrl = CameraControl()
ctrl.setCaptureStill(True)
time.sleep(2)
node.warn("Sending still command")
node.io['out'].send(ctrl)
""")
shutterScript.outputs['out'].link(colorCameraNode.inputControl)
colorCameraNode.still.link(yoloInputResizeNodeStageOne.inputImage)
yoloInputResizeNodeStageOne.out.link(yoloInputResizeNodeStageTwo.inputImage)
yoloInputResizeNodeStageTwo.out.link(xoutRgbSmall.input)
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    qRgbSmall = device.getOutputQueue(name="rgb_small", maxSize=4, blocking=False)
    inRgbSmall = qRgbSmall.get()
Thanks for the reply. I am not sure how to make a two-staged ImageManip from an ImageManip doing a setCropRotatedRect(). Also, does this workaround guarantee that there will be no error anymore, or does it just reduce the probability of getting the error?
In theory, you could first crop a larger rectangle and then crop the final one. And yes, this would only lower the probability of getting the error, not prevent it completely.
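The "crop a larger rectangle first" idea can be sketched as pure geometry. The helper below is hypothetical (the name, the degree-based angle convention, and the return layout are my assumptions, not depthai API): stage 1 is a plain axis-aligned bounding-box crop of the rotated rect, and stage 2 re-expresses the same rotated rect in the stage-1 crop's normalized coordinates, so each ImageManip warps a smaller region.

```python
import math

def two_stage_crop(cx, cy, w, h, angle_deg):
    """Split one normalized rotated-rect crop into two stages.

    Stage 1: (x, y, width, height) of the axis-aligned bounding box of the
    rotated rect - an ordinary crop.
    Stage 2: (cx, cy, width, height, angle) of the rotated rect expressed in
    the stage-1 crop's own normalized coordinates.
    Values may fall outside [0, 1]; clamping is left to the caller.
    """
    a = math.radians(angle_deg)
    # Half-extents of the axis-aligned bounding box of the rotated rect
    bx = (abs(w * math.cos(a)) + abs(h * math.sin(a))) / 2
    by = (abs(w * math.sin(a)) + abs(h * math.cos(a))) / 2
    stage1 = (cx - bx, cy - by, 2 * bx, 2 * by)
    # The rect is centered in its own bounding box, so stage-2 center is (0.5, 0.5)
    stage2 = (0.5, 0.5, w / (2 * bx), h / (2 * by), angle_deg)
    return stage1, stage2
```

For the dynamic blazepose case, the Script node would compute both stages per frame and send one config to each of two chained ImageManip nodes.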
It turns out we have been able to enlarge the cache available to the ImageManip node in the multi_cam_support branch. I've tested your MRE and the issue doesn't appear anymore.
The depthai that contains the fix can be installed by running:
pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai==2.17.3.1.dev0+b29822e30d782deb9ae8100817b34aea67fb1257
Please test it and report whether everything works. In case the issue persists, it is possible to enlarge the cache manually with pipeline.setImageManipCmxSizeAdjust(+32*1024) (32*1024 is just an example value).
Thank you for creating the MRE and the patience with waiting for the fix @geaxgx .
Thanks @moratom.
The fix seems to do the job: when using pipeline.setImageManipCmxSizeAdjust(160*1024), I wasn't able to reproduce the error. With lower values like 128*1024, I still easily get the error.
Should I consider the fix permanent or temporary?
I'm glad it helped @geaxgx. This is more of a temporary fix; ideally we would allocate all the memory remaining after other nodes have claimed their needs to the ImageManip node, but there is no ETA on that yet.
@geaxgx we're in the process of creating a PoC for it (targeting v2.18.0), where the additional CMX memory will be allocated automatically.
Be on the lookout for the DepthAI v2.18.0 release by the end of the week (or use develop beforehand) :)
@geaxgx
image_manip_dynamic_cmx_allocation branch for an early test :)
Sorry for the late reply @themarpe
image_manip_dynamic_cmx_allocation branch looks good to me as I wasn't able to reproduce the problem. Thanks !
Thanks @geaxgx - merged to develop :)
@themarpe We have tested depthai_blazepose with the new version of depthai (2.18.0) and unfortunately the error happens again. I am a bit surprised because one month ago I wasn't able to get the error with the image_manip_dynamic_cmx_allocation branch. Maybe my tests at that time were not thorough enough ?
For sure, 2.18.0 brings some improvement, since the MRE above now works with camRgb.setPreviewSize(1152, 648).
But I get an error now with camRgb.setPreviewSize(1792, 1008)
The error label is as expected: [error] Not possible to create warp params. Error: WARP_SWCH_ERR_CACHE_TO_SMALL
On depthai_blazepose, the problem is more annoying because sometimes the error is different:
[system] [critical] Fatal error. Please report to developers. Log: 'ResourceLocker' '358'
and the app freezes.
To easily reproduce one of these two errors (unfortunately, I don't know how to choose which one occurs), clone https://github.com/geaxgx/depthai_blazepose, run ./demo.py -e --internal_frame_height 1008, and bring your face very close to the camera so that it fills the image.
Do you have any thoughts? What does the 'ResourceLocker' '358' error mean?
Hi @geaxgx - thanks for the report. We are in the process of combining a couple of fixes for a new release. WRT the ResourceLocker issue, do you mind giving the latest develop a try? Otherwise, for ImageManip, we'll take a closer look in the following week.
Thx @themarpe
I have just tried 2.18.0.0.dev0+b19dee00dd9a1395037ff1ec4ccd2714a12f6ba9 and reproduced the 'ResourceLocker' '358' error several times.