[BUG] TypeError: send(): incompatible function arguments
(UPDATE): Please look at the second entry below. This error was fixed by wrapping my message with a default NN message type.
Hello :) First of all, I apologize if this question has been asked before. I also understand this is probably not a bug, but I am not sure.
I trained my own .blob NN, which takes an array as input, and I created the following script to send an array to the device:
```python
from pathlib import Path

import depthai as dai
import torch

# Create pipeline
p = dai.Pipeline()
p.setOpenVINOVersion(dai.OpenVINO.VERSION_2021_4)

# Load my own NN
nn = p.createNeuralNetwork()
nn.setBlobPath(str(Path("resources/nn/carREID/Car.blob").resolve().absolute()))
nn.setNumInferenceThreads(2)
nn.input.setBlocking(True)

# Define XLink nodes
xinArray = p.createXLinkIn()
nnOut = p.createXLinkOut()
xinArray.setStreamName("inArray")
nnOut.setStreamName("nn")

# Link everything
xinArray.out.link(nn.input)
nn.out.link(nnOut.input)

# Initialize the device
with dai.Device(p) as device:
    # Input queue to send arrays to the device
    qIn = device.getInputQueue(name="inArray")
    # Output queue will be used to get NN data from the arrays
    qDet = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    # Start!
    while True:
        # Create a random array
        x = torch.randn(1, 3, 64, 128).numpy()
        # Send the array to the device and obtain the result
        qIn.send(x)
        inDet = qDet.tryGet()
```
However, if I run this script, I get the following error:
```
TypeError: send(): incompatible function arguments. The following argument types are supported:
    1. (self: depthai.DataInputQueue, msg: depthai.ADatatype) -> None
    2. (self: depthai.DataInputQueue, rawMsg: depthai.RawBuffer) -> None
```
I understand that I need to transform my array, but I don't understand how. Thank you for your help :) :)
Hello again,
I have wrapped my message using the NNData message type. However, I now get a new error that I don't understand. Here is my updated code:
```python
from pathlib import Path

import cv2
import depthai as dai
import numpy as np
import time
import torch

p = dai.Pipeline()
p.setOpenVINOVersion(dai.OpenVINO.VERSION_2021_4)

nn = p.createNeuralNetwork()
nn.setBlobPath(str(Path("resources/nn/carREID/Car.blob").resolve().absolute()))
nn.setNumInferenceThreads(2)
nn.input.setBlocking(True)

xinArray = p.createXLinkIn()
nnOut = p.createXLinkOut()
xinArray.setStreamName("inArray")
nnOut.setStreamName("nn")

# Linking
xinArray.out.link(nn.input)
nn.out.link(nnOut.input)

with dai.Device(p) as device:
    # Input queue to send arrays to the device
    qIn = device.getInputQueue(name="inArray")
    # Output queue will be used to get NN data from the arrays
    qDet = device.getOutputQueue(name="nn", maxSize=84, blocking=False)

    counter = 0
    while True:
        x = torch.randn(1, 3, 64, 128)
        # Now I convert my message here
        msg = dai.NNData()
        msg.setData(x)
        # Send to the NN to obtain the result
        qIn.send(msg)
        inDet = qDet.tryGet()
        if inDet is not None:
            print('inDet: ', inDet)
```
However, when running the above I get the following error:
[NeuralNetwork(0)] [error] Input tensor 'input' (0) exceeds available data range. Data size (0B), tensor offset (0), size (24576B) - skipping inference
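For reference, the expected size in the error message lines up with the element count of the script's input shape. This is a small arithmetic sketch, not device code; the interpretation of the blob's input precision is an assumption:

```python
import numpy as np

# Shape used in the script above
elements = 1 * 3 * 64 * 128
print(elements)  # 24576

# The error expects a tensor of 24576 B. At one byte per element (U8 input)
# that matches this shape exactly; an FP16 input of the same shape would
# instead need 2 bytes per element:
fp16_bytes = elements * np.dtype(np.float16).itemsize
print(fp16_bytes)  # 49152
```

The "Data size (0B)" part of the message indicates the device received no payload at all, which points at how the data was attached to the NNData message rather than at its size.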
Does anyone know how to fix this? I think it has something to do with the input data flow, but I can't find anything about it online. Please help.
Hi @TomasMendozaHN, could you change your x variable to be a planar list (a flat, one-dimensional plain Python list) and try again? I don't think NNData.setData would accept the torch.Tensor object that is returned by randn.
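To illustrate the suggested fix, here is a minimal sketch of just the conversion step (no device code; it uses NumPy instead of torch, and assumes the same random input shape as in the thread):

```python
import numpy as np

# Same input shape as in the posted script
x = np.random.randn(1, 3, 64, 128).astype(np.float32)

# Flatten to a plain one-dimensional Python list, as suggested above
planar = x.flatten().tolist()

print(type(planar), len(planar))  # <class 'list'> 24576
```

In the posted script, `msg.setData(x)` would then become `msg.setData(planar)`; whether this particular blob then runs also depends on its expected input precision, which is not shown in the thread.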