[Feature Request] On device NNData (FP16) to ImgFrame (UINT8) conversion
I am trying to link the output of a NeuralNetwork node to a DetectionNetwork node without re-uploading the frame from the host. My NN is trained on red-channel-split grayscale images, and I had to use a kornia script to convert the preview frames, as ImageManip does not support this conversion yet. However, the detection network cannot find the object when I link those nodes directly, and I am unable to tell whether the NN is receiving valid images. To display the passthrough I have to do the reshape -> transpose -> uint8 conversion on the host, as sketched below.
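For reference, the host-side conversion is roughly the following; the planar (3, H, W) FP16 layout is an assumption about this particular model, and `nndata_to_frame` is just an illustrative helper:

```python
import numpy as np

# Minimal sketch of the host-side FP16 -> UINT8 conversion, assuming
# the first output layer is a planar (3, H, W) FP16 tensor.
def nndata_to_frame(nn_data, height, width):
    flat = np.array(nn_data.getFirstLayerFp16())   # FP16 tensor as a flat float list
    chw = flat.reshape(3, height, width)           # restore planar CHW layout
    hwc = chw.transpose(1, 2, 0)                   # interleave to HWC for display
    return np.clip(hwc, 0, 255).astype(np.uint8)   # FP16 -> UINT8
```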
After that, I inserted an RGB -> BGR -> RGB kornia .blob and linked it to a Script node that takes the NNData and converts it to an RGB888p ImgFrame, which then feeds the face-detection NN. However, the passthrough outputs a broken image like the one shown below.
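The Script glue is roughly the following (stream names and the 300x300 size are placeholders); since NNData.getData() still returns the raw FP16 payload, reinterpreting those bytes as RGB888p pixels is presumably what breaks the passthrough:

```python
# Runs inside a Script node on the device; 'nn_in'/'frame_out' and
# the 300x300 size are assumptions for illustration.
while True:
    nn_data = node.io['nn_in'].get()       # NNData from the kornia blob
    frame = ImgFrame()
    frame.setData(nn_data.getData())       # payload bytes are still FP16!
    frame.setType(ImgFrame.Type.RGB888p)
    frame.setWidth(300)
    frame.setHeight(300)
    node.io['frame_out'].send(frame)
```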
It would be great if you could implement a way to do the NNData (FP16) to ImgFrame (UINT8) conversion on the device.
Do you think this feature is feasible, or do we have to wait for the 3rd series of OAK devices?
It's a tad on the backlog for now, but we might address it once some time frees up.
If this is an important business feature for you, feel free to reach out to us to speed the implementation along.
Yes, I already wrote to the support email, and Erik suggested that I post it here. The reason is that we are considering scaling a project already in production, and we want some of the processing to be done in-camera without having to send every frame back and forth. If that is not possible, we will use another solution for future iterations.
@arufl
Sorry for the delay - performing just the FP16 to U8 conversion is possible by specifying U8 as the output type when compiling the network. In that case the actual conversion will be done by the network itself, so the output tensor will be of U8 type.
@themarpe I tried with blobconverter, setting compile_params=["-op U8"], but I got a "U8 is not a valid precision for output layer" error.
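For completeness, the compile call was along these lines (model path, data type, and shave count are placeholders; only compile_params matters here):

```python
import blobconverter

blob_path = blobconverter.from_onnx(
    model="kornia_preproc.onnx",
    data_type="FP16",
    shaves=6,
    compile_params=["-op U8"],  # rejected: "U8 is not a valid precision for output layer"
)
```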
CC: @tersekmatija, is this a known limitation?
We haven't had a case where that would be required, but yes, it seems that the MX does not support FP -> U8 conversion. Models also fail to compile if the conversion is manually inserted in PyTorch via Tensor.to(torch.uint8). However, you can output I32, so if we can add some conversion for I32 <-> I8 in ImageManip, that could work as well. @themarpe
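As a reference for that failure mode, a cast like the following (an illustrative wrapper module, not any specific model) is what the MX compiler rejects at export time:

```python
import torch
import torch.nn as nn

class U8Output(nn.Module):
    """Wraps a model and appends an explicit uint8 cast to its output."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        y = self.backbone(x)
        # The cast below is the Tensor.to(torch.uint8) that fails to compile.
        return y.clamp(0, 255).to(torch.uint8)
```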
Only U8 & FP16/U16 are targeted.
We'll have to create a conversion node.