
[Feature Request] On device NNData (FP16) to ImgFrame (UINT8) conversion


I am trying to link the output of a NeuralNetwork with a DetectionNetwork without reuploading the frame from the host. My NN is trained on red-channel-split grayscale images, and I had to use a kornia script to convert the preview frames, as ImageManip does not support that yet. However, the detection network is not able to find the object if I link those nodes directly, and I cannot tell whether the NN is receiving valid images. To display the passthrough I have to do the reshape -> transpose -> uint8 conversion on the host.
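For reference, the host-side conversion described above can be sketched roughly as follows. This is a minimal illustration with numpy, not the poster's actual code; the tensor shape, value range, and helper name are assumptions:

```python
import numpy as np

def nndata_to_uint8_hwc(flat_fp16, channels, height, width):
    """Hypothetical helper: turn a flat FP16 CHW buffer (as returned by
    NNData.getLayerFp16-style accessors) into a displayable uint8 HWC image.
    Assumes the model already outputs values in the 0..255 range."""
    chw = np.asarray(flat_fp16, dtype=np.float16).reshape(channels, height, width)
    hwc = chw.transpose(1, 2, 0)            # CHW -> HWC for display
    return np.clip(hwc, 0, 255).astype(np.uint8)

# Tiny dummy tensor: 3 channels, 2x2 spatial size
flat = np.arange(12, dtype=np.float16)
img = nndata_to_uint8_hwc(flat, channels=3, height=2, width=2)
print(img.shape, img.dtype)  # (2, 2, 3) uint8
```

The feature request is essentially to have this reshape-and-cast step run on the device instead of round-tripping every frame through the host.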

After that, I inserted an RGB -> BGR -> RGB kornia .blob and linked it with a script that takes the NNData and converts it to an RGB888p ImgFrame, which then feeds a face-detection NN. However, the passthrough outputs a broken image like the one shown below.

[image: broken passthrough output from the kornia blob]

It would be great if you could implement a way to do the NNData (FP16) to ImgFrame (UINT8) conversion on the device.

arufl avatar Aug 29 '22 07:08 arufl

Do you think that this feature is feasible or do we have to wait for the 3rd series of OAK devices?

arufl avatar Oct 14 '22 10:10 arufl

It's a tad on the backlog for now, but we might address it once some time frees up.

If this is an important business feature for you, feel free to reach out to us to speed such an implementation along.

themarpe avatar Oct 14 '22 15:10 themarpe

Yes, I already wrote to the support email and Erik suggested that I should post it here. The reason is that we are considering scaling a project already in production and we wanted some of the processing to be done in-camera without having to send every frame back and forth. If that is not possible, we would use another solution for future iterations.

arufl avatar Oct 14 '22 20:10 arufl

@arufl

Sorry for the delay - performing just the FP16 to U8 conversion is possible by specifying U8 as the output type when compiling the network. In that case the conversion is done by the network itself, so the output tensor will be of U8 type.

themarpe avatar Nov 08 '22 01:11 themarpe

@themarpe I tried with blobconverter, setting compile_params=["-op U8"], but I got a "U8 is not a valid precision for output layer" error.

arufl avatar Nov 08 '22 07:11 arufl

CC: @tersekmatija is this a known limitation?

themarpe avatar Nov 08 '22 15:11 themarpe

We haven't had a case where that would be required, but yes, it seems that the MX does not support the FP->U8 output conversion. Models also fail to compile if the conversion is inserted manually in PyTorch via Tensor.to(torch.uint8). However, you can output I32, so if we can add some conversion for I32 <-> I8 in ImageManip, that could work as well. @themarpe
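The proposed workaround amounts to letting the network emit an I32 tensor and having a later stage narrow it to 8 bits. A minimal host-side sketch of that narrowing step (the helper name is hypothetical, not an existing depthai or ImageManip API, and it assumes values are meant to land in 0..255):

```python
import numpy as np

def i32_to_u8(tensor_i32):
    """Hypothetical narrowing step: clamp an int32 tensor to the valid
    uint8 range and cast, mirroring what a device-side conversion
    (e.g. in ImageManip) would need to do."""
    return np.clip(tensor_i32, 0, 255).astype(np.uint8)

out_i32 = np.array([-5, 0, 128, 300], dtype=np.int32)
print(i32_to_u8(out_i32))  # [  0   0 128 255]
```

Clamping (rather than wrapping) is the sensible choice here, since out-of-range activations would otherwise alias to unrelated pixel values.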

tersekmatija avatar Nov 08 '22 17:11 tersekmatija

Only U8 & FP16/U16 are targeted.

We'll have to create a conversion node

themarpe avatar Nov 21 '22 13:11 themarpe