
How to process the output of a YOLOv5n model

HardcoreBudget opened this issue 1 year ago · 3 comments

My camera is an AXIS P1467-LE Bullet Camera, and I was able to upload a YOLOv5n TFLite model to it successfully. I want to use that model, but I am struggling with post-processing its output. The exported TFLite model has only one output tensor instead of the usual four found in SSD MobileNet models (bounding boxes, classes, per-detection scores, and the number of detections). How can I extract these from the inference output and draw the bounding boxes correctly on my image? I already tried exporting YOLOv5n with NMS included to get the four outputs, but then the model becomes unsupported by the camera because it can't allocate the tensors.

I have tried implementing the solution provided here: https://stackoverflow.com/questions/65824714/process-output-data-from-yolov5-tflite with the inference client instead of the TFLite interpreter, but with no luck: the output image still didn't have the bounding boxes. Could you please provide example code showing how to actually use the YOLOv5n TFLite model on an Axis camera?

On another note: I tried the SSD MobileNetV2 TFLite model provided in the ACAP model zoo, and it has a different architecture from the one I trained from the TensorFlow model zoo, which makes the ACAP TFLite model significantly faster than TensorFlow's on my Axis camera. Where can I get the TensorFlow weights for that SSD MobileNetV2 TFLite model, so that I can train it on my custom data?

HardcoreBudget · Jul 31 '24 14:07

Hi @HardcoreBudget

Here you can find some guidelines on how to post-process YOLO output: https://github.com/AxisCommunications/axis-model-zoo/discussions/45
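For reference, here is a minimal NumPy sketch of the usual decoding, assuming the standard YOLOv5 export layout: a single tensor of shape (1, N, 5 + num_classes) where each row is (cx, cy, w, h, objectness, class scores...), with coordinates normalized to the model input size. It is not Axis-specific code, just an illustration of the steps described in the discussion linked above; if your model is quantized to int8, dequantize with the output tensor's scale and zero point before running it.

```python
import numpy as np

def decode_yolov5(pred, conf_thres=0.25, iou_thres=0.45):
    """Decode a raw YOLOv5 output tensor of shape (1, N, 5 + num_classes)."""
    pred = pred[0]                               # (N, 5 + num_classes)
    obj = pred[:, 4]
    cls_scores = pred[:, 5:]
    cls_ids = np.argmax(cls_scores, axis=1)
    scores = obj * cls_scores[np.arange(len(pred)), cls_ids]

    # Drop low-confidence rows before NMS
    keep = scores > conf_thres
    boxes, scores, cls_ids = pred[keep, :4], scores[keep], cls_ids[keep]

    # cx, cy, w, h -> x1, y1, x2, y2
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2

    # Simple class-agnostic NMS
    order = scores.argsort()[::-1]
    final = []
    while order.size:
        i = order[0]
        final.append(i)
        rest = order[1:]
        xx1 = np.maximum(xyxy[i, 0], xyxy[rest, 0])
        yy1 = np.maximum(xyxy[i, 1], xyxy[rest, 1])
        xx2 = np.minimum(xyxy[i, 2], xyxy[rest, 2])
        yy2 = np.minimum(xyxy[i, 3], xyxy[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (xyxy[i, 2] - xyxy[i, 0]) * (xyxy[i, 3] - xyxy[i, 1])
        area_r = (xyxy[rest, 2] - xyxy[rest, 0]) * (xyxy[rest, 3] - xyxy[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thres]

    return xyxy[final], scores[final], cls_ids[final]
```

The boxes come back in normalized coordinates, so multiply by your image width and height before drawing them.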

I'm not sure I understood the second part: which version was faster, and which weights are you looking for?

Corallo · Jul 31 '24 17:07

Hello @Corallo, thanks for your response. I'll look into it and get back to you. As for my second request: the ACAP version of SSD MobileNetV2 was much faster, so I was looking for its model weights and graph so that I can train it on my custom data.

HardcoreBudget · Jul 31 '24 17:07

Most likely your version of SSD MobileNetV2 was fine, but quantized per channel. That can have a large impact on latency. See the documentation for more info on how to quantize correctly for ARTPEC-8.
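As a rough sketch of what per-tensor full-integer quantization looks like with the TFLite converter (the saved-model path and representative dataset here are placeholders, and the per-tensor flag is an experimental converter attribute whose name may vary between TensorFlow versions):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Replace with ~100 preprocessed images from your own training data.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Ask for per-tensor rather than per-channel quantization
# (experimental flag; check the documentation for your TF version).
converter._experimental_disable_per_channel = True

with open("model_per_tensor.tflite", "wb") as f:
    f.write(converter.convert())
```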

Corallo · Aug 01 '24 15:08

@Corallo Thanks, I was able to run the YOLOv5n model successfully.

Now for SSD MobileNetV2: I have been quantizing the TensorFlow model zoo's SSD MobileNetV2 per tensor as recommended in the documentation, but the ACAP model's architecture looks different when viewed in Netron, and it is much faster on the camera.

HardcoreBudget · Aug 05 '24 07:08

Hi,

That version of SSD MobileNet comes from here: https://coral.ai/models/object-detection/. This should be its config file for TensorFlow: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_quantized_300x300_coco.config
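For retraining on custom data, these are the fields one would typically edit in a config of that kind (a rough outline only; the exact structure and placeholder paths should be taken from the linked file itself):

```
model {
  ssd {
    num_classes: 3   # your number of classes
    # ...
  }
}
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  # ...
}
train_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/train.record"
  }
}
```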

Corallo · Aug 05 '24 11:08