depthai-python
How to get output as FP32?
I see getLayerFp16(self, name) in the docs; it's used to get the output as float16, but how do I get the output as float32? I can't see any function like getLayerFp32() :((
I'm not sure if this will be possible at least directly in OpenVINO as I think the Myriad X only supports FP16 for neural operations. What is the use-case for FP32? Is it possible to convert from FP16 to FP32 on the host in your case?
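For reference, widening the FP16 output on the host is straightforward with numpy. A minimal sketch; the helper name `fp16_layer_as_fp32` and the sample values are illustrative, not part of the depthai API:

```python
import numpy as np

def fp16_layer_as_fp32(fp16_values):
    # `fp16_values` stands in for the list of floats returned by
    # NNData.getLayerFp16(name). Widening only changes the storage
    # dtype; it cannot restore precision already lost on the device.
    return np.asarray(fp16_values, dtype=np.float32)

# Stand-in data instead of a real inference result:
fp16_out = np.asarray([0.1, 0.25, 0.5], dtype=np.float16)
fp32_out = fp16_layer_as_fp32(fp16_out.tolist())
print(fp32_out.dtype)  # float32
```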
The output when I run on OpenVINO is FP32, but when I run the blob model the output is FP16 and the accuracy is very low. I don't know if the data type is the reason :(((( Note: I converted an SCRFD model from onnx -> openvino -> blob.
Ah got it. Unsure. I'm wondering if @PINTO0309 would be best to comment here. Will ask. He may already know how to do this.
I asked in Discord, here, in the ai_ml_cv channel, here. (Note you have to join via the first link to be able to access the second link.)
Rahul there responded quickly:
I think you have to disable FP16 when converting the model
You've already got your answer. But unfortunately, I don't think the problem will be solved.
There are two possible conversion routes, and both end up at Float16 precision when inference runs on the Myriad. Even if the output could be converted to Float32, the inference itself would still be done at Float16 precision, so the accuracy would inevitably deteriorate. Inference is done in Float16 because the Myriad's hardware only supports Float16.

It is known that the accuracy of models for skeletal and facial keypoint detection degrades significantly when precision is reduced from Float32 to Float16: Float16 has few significant digits, so small values that the model actually needs during inference are rounded away. Keypoint-based models are significantly less suitable for quantization.
1. onnx (FP32) -> OpenVINO (FP16) -> Myriad blob (FP16/U8) input -> Myriad blob (FP16) inference -> Myriad blob (FP16) output
2. onnx (FP32) -> OpenVINO (FP32) -> Myriad blob (FP16/U8) input -> Myriad blob (FP16) inference -> Myriad blob (FP16) output
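The precision loss above can be seen directly in numpy. A toy illustration with made-up values (not SCRFD output): FP16 keeps roughly 3 decimal significant digits, so small differences between nearby values disappear when rounded.

```python
import numpy as np

# Two close scores, distinguishable in FP32:
scores32 = np.array([0.50012, 0.50008], dtype=np.float32)
scores16 = scores32.astype(np.float16)

print(scores32[0] > scores32[1])   # True in FP32
# Near 0.5 the FP16 spacing is 2**-11 ~= 0.00049, so both values
# round to exactly 0.5 and the distinction is lost:
print(scores16[0] == scores16[1])  # True
```

This is why widening the output back to FP32 on the host cannot recover accuracy: the information is already discarded during FP16 inference.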
Is it this model? https://github.com/deepinsight/insightface