
TFLM inference results abnormal


Hello, I recently ran into an issue when deploying a YOLO-World model to a device with TFLM. With the same INT8 per-channel quantized TFLite model and the same image tensor as input, there is a significant discrepancy between the output tensors produced by TFLM and by tensorflow.lite.Interpreter.
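For reference, the host-side part of my comparison looks roughly like the sketch below (file names are placeholders; the TFLM outputs are captured on the device and dumped to .npy files):

```python
import numpy as np
import tensorflow as tf

# Reference run with tensorflow.lite.Interpreter, using the same INT8 model
# and the same pre-quantized INT8 image tensor that is fed to TFLM on device.
interpreter = tf.lite.Interpreter(model_path="yolo_world_int8.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
interpreter.set_tensor(input_detail["index"], np.load("input_int8.npy"))
interpreter.invoke()

# Dump the reference output tensors for comparison against the TFLM dumps.
for i, detail in enumerate(interpreter.get_output_details()):
    np.save(f"reference_output_{i}.npy", interpreter.get_tensor(detail["index"]))
```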

[image]

As shown in the figure, the model has 6 outputs; the blue and orange histograms show the INT8 output tensors from tensorflow.lite.Interpreter and from TFLM, respectively. In INT8 space, the proportion of mismatching values even exceeds 1/3 for some outputs.
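The mismatch proportion above is measured elementwise, roughly like this (the .npy file names are placeholders for the dumped tensors):

```python
import numpy as np

# Elementwise comparison of one output tensor, both dumped as INT8 .npy files,
# cast to int32 so the difference does not wrap around.
ref = np.load("reference_output_0.npy").astype(np.int32)
tflm = np.load("tflm_output_0.npy").astype(np.int32)

diff = tflm - ref
print("mismatch ratio:", np.mean(diff != 0))
values, counts = np.unique(diff, return_counts=True)
print("difference histogram:", dict(zip(values.tolist(), counts.tolist())))
```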

[image]

However, after the complex post-processing, the final result only shows bounding boxes offset by a few pixels.

I also modified the model's flatbuffer to expose the outputs of certain intermediate tensors (a sketch of that edit follows the figure below):

[image]
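Exposing an intermediate tensor here just means appending its index to the subgraph's output list. A minimal sketch of that edit, assuming flatbuffer_utils from the TensorFlow source tree is importable (it may not ship in the pip wheel; the same edit can be done with the flatbuffers bindings generated from schema.fbs), and with the tensor index as a placeholder:

```python
from tensorflow.lite.tools import flatbuffer_utils

# Load the model into the flatbuffers object API.
model = flatbuffer_utils.read_model("yolo_world_int8.tflite")
subgraph = model.subgraphs[0]

# Placeholder: index of the intermediate tensor to expose as a graph output.
TENSOR_INDEX = 42

outputs = list(subgraph.outputs)
if TENSOR_INDEX not in outputs:
    outputs.append(TENSOR_INDEX)
subgraph.outputs = outputs

flatbuffer_utils.write_model(model, "yolo_world_int8_debug.tflite")
```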

It can be seen that errors already occur in the shallow layers; as the network deepens, the accumulated error can distort the final results.

[image]

Although the hand-tweaked logistic implementation in TFLM differs from the one in TFLite (and perhaps more kernels differ; they are stored in the repository under the same file name and path, so it is easy to assume they are identical unless you open the files and compare the implementations), it is the +/-1 offsets that appear right after the convolution that confuse me.
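For context, my guess (an assumption on my part, not something I have traced in either kernel) is that a +/-1 offset like this can come from the per-channel requantization of the INT32 convolution accumulator. A rough Python sketch of the gemmlowp-style fixed-point math used by the reference kernels; a single rounding decision that differs between two implementations moves the INT8 output by exactly one LSB:

```python
def rounding_doubling_high_mul(a: int, b: int) -> int:
    # SaturatingRoundingDoublingHighMul from gemmlowp (saturation corner case omitted).
    ab = a * b
    nudge = (1 << 30) if ab >= 0 else (1 - (1 << 30))
    num = ab + nudge
    # C integer division truncates toward zero.
    return num // (1 << 31) if num >= 0 else -((-num) // (1 << 31))

def rounding_divide_by_pot(x: int, exponent: int) -> int:
    # Rounding arithmetic right shift (round half away from zero).
    mask = (1 << exponent) - 1
    remainder = x & mask
    threshold = (mask >> 1) + (1 if x < 0 else 0)
    return (x >> exponent) + (1 if remainder > threshold else 0)

def requantize(acc: int, multiplier: int, shift: int, output_zero_point: int) -> int:
    # Map an INT32 accumulator back to INT8 with the per-channel quantized
    # multiplier; a different rounding choice in either step above shifts the
    # result by one LSB.
    left_shift = shift if shift > 0 else 0
    right_shift = -shift if shift < 0 else 0
    scaled = rounding_divide_by_pot(
        rounding_doubling_high_mul(acc << left_shift, multiplier), right_shift)
    return max(-128, min(127, scaled + output_zero_point))
```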

Is this a bug in TFLM? If you have any debugging suggestions or solutions, please let me know. Thanks!

Test environments:

  • tensorflow 2.16.2
  • tflite-micro https://github.com/tensorflow/tflite-micro/commit/7a0249686f412551634a5058ddd6d2ec3f224203
  • clang 14.0.0
  • python 3.10.12

Possibly related issues:

  • https://github.com/tensorflow/tflite-micro/issues/2319

Unbinilium · Jul 17 '24 08:07