
Does the newest version support float models?

Open zhaoxiaohai opened this issue 1 year ago • 2 comments

I have an Android application that uses the example. In this application I use my own float model, which runs normally. When I try to develop with Flutter following your current example, using the same model, the results are not ideal. My understanding is that your demo uses a quantized model. I want to know whether the problem is with my usage or with my model. Thank you.

zhaoxiaohai avatar Dec 27 '23 10:12 zhaoxiaohai

A loss of precision due to quantization can affect the model's accuracy. Check whether the quantized model from the Flutter example differs noticeably from the float model you started with. You may want to experiment with alternative quantization settings, or even use the original float model in your Flutter application if quantization introduces too much error.
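To make the precision loss concrete, here is a minimal sketch of the affine uint8 quantization scheme TFLite uses for quantized tensors (`q = round(x / scale) + zero_point`). The `scale` and `zero_point` values below are made-up illustrative numbers, not taken from any particular model; the point is that the round trip can only recover the input to within about half a `scale` step.

```python
# Sketch of uint8 affine quantization (the TFLite scheme), with
# illustrative scale/zero_point values chosen for the example.

def quantize(x, scale, zero_point):
    """Map a float to a uint8 code, clamped to [0, 255]."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    """Map a uint8 code back to the float it approximates."""
    return (q - zero_point) * scale

scale, zero_point = 0.05, 128
x = 0.137
q = quantize(x, scale, zero_point)        # -> 131
x_hat = dequantize(q, scale, zero_point)  # -> 0.15
error = abs(x - x_hat)                    # ~0.013, bounded by scale / 2
```

Every activation and weight incurs an error of this kind, which is why a model that is sensitive to small value differences can degrade noticeably after quantization.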

anish891 avatar Jan 21 '24 20:01 anish891

@zhaoxiaohai I am not sure if my answer is correct, but here is my thought: the example declares the decoded image as an unsigned 8-bit integer list (Uint8List). The original, or "float", model is a 32-bit model, so there is a large gap in precision. Quantization shrinks the model from 32 bits to 8 bits to fit the images you feed it (otherwise I would expect the values to overflow). I used to get bad, fluke ML inferences where the model predicted the same single label for every object I tried. I suspect this is not a problem with the model, but with how we implement it. A fun thing to try: declare the input image as a Uint32List and run inference with your original float model (I have never tried it). My speculation is that keeping either the model or the input image in 8 bits is meant to optimize tflite performance on edge mobile devices, which have resource constraints. If that experiment doesn't work, you are more than welcome to quantize your model for a smoother implementation.
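One common cause of the "same label for everything" symptom is an input dtype/range mismatch rather than the model itself: a float model typically expects normalized float32 input (e.g. in [0, 1] or [-1, 1]), while a quantized model takes raw uint8 pixels. A hedged sketch of the two preprocessing paths, in Python for readability (the same idea applies to the Dart side); the mean/std of 127.5 is a common convention, not a value from any specific model:

```python
# Illustrative preprocessing paths for float vs. quantized models.
# mean/std = 127.5 maps raw 0..255 pixels into roughly [-1, 1];
# your model may expect [0, 1] or other statistics instead.

def preprocess_for_float_model(pixels, mean=127.5, std=127.5):
    """Scale raw uint8 pixel values into normalized float32-style input."""
    return [(p - mean) / std for p in pixels]

def preprocess_for_quant_model(pixels):
    """Quantized uint8 models take the raw bytes unchanged."""
    return list(pixels)

raw = [0, 64, 128, 255]
floats = preprocess_for_float_model(raw)  # -> [-1.0, ~-0.498, ~0.004, 1.0]
```

Feeding the raw 0..255 values straight into a float model (the equivalent of passing the Uint8List through untouched) puts every input far outside the range the model was trained on, which can easily collapse all predictions onto one label.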

qu-ngx avatar Feb 17 '24 03:02 qu-ngx