
Using quantized tflite models

bjoernholzhauer opened this issue 5 years ago · 19 comments

When I substitute a quantized model into code that works for image classification with the non-quantized model (I simply substituted 'mobilenet_v2_1.0_224_quant.tflite' for 'Mobilenet_V2_1.0_224'), I get: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

Is it possible to use quantized models? If so, how? It would be good to have something about this in the documentation. If not, it would also be good if the documentation said so. And if it is simply not possible at the moment but hopefully will be in the future, please consider this a feature request.

bjoernholzhauer avatar Sep 14 '19 13:09 bjoernholzhauer

Based on some searching of the issues last night, #53 and #59 are both related to this issue. AutoML Vision Edge outputs a quantized tflite model.

Here are two images from Netron showing the differences between the quantized model and the MobileNet v2 model that flutter_tflite currently supports.

[Netron screenshots of the quantized model and the float MobileNet v2 model]

Note that they are the same except one accepts a uint8 list and the other takes a float32 list.
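
To make that concrete, here is a rough sketch of what the difference means for the input buffer you hand to the interpreter (the 1x224x224x3 shape is just the usual MobileNet input, assumed here for illustration):

import 'dart:typed_data';

// Quantized model: one byte per channel value.
// 1 * 224 * 224 * 3 = 150,528 bytes in a Uint8List.
final quantInput = Uint8List(1 * 224 * 224 * 3);

// Float model: the same 150,528 values, but four bytes each
// (602,112 bytes) in a Float32List.
final floatInput = Float32List(1 * 224 * 224 * 3);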

I'm not entirely sure what would need to change on the flutter_tflite side to support this kind of model, but hopefully this helps.

securingsincity avatar Oct 20 '19 14:10 securingsincity

I'm having the same problem and wondering if there is a way or workaround to handle these models.

Statyk7 avatar Nov 14 '19 15:11 Statyk7

From what I have seen, if you use the method that runs detection on binary data, you can use a quantized model. The image-to-ByteList conversion suggested in the docs already uses one 8-bit integer per channel value, as you can see below:

// Assumes: import 'dart:typed_data'; and import 'package:image/image.dart' as img;
Uint8List imageToByteListUint8(img.Image image, int inputSize) {
  // One byte per channel value: raw RGB, no normalization.
  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
  var buffer = Uint8List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = img.getRed(pixel);
      buffer[pixelIndex++] = img.getGreen(pixel);
      buffer[pixelIndex++] = img.getBlue(pixel);
    }
  }
  return convertedBytes.buffer.asUint8List();
}

This conversion should work for a quantized model, but it does not work for a non-quantized one. For a non-quantized model the input buffer has to be four times as large as the one suggested here, because each channel value is a 4-byte float instead of a single byte.

When I run the detections on an image path it works perfectly.

Edit: For non-quantized models the docs suggest:

// Assumes the same imports; the float buffer is returned viewed as raw bytes.
Uint8List imageToByteListFloat32(
    img.Image image, int inputSize, double mean, double std) {
  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
  var buffer = Float32List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      // Four bytes per channel value, normalized with the given mean and std.
      buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
    }
  }
  return convertedBytes.buffer.asUint8List();
}
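
For completeness, a minimal sketch of how these helpers would be wired up to the plugin for a quantized model (asset names are placeholders, and the copyResize/runModelOnBinary parameters follow the image package and the plugin README; treat this as an untested illustration):

import 'dart:typed_data';
import 'package:image/image.dart' as img;
import 'package:tflite/tflite.dart';

// Illustrative only: classify one image with a quantized model by feeding
// raw uint8 bytes through runModelOnBinary.
Future<dynamic> classifyQuantized(img.Image image) async {
  await Tflite.loadModel(
    model: 'assets/mobilenet_v1_1.0_224_quant.tflite',
    labels: 'assets/mobilenet_v1_1.0_224.txt',
  );
  final resized = img.copyResize(image, width: 224, height: 224);
  final Uint8List input = imageToByteListUint8(resized, 224); // no mean/std scaling
  return Tflite.runModelOnBinary(binary: input, numResults: 5, threshold: 0.05);
}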

waltermaldonado avatar Nov 20 '19 14:11 waltermaldonado

Have you been able to run the quantized MobileNet version? It can be found here: https://www.tensorflow.org/lite/guide/hosted_models I have had no success with Mobilenet_V1_1.0_224_quant :( I have tried runModelOnImage and runModelOnBinary using the image-to-byte functions... no results... (and no errors)

But when using the TensorFlow iOS Sample App it works just fine! https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/ios

Statyk7 avatar Nov 21 '19 19:11 Statyk7

No, I've never tried those models, but I think they should work as well. Let us see your code; maybe we can find something...

waltermaldonado avatar Nov 22 '19 01:11 waltermaldonado

I'm using the example provided with the tflite package: https://github.com/shaqian/flutter_tflite/tree/master/example

With an additional asset for the model (the labels are the same as for the non-quantized model) in pubspec.yaml:

  assets:
    - assets/mobilenet_v1_1.0_224_quant.tflite

Then I load the quantized model instead of the non-quantized one in main.dart's loadModel:

default:
  res = await Tflite.loadModel(
    model: "assets/mobilenet_v1_1.0_224_quant.tflite",
    labels: "assets/mobilenet_v1_1.0_224.txt",
  );

That's it!

Statyk7 avatar Nov 22 '19 14:11 Statyk7

Just to clarify, is your non-quantized model a detection model (localization + classification)? Because it seems to me that those quantized models are classification-only models.

waltermaldonado avatar Nov 22 '19 15:11 waltermaldonado

It's an image classification model I believe...

Statyk7 avatar Nov 26 '19 14:11 Statyk7

@Statyk7 @waltermaldonado

I'm integrating my own custom model into this example, but the app crashes when I send an image to the model using the segmentMobileNet method.

I have also tried runModelOnBinary, but the issue still stands.

My custom model was trained in PyTorch and I converted it to TensorFlow using ONNX and then to .tflite. The model is not quantized.

codulers avatar Feb 07 '20 09:02 codulers

I created an image labeling model with AutoML, and since the model should have been quantized, I converted the image to uint8, but the following error was output: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

yumemi-RyoShimizu avatar Feb 11 '20 22:02 yumemi-RyoShimizu

I'm having the same problem!!! Are there any updates on this?

andrsdev avatar Feb 28 '20 16:02 andrsdev

Here are my AutoML properties.

Throws error: Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

[Screenshot of the AutoML model export properties]

andrsdev avatar Feb 28 '20 16:02 andrsdev

Do you have problems with the AutoML-generated tflite file on iOS?

oncul avatar Apr 26 '20 03:04 oncul

@AndrsDev I have the same error here

L-is-0 avatar Apr 29 '20 23:04 L-is-0

Did anyone find a solution to this?

Also getting: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32)

I'm trying with this model and a livestreamed camera image (YUV on Android): https://tfhub.dev/google/lite-model/aiy/vision/classifier/birds_V1/2

The page states:

Inputs are expected to be 3-channel RGB color images of size 224 x 224, scaled to [0, 1]. This model outputs to image_classifier.

I've tried a million things now and I can't get it to work. If I try to convert the streamed image to RGB, I get the UINT8/FLOAT32-error above.
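
Since the error says the input tensor is UINT8, the [0, 1] scaling described on the model page probably applies to the original TF model rather than to this .tflite file, so feeding raw uint8 RGB bytes (as in imageToByteListUint8 above) seems more likely to match. For the YUV part, here is a rough sketch of converting an Android CameraImage (YUV420) to an RGB image, assuming the camera plugin's plane layout and the image package 3.x API:

import 'package:camera/camera.dart';
import 'package:image/image.dart' as img;

// Rough YUV420 -> RGB conversion for a streamed CameraImage (untested sketch).
img.Image yuv420ToImage(CameraImage frame) {
  final width = frame.width;
  final height = frame.height;
  final yPlane = frame.planes[0];
  final uPlane = frame.planes[1];
  final vPlane = frame.planes[2];
  final uvRowStride = uPlane.bytesPerRow;
  final uvPixelStride = uPlane.bytesPerPixel ?? 1;
  final out = img.Image(width, height);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      final yp = yPlane.bytes[y * yPlane.bytesPerRow + x];
      final uvIndex = uvRowStride * (y ~/ 2) + uvPixelStride * (x ~/ 2);
      final up = uPlane.bytes[uvIndex];
      final vp = vPlane.bytes[uvIndex];
      // Standard BT.601 YUV -> RGB conversion, clamped to [0, 255].
      final r = (yp + 1.402 * (vp - 128)).clamp(0, 255).toInt();
      final g = (yp - 0.344136 * (up - 128) - 0.714136 * (vp - 128)).clamp(0, 255).toInt();
      final b = (yp + 1.772 * (up - 128)).clamp(0, 255).toInt();
      out.setPixelRgba(x, y, r, g, b);
    }
  }
  return out;
}

The result can then go through copyResize, imageToByteListUint8 and runModelOnBinary as in the earlier comments.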

joknjokn avatar Jun 25 '20 14:06 joknjokn

I solved this by using tflite_flutter and tflite_flutter_helper instead of this library. Here is a gist in case anyone is running into this as well: https://gist.github.com/Bryanx/b839e3ceea0f9647ffbc5f90e3091742.
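
For anyone who prefers not to dig through the gist, a minimal sketch of the tflite_flutter route for a uint8 model could look like the following (the asset path handling and the 1001-class output shape are assumptions; check your own model in Netron and the tflite_flutter docs for your version):

import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

// Illustrative only: run a uint8-quantized classifier with tflite_flutter.
Future<List<int>> runQuantized(img.Image image) async {
  final interpreter = await Interpreter.fromAsset('mobilenet_v1_1.0_224_quant.tflite');

  // Build a [1, 224, 224, 3] nested list of raw uint8 RGB values.
  final resized = img.copyResize(image, width: 224, height: 224);
  final input = [
    List.generate(224, (y) => List.generate(224, (x) {
          final p = resized.getPixel(x, y);
          return [img.getRed(p), img.getGreen(p), img.getBlue(p)];
        }))
  ];

  // Quantized MobileNet v1 outputs 1001 uint8 class scores.
  final output = [List.filled(1001, 0)];
  interpreter.run(input, output);
  interpreter.close();
  return output[0];
}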

Bryanx avatar Sep 01 '20 17:09 Bryanx

@Bryanx Do you think tflite_flutter_helper alone would solve the issue? I.e. is it compatible with this library?

tobiascornille avatar Mar 29 '21 14:03 tobiascornille

use this code to train your custom model:

import os

import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

# To unzip an archive (in a notebook cell):
# !unzip path-of-zip-file -d path-to-save-extract-file

data = DataLoader.from_folder('path-of-custom-folder')
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

model = image_classifier.create(train_data, validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)

# Export a float16-quantized .tflite model and the label file.
config = QuantizationConfig.for_float16()
model.export(export_dir='path-to-save-model', quantization_config=config, export_format=ExportFormat.TFLITE)
model.export(export_dir='path-to-save-label', quantization_config=config, export_format=ExportFormat.LABEL)
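
One note for the Flutter side: QuantizationConfig.for_float16() produces a model whose input and output tensors are still float32 (only the weights are stored as float16), so it should load and run like a regular float model. A minimal sketch, with placeholder asset names and normalization values (use whatever normalization your training pipeline expects):

import 'package:tflite/tflite.dart';

// Illustrative only: loading the exported float16-quantized model.
Future<dynamic> classify(String imagePath) async {
  await Tflite.loadModel(
    model: 'assets/model.tflite',
    labels: 'assets/labels.txt',
  );
  return Tflite.runModelOnImage(
    path: imagePath,
    imageMean: 127.5, // placeholder normalization
    imageStd: 127.5,
    numResults: 5,
  );
}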

zoraiz-WOL avatar Nov 17 '21 07:11 zoraiz-WOL

use this code to train your custom model (quoting the training script from the previous comment)

You just saved my life. Thank You !!!!!

aboubacryba avatar Nov 17 '21 17:11 aboubacryba