Android-TensorFlow-Lite-Example

Cannot convert between a TensorFlowLite buffer with XXX bytes and a ByteBuffer with XXX bytes

Open moster67 opened this issue 7 years ago • 22 comments

Hi, many thanks for the article and the sample-code.

It works fine with the model mentioned in your project. However, I trained my own model (using the tensorflow-for-poets 1 and 2 tutorials) but I get an error using my model with your code:

"Cannot convert between a TensorFlowLite buffer with XXX bytes and a ByteBuffer with XXX bytes."

This happens when running the following statement: interpreter.run(byteBuffer, result);

My model works fine with the sample project in the tensorflow-for-poets-2 tutorial.

Just wondering what can be the issue. Any ideas?

Thanks.

moster67 avatar Aug 28 '18 20:08 moster67

Same problem here. I tried all the steps and guidelines in these links, but still nothing seems to work:
1. https://github.com/tensorflow/tensorflow/issues/14719#issuecomment-348991399
2. https://github.com/tensorflow/tensorflow/issues/14719

naris96 avatar Sep 02 '18 04:09 naris96

I resolved it by using this modified class: https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

moster67 avatar Sep 03 '18 11:09 moster67

@moster67 It still isn't working. I have a similar code approach as well, and I trained my own model. It's funny that the same number of layers in a CNN architecture gives different results with different image optimizations.

naris96 avatar Sep 04 '18 12:09 naris96

I resolved it by using this modified class: https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

Could you elaborate on your solution? I'm stuck on the same problem, and I don't see what exactly we are replacing here.

Never mind, I solved it by noting that with floats we are using 4 times as many bytes.

EXJUSTICE avatar Sep 13 '18 02:09 EXJUSTICE

@EXJUSTICE you are right. Changing the value type from int to float fixed it for me.

divSivasankaran avatar Nov 08 '18 08:11 divSivasankaran

@div1090 Will you please elaborate on specifically which values are to be changed?

Tanv33rA avatar Nov 13 '18 17:11 Tanv33rA

@Tanv33rA In the function convertBitmapToByteBuffer:

  • Remember that we would need 4 bytes for each value if our datatype is float. Replace
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE);
    with
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE);

  • Also, there's a separate function to add float values to the byte buffer. Replace byteBuffer.put with byteBuffer.putFloat.

This ought to fix the problem!
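Put together, a minimal sketch of what the float version of convertBitmapToByteBuffer might look like. The 224 input size and the 127.5 normalization constants below are assumptions; use whatever your model was trained with, and make sure the bitmap is already scaled to inputSize x inputSize:

    import android.graphics.Bitmap;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Sketch of convertBitmapToByteBuffer for a float (non-quantized) model.
    private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
        final int BATCH_SIZE = 1;
        final int PIXEL_SIZE = 3;     // RGB channels
        final int inputSize = 224;    // must match the model's input tensor

        // 4 bytes per float value, hence the leading factor of 4.
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(
                4 * BATCH_SIZE * inputSize * inputSize * PIXEL_SIZE);
        byteBuffer.order(ByteOrder.nativeOrder());

        int[] intValues = new int[inputSize * inputSize];
        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0,
                bitmap.getWidth(), bitmap.getHeight());

        int pixel = 0;
        for (int i = 0; i < inputSize; ++i) {
            for (int j = 0; j < inputSize; ++j) {
                final int val = intValues[pixel++];
                // putFloat instead of put: one normalized float per channel.
                byteBuffer.putFloat((((val >> 16) & 0xFF) - 127.5f) / 127.5f);
                byteBuffer.putFloat((((val >> 8) & 0xFF) - 127.5f) / 127.5f);
                byteBuffer.putFloat(((val & 0xFF) - 127.5f) / 127.5f);
            }
        }
        return byteBuffer;
    }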

divSivasankaran avatar Nov 15 '18 06:11 divSivasankaran

My recent PR has added support for float models. Simply change the variable QUANT to false in TensorFlowImageClassifier.java, along with changing the model and labels file names.
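Roughly speaking, such a flag just switches between the quantized and float code paths. A simplified sketch of the idea; the exact field and method names (labelList, interpreter, etc.) are assumed from that class and may differ:

    // Simplified sketch: the output container must match the model type selected by QUANT.
    private static final boolean QUANT = false;   // false = float model

    private float[] runInference(ByteBuffer byteBuffer) {
        if (QUANT) {
            // Quantized model: uint8 scores in [0, 255].
            byte[][] result = new byte[1][labelList.size()];
            interpreter.run(byteBuffer, result);
            float[] probs = new float[labelList.size()];
            for (int i = 0; i < probs.length; i++) {
                probs[i] = (result[0][i] & 0xFF) / 255.0f;
            }
            return probs;
        } else {
            // Float model: probabilities come out directly as floats.
            float[][] result = new float[1][labelList.size()];
            interpreter.run(byteBuffer, result);
            return result[0];
        }
    }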

soum-io avatar Feb 05 '19 16:02 soum-io

Check which values the getImageSizeX() and getImageSizeY() methods return in your ImageClassifier class, and compare them with the MobileNet model you are using as the pre-trained model (https://www.tensorflow.org/lite/models).

For example, for the model Mobilenet_V1_0.25_192 the following constant values should be set:

    static final int DIM_IMG_SIZE_X = 192;
    static final int DIM_IMG_SIZE_Y = 192;

so that getImageSizeX() = 192 and getImageSizeY() = 192.
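As a quick sanity check, the ByteBuffer you allocate has to match these dimensions exactly. The factor of 4 below assumes a float model; a quantized model uses 1 byte per channel:

    // Expected input buffer sizes for a 192 x 192 RGB input (Mobilenet_V1_0.25_192):
    int floatModelBytes     = 1 * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * 3 * 4;  // = 442368
    int quantizedModelBytes = 1 * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * 3 * 1;  // = 110592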

SergeyKarleev avatar Feb 06 '19 11:02 SergeyKarleev

Thank you so much @soum-io, it now works perfectly fine.

krishnachourasia avatar Feb 07 '19 23:02 krishnachourasia

I was facing the exact same issue with values: "cannot convert between a tensorflow lite buffer with 602112 bytes and a bytebuffer with 150528 bytes"

The problem was that I had converted my MobileNet model with the Python API (for TF Lite conversion), but when I used the command-line API for the same conversion, it worked.

Hope it helps.

The command line API is available at:

https://www.tensorflow.org/lite/convert/cmdline_examples
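As a side note on the two byte counts in that error message: they differ by exactly a factor of four, which is the float-versus-quantized difference discussed earlier in this thread. One plausible reading, assuming a 224 x 224 RGB input in NHWC layout:

    // Decoding the byte counts from the error message (assumption: 224x224 RGB, NHWC).
    int modelBytes = 1 * 224 * 224 * 3 * 4;   // = 602112 -> the model expects float32 input
    int appBytes   = 1 * 224 * 224 * 3 * 1;   // = 150528 -> the app is sending uint8 (quantized) data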

saurabhg476 avatar Aug 08 '19 10:08 saurabhg476

I also downloaded the Tensorflow for Poets 2 github repository. While I was trying to place my graph and labels into the tflite-app, I got this error.

I resolved the issue by following @SergeyKarleev's answer. For me, changing the static final int DIM_IMG_SIZE_X and static final int DIM_IMG_SIZE_Y values to 299 was the answer. These values can be found in the ImageClassifier.java file in the android folder.

I guess that not setting the IMG_SIZE correctly while following the tutorial is what causes the issue.

I don't know if this issue is still unresolved, but I thought I'd share my fix anyway.
Hope it helps.

dvbeelen avatar Aug 29 '19 12:08 dvbeelen

I created a custom model using the Google Vision API, and the expected input size for that model was 512x512, as opposed to the 300x300 of the SSD MobileNet model that ships with the TFLite example. I changed private static final int TF_OD_API_INPUT_SIZE = 512; in DetectorActivity.java and also updated private static final int NUM_DETECTIONS = 20; in TFLiteObjectDetectionAPIModel.java. These two changes solved it for me. Hope it helps someone in the future.
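In other words, the two constants that had to change were the following (512 and 20 are the values from this particular model; the defaults noted in the comments are the ones shipped with the example, and your own model may need different numbers):

    // DetectorActivity.java -- input resolution expected by the custom model.
    private static final int TF_OD_API_INPUT_SIZE = 512;   // default: 300 for the stock SSD MobileNet

    // TFLiteObjectDetectionAPIModel.java -- number of detections the model emits per image.
    private static final int NUM_DETECTIONS = 20;          // default: 10 in the example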

NightFury13 avatar Oct 01 '19 14:10 NightFury13

I used this code base and found the same issue when I downloaded my trained model from Azure Custom Vision and used it instead of the project's default model. I would like to say first that I'm a newbie to ML and TensorFlow.

URL https://github.com/xenogew/examples/tree/master/lite/examples/object_detection/android

Yes, I know this is object detection, not image classification like this issue, but this thread was the best match I could find when searching for the error.

Here is the error message: Cannot convert between a TensorFlowLite buffer with 2076672 bytes and a Java Buffer with 270000 bytes.

I tried to read and follow along with all your comments and found this row in the code: d.imgData = ByteBuffer.allocateDirect(1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel);

The value that formula produces matches the second number in the error message. I tried setting it to exactly the first number, but then I got a new error: Cannot copy between a TensorFlowLite tensor with shape [1, 13, 13, 55] and a Java object with shape [1, 10, 4].
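(For reference, those byte counts can be decoded under the usual NHWC assumption: 2076672 corresponds to a 416 x 416 float input, while 270000 is the example's 300 x 300 quantized input, so both the input size and TF_OD_API_IS_QUANTIZED likely need changing. The second error, shape [1, 13, 13, 55] versus [1, 10, 4], suggests the exported model's outputs are not the SSD-style tensors the example's TFLiteObjectDetectionAPIModel expects, so adjusting buffer sizes alone may not be enough.)

    // Decoding the byte counts from the error message (assumption: NHWC layout, RGB input).
    int modelBytes = 1 * 416 * 416 * 3 * 4;   // = 2076672 -> model expects a 416x416 float input
    int appBytes   = 1 * 300 * 300 * 3 * 1;   // = 270000  -> app is sending a 300x300 quantized input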

So, how can I swap in my own model trained in Azure in place of the example's default model and use it without errors? Which code should I change to make the application run?

xenogew avatar Oct 17 '19 07:10 xenogew

THE SAME ERROR :((

The models (both float and quantized) were built with the code at: https://github.com/frogermcs/TFLite-Tester/blob/master/notebooks/Testing_TFLite_model.ipynb

However, the app gives this error: "cannot convert between a tensorflow lite buffer with 602112 bytes and a bytebuffer with 150528 bytes"

PLEASE HELP ME TO SOLVE THIS ISSUE :(((

smone000 avatar Feb 04 '20 13:02 smone000

I tried all the above steps but it's not working:

    java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 3136 bytes and a Java Buffer with 9408 bytes.

I have changed the model, labels and input size... @soum-io @moster67 @SergeyKarleev

harsh204016 avatar Feb 09 '20 18:02 harsh204016

@xenogew I have exactly the same error. Did you find the solution?

SolArabehety avatar Jun 10 '20 18:06 SolArabehety

I also got:

Process: org.tensorflow.lite.examples.detection, PID: 30742
    java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (input_2) with 602112 bytes from a Java Buffer with 1080000 bytes.

I have tried changing private static final boolean TF_OD_API_IS_QUANTIZED from true to false.

The error line:

at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
        at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)

When I try to change to a float like this:

    tfLite.runForMultipleInputsOutputs(new float[][]{new float[]{Float.parseFloat(Arrays.toString(inputArray))}}, outputMap);

I get a new problem: java.lang.NumberFormatException: For input string: "[java.nio.DirectByteBuffer[pos=1080000 lim=1080000 cap=1080000]]"
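The NumberFormatException comes from turning the ByteBuffer into a string and trying to parse it as a float; runForMultipleInputsOutputs expects the input buffers themselves. A minimal sketch of the usual call shape, with variable names assumed from the TFLite object-detection example. Also note that 1080000 = 300 x 300 x 3 x 4 while 602112 = 224 x 224 x 3 x 4, so the model apparently expects a 224 x 224 input and TF_OD_API_INPUT_SIZE likely needs to change as well:

    // Pass the ByteBuffer(s) directly; do not stringify or parse them.
    Object[] inputArray = {imgData};                 // imgData: ByteBuffer sized for the model's input
    Map<Integer, Object> outputMap = new HashMap<>();
    outputMap.put(0, outputLocations);               // e.g. float[1][NUM_DETECTIONS][4]
    outputMap.put(1, outputClasses);                 // float[1][NUM_DETECTIONS]
    outputMap.put(2, outputScores);                  // float[1][NUM_DETECTIONS]
    outputMap.put(3, numDetections);                 // float[1]
    tfLite.runForMultipleInputsOutputs(inputArray, outputMap);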

yogithesymbian avatar Aug 24 '20 07:08 yogithesymbian

Hello guys, I also had a similar problem yesterday. I would like to mention the solution that worked for me.

It seems like TFLite only supports exactly square bitmap inputs: a 256 x 256 input works for detection, while 256 x 255 does not and throws this exception.

In my case the maximum supported size was 257 x 257, so 257 should be the maximum width and height for any bitmap input.

Here is the sample code to crop and resize the bitmap:

    private var MODEL_HEIGHT = 257
    private var MODEL_WIDTH = 257

Crop the bitmap:

    val croppedBitmap = cropBitmap(bitmap)

Create a scaled version of the bitmap for model input:

    val scaledBitmap = Bitmap.createScaledBitmap(croppedBitmap, MODEL_WIDTH, MODEL_HEIGHT, true)

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L578

Crop the bitmap to maintain the aspect ratio of the model input:
    private fun cropBitmap(bitmap: Bitmap): Bitmap {
      val bitmapRatio = bitmap.height.toFloat() / bitmap.width
      val modelInputRatio = MODEL_HEIGHT.toFloat() / MODEL_WIDTH
      var croppedBitmap = bitmap

      // Acceptable difference between the modelInputRatio and bitmapRatio to skip cropping.
      val maxDifference = 1e-5

      // Checks if the bitmap has similar aspect ratio as the required model input.
      when {
        abs(modelInputRatio - bitmapRatio) < maxDifference -> return croppedBitmap
        modelInputRatio < bitmapRatio -> {
          // New image is taller so we are height constrained.
          val cropHeight = bitmap.height - (bitmap.width.toFloat() / modelInputRatio)
          croppedBitmap = Bitmap.createBitmap(
            bitmap,
            0,
            (cropHeight / 2).toInt(),
            bitmap.width,
            (bitmap.height - cropHeight).toInt()
          )
        }
        else -> {
          val cropWidth = bitmap.width - (bitmap.height.toFloat() * modelInputRatio)
          croppedBitmap = Bitmap.createBitmap(
            bitmap,
            (cropWidth / 2).toInt(),
            0,
            (bitmap.width - cropWidth).toInt(),
            bitmap.height
          )
        }
      }
      return croppedBitmap
    }

https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L451

Hope it helps. Thanks and regards, Pankaj

pkpdeveloper avatar Aug 29 '20 04:08 pkpdeveloper

@pkpdeveloper Can you give me a source for the maximum supported size? Thanks

krn-sharma avatar Oct 04 '20 20:10 krn-sharma

This worked for me. I did not dig too deep into why it works, but I got back a prediction from my model.

Edit the method in this file to the one shown in my screenshot (the screenshot has not survived the copy; ignore the stray "D" typo in it, which I must have typed while capturing the screen):

https://github.com/COSE471/COSE471_android/blob/master/app/src/main/java/com/example/android/alarmapp/tflite/TensorFlowImageClassifier.java

  • I trained my model with TensorFlow
  • Converted it in Python with tf.lite.TFLiteConverter.from_saved_model
  • Copied the tflite file over to the android assets folder
  • Created a file called labels.txt (it holds "cat" and "dog") and put that in the assets folder
  • Found an image of a cat online and converted it to 128 x 128, because this is what my model uses
  • Copied that file into my drawable folder
  • Used the code you see above (a cleaned-up sketch of the same flow follows this list)
  • Worked like a charm :-)

Hopefully, that helps someone
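Since the screenshots above did not survive the copy, here is a minimal sketch of the same flow: memory-map the .tflite file from assets and run it on a 128 x 128 bitmap. The 128 x 128 size and labels.txt come from the list above; the method and variable names are illustrative, and a float model is assumed.

    import android.content.res.AssetFileDescriptor;
    import android.content.res.AssetManager;
    import android.graphics.Bitmap;
    import org.tensorflow.lite.Interpreter;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.List;

    // Memory-map the model file stored in the assets folder.
    MappedByteBuffer loadModelFile(AssetManager assets, String modelFilename) throws IOException {
        AssetFileDescriptor fd = assets.openFd(modelFilename);
        FileInputStream inputStream = new FileInputStream(fd.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY,
                fd.getStartOffset(), fd.getDeclaredLength());
    }

    // Scale the bitmap to the model's input size and run one inference.
    float[] classify(Interpreter interpreter, Bitmap bitmap, List<String> labels) {
        Bitmap resized = Bitmap.createScaledBitmap(bitmap, 128, 128, true);
        ByteBuffer input = convertBitmapToByteBuffer(resized);   // float version sketched earlier
        float[][] output = new float[1][labels.size()];          // one probability per label
        interpreter.run(input, output);
        return output[0];
    }

The interpreter itself can then be created with new Interpreter(loadModelFile(getAssets(), "model.tflite")), where the file name is whatever you copied into the assets folder.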

pk-development avatar Oct 24 '20 21:10 pk-development

Resizing the bitmap (and therefore the buffer) works for me:

Bitmap resized = Bitmap.createScaledBitmap(bitmap, 300, 300, true);

Here 300, 300 is the input size expected by the model.

kartikeysaran avatar Sep 11 '21 07:09 kartikeysaran