
IndexError: invalid index to scalar variable.

Open Siddhijain16 opened this issue 4 years ago • 23 comments

Hi, I used eager_few_shot_od_training_tflite.ipynb to train a model and then generate a (downloadable) TensorFlow Lite model for on-device inference, using the same predefined dataset files (ducky) as described in the notebook. I am able to generate the model.tflite file, but during testing it gives me the error IndexError: invalid index to scalar variable.

[screenshot]

I tried to print the values returned by the detect function, but they are all 0.0. Please suggest what I should do.

[screenshot]

Siddhijain16 avatar Feb 01 '21 07:02 Siddhijain16

@Siddhijain16 It seems you are indexing into the testing data, which is a scalar value. Please have a look at this issue for more information.

Hope it helps you. Thanks!
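
For illustration, here is a minimal NumPy repro of that error, with made-up values:

import numpy as np

score = np.float32(0.0)  # a NumPy scalar, not an array
score[0]                 # IndexError: invalid index to scalar variable.

scores = np.array([[0.9, 0.1]])  # a [1, N] array
scores[0]                        # works: array([0.9, 0.1])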

saikumarchalla avatar Feb 01 '21 09:02 saikumarchalla

> @Siddhijain16 It seems you are indexing into the testing data, which is a scalar value. Please have a look at this issue for more information.
>
> Hope it helps you. Thanks!

Hi @saikumarchalla, I am using exactly the same file (without any modification), https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tflite.ipynb, to train a model on a novel class (ducky), but it raises an IndexError and I don't understand why. Here is my code:

test_image_dir = 'models/research/object_detection/test_images/ducky/test/'
test_images_np = []
for i in range(1, 50):
  image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')
  # print(image_path)
  test_images_np.append(np.expand_dims(
      load_image_into_numpy_array(image_path), axis=0))

# Again, uncomment this decorator if you want to run inference eagerly
def detect(interpreter, input_tensor):
  """Run detection on an input image.

  Args:
    interpreter: tf.lite.Interpreter
    input_tensor: A [1, height, width, 3] Tensor of type tf.float32. Note that
      height and width can be anything since the image will be immediately
      resized according to the needs of the model within this function.

  Returns:
    A dict containing 3 Tensors (detection_boxes, detection_classes, and
      detection_scores).
  """
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # We use the original model for pre-processing, since the TFLite model doesn't
  # include pre-processing.
  preprocessed_image, shapes = detection_model.preprocess(input_tensor)
  interpreter.set_tensor(input_details[0]['index'], preprocessed_image.numpy())

  interpreter.invoke()

  boxes = interpreter.get_tensor(output_details[0]['index'])
  classes = interpreter.get_tensor(output_details[1]['index'])
  scores = interpreter.get_tensor(output_details[2]['index'])
  return boxes, classes, scores

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="tflite/model.tflite")
interpreter.allocate_tensors()

# Note that the first frame will trigger tracing of the tf.function, which will
# take some time, after which inference should be fast.
label_id_offset = 1
for i in range(len(test_images_np)):
  input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
  boxes, classes, scores = detect(interpreter, input_tensor)

  plot_detections(
      test_images_np[i][0],
      boxes[0],
      classes[0].astype(np.uint32) + label_id_offset,
      scores[0],
      category_index,
      figsize=(15, 20),
      image_name="gif_frame_" + ('%02d' % i) + ".jpg")

[screenshot]

Siddhijain16 avatar Feb 01 '21 11:02 Siddhijain16

Hey @Siddhijain16, how many classes have you trained on? Also, it looks like your TFLite model hasn't come out right. Could you download it and share it, if possible (no need to fully train)?

srjoglekar246 avatar Feb 03 '21 17:02 srjoglekar246

Hi @srjoglekar246, for a single class (duck). Here is my model: model.zip

I want to train it on my own custom classes, but before that I tried training it on the same class (ducky) given in that notebook.

Siddhijain16 avatar Feb 03 '21 17:02 Siddhijain16

@Siddhijain16 What version of TF are you running? We added this support recently, so maybe try the latest nightly (or TF 2.4 or later)?

srjoglekar246 avatar Feb 03 '21 17:02 srjoglekar246

@srjoglekar246 I am using TF nightly 2.5.0-dev20210203.

Siddhijain16 avatar Feb 03 '21 17:02 Siddhijain16

That is an empty model, hence it always returns 0. If you used a TF version > 2.3 while generating the model (was it in the Colab environment?), then the model should be larger than 1.5 KB. You need TF nightly while generating/converting the model too; then things should work.
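
As a quick sanity check (a sketch; the model path is the one used in the notebook and may differ in your setup):

import os
import tensorflow as tf

print(tf.__version__)  # should be a 2.4+/nightly build for this conversion path
print(os.path.getsize('tflite/model.tflite'))  # an 'empty' model is only ~1.5 KB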

srjoglekar246 avatar Feb 03 '21 18:02 srjoglekar246

Yes, it is in the Colab environment and I am using TF nightly 2.5.0-dev20210203, which is > 2.3, so why is it generating an empty model?

Siddhijain16 avatar Feb 03 '21 18:02 Siddhijain16

Hi @srjoglekar246, I am now able to generate a tflite model file of size 11.5 MB, but it's not working in an app. Any suggestions?

Siddhijain16 avatar Feb 04 '21 18:02 Siddhijain16

What is the error? How do you use it in the app?

srjoglekar246 avatar Feb 04 '21 18:02 srjoglekar246

@srjoglekar246 I made a Flutter app using some boilerplate code, and in that code I pass my tflite model file. Have you also made an app? Is it working fine in your app?

Siddhijain16 avatar Feb 04 '21 18:02 Siddhijain16

Aah no, I don't have a lot of experience with apps, I just work on the TensorFlow Lite team :-) Does the model now work in the Colab?

For the app, take a look at this code from our Android detection example.

srjoglekar246 avatar Feb 04 '21 19:02 srjoglekar246

> Aah no, I don't have a lot of experience with apps, I just work on the TensorFlow Lite team :-) Does the model now work in the Colab?
>
> For the app, take a look at this code from our Android detection example.

Yes, it's working in Colab. Thanks, I will go through those links.

Siddhijain16 avatar Feb 05 '21 13:02 Siddhijain16

Hi @srjoglekar246, I have one doubt: the size of my actual model file (saved_model.pb) is 11 MB, but when I converted it into tflite, its size became 50 MB. It was supposed to be compressed, yet its size increased. Why is that? Any suggestions? I used this piece of code to convert my model into tflite:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('/content/drive/MyDrive/DED_ped/new_tflite_final/saved_model/')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)

Siddhijain16 avatar Feb 05 '21 15:02 Siddhijain16

@Siddhijain16 TFLite stores the model's weights along with the graph, so that no other file is needed at runtime. The SavedModel you refer to might not.

If you want a smaller model, feel free to look at the other SSDs in the TF2 Detection Zoo (with smaller input dimensions). Also, which model are you using right now? SSD ResNet? 50 MB after quantization (tf.lite.Optimize.DEFAULT) is a lot. What is the size of the model if you remove the converter.optimizations line?
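
For reference, here is a minimal sketch comparing the two conversion modes (saved_model_dir is a placeholder for your SavedModel path):

import tensorflow as tf

saved_model_dir = '/path/to/saved_model/'  # placeholder

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
float_model = converter.convert()  # weights kept as float32

converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
quant_model = converter.convert()  # most weights stored as int8

# Dynamic-range quantization should shrink the model roughly 4x.
print(len(float_model) / 1e6, 'MB float vs', len(quant_model) / 1e6, 'MB quantized')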

srjoglekar246 avatar Feb 08 '21 17:02 srjoglekar246

@srjoglekar246 Yes, I'm using ssd_resnet50_v1 and I did dynamic range quantization. When I remove the converter.optimizations line, it generates a tflite model file of 193.26 MB.

Siddhijain16 avatar Feb 08 '21 19:02 Siddhijain16

I am new to TensorFlow and TFLite in general, and I got the same problem as in this issue. My script is running on Google Colab with TensorFlow 2.7.0-dev20210914.

Can anyone assist?

[screenshot]

dpchami avatar Sep 15 '21 07:09 dpchami

Sorry for dropping the ball here, looks like this got lost in my email.

@dpchami can you paste the commands/code you are running when you got this error? I can take a look & reproduce it for debugging.

srjoglekar246 avatar Sep 15 '21 16:09 srjoglekar246

> Sorry for dropping the ball here, looks like this got lost in my email.
>
> @dpchami can you paste the commands/code you are running when you got this error? I can take a look & reproduce it for debugging.

Thanks for the reply. Here is the code that leads to my issue, @srjoglekar246:

[screenshot]

[screenshot]

dpchami avatar Sep 16 '21 07:09 dpchami

@dpchami Can you print interpreter.get_output_details()? It is possible that, due to recent changes in the converter, the order of the outputs has changed, so the snippet of code that obtains the boxes, classes & scores tensors might be wrong (in essence, you might be interpreting the boxes or classes tensor as scores, or something like that).
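
For example, a quick way to inspect the ordering instead of relying on hard-coded positions (a sketch, assuming the interpreter is already created as in the notebook):

output_details = interpreter.get_output_details()
for d in output_details:
    print(d['index'], d['name'], d['shape'])
# For a standard SSD export: boxes have shape [1, N, 4], scores and classes
# have shape [1, N], and num_detections has shape [1].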

srjoglekar246 avatar Sep 16 '21 16:09 srjoglekar246

@dpchami Here is the print of interpreter.get_output_details():

[{'dtype': numpy.float32, 'index': 335, 'name': 'StatefulPartitionedCall:1', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'sparsity_parameters': {}},
 {'dtype': numpy.float32, 'index': 333, 'name': 'StatefulPartitionedCall:3', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([ 1, 10, 4], dtype=int32), 'shape_signature': array([ 1, 10, 4], dtype=int32), 'sparsity_parameters': {}},
 {'dtype': numpy.float32, 'index': 336, 'name': 'StatefulPartitionedCall:0', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'sparsity_parameters': {}},
 {'dtype': numpy.float32, 'index': 334, 'name': 'StatefulPartitionedCall:2', 'quantization': (0.0, 0), 'quantization_parameters': {'quantized_dimension': 0, 'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32)}, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'sparsity_parameters': {}}]

I have the same problem.

dimka11 avatar Feb 09 '22 11:02 dimka11

It is because the output ordering of detection TFLite models changed. I followed the comment below to solve the issue: https://github.com/tensorflow/tensorflow/issues/44481#issuecomment-974325834

boxes = interpreter.get_tensor(output_details[1]['index'])
classes = interpreter.get_tensor(output_details[3]['index'])
scores = interpreter.get_tensor(output_details[0]['index'])
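
A more version-robust alternative is the signature runner, which returns outputs keyed by name instead of position. This is a sketch and assumes the model was converted from a SavedModel with its signature preserved (TF 2.5+); the input keyword depends on your model's signature:

interpreter = tf.lite.Interpreter(model_path="model.tflite")
print(interpreter.get_signature_list())      # shows input/output names per signature
runner = interpreter.get_signature_runner()  # uses the sole/default signature

# 'input_tensor' is a placeholder; use the input name reported above.
outputs = runner(input_tensor=preprocessed_image.numpy())
print(outputs.keys())  # detection outputs keyed by name, independent of order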

twinssbc avatar Mar 13 '22 13:03 twinssbc

> It is because the output ordering of detection TFLite models changed. I followed the comment below to solve the issue: tensorflow/tensorflow#44481 (comment)
>
> boxes = interpreter.get_tensor(output_details[1]['index'])
> classes = interpreter.get_tensor(output_details[3]['index'])
> scores = interpreter.get_tensor(output_details[0]['index'])

Worked, thanks. This solved the problem for the rubber ducky tflite modification.

ysumiit005 avatar Oct 04 '23 05:10 ysumiit005