onnx2tflite
Extract score and class_id from converted model
Hi, I'm using this package to convert this ONNX model to TFLite (model), but I'm not able to interpret the result and extract the score and class_id.
Command used to convert the model:
python converter.py --weights "./model.onnx" --outpath "./save_path"
Full code:
#!/usr/bin/env python
import traceback

import cv2
import numpy as np
import tensorflow as tf


def run_model():
    try:
        interpreter = tf.lite.Interpreter(model_path="model.tflite")
        interpreter.allocate_tensors()
        image = cv2.imread('image.PNG')
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        input_shape = input_details[0]['shape']
        image = cv2.resize(image, (input_shape[1], input_shape[2]))
        test_image = np.expand_dims(image, axis=0)
        input_data_type = input_details[0]['dtype']
        padded_image = np.ascontiguousarray(test_image, dtype=np.float32)
        input_tensor_index = input_details[0]['index']
        interpreter.set_tensor(input_tensor_index, padded_image.astype(input_data_type))
        interpreter.invoke()
        tflite_model_predictions1 = interpreter.get_tensor(output_details[0]['index'])
        print(tflite_model_predictions1)
    except Exception:
        traceback.print_exc()
        return "There was an error processing the file"


if __name__ == "__main__":
    run_model()
The output of interpreter.get_tensor is something like this:
index 0 : [1.4157819747924805, 2.1544673442840576, ...... ] : size = 7098
index 1 : [1.805556058883667, 1.946408987045288, ........ ] : size = 7098
index 2 : [1.0974576473236084, 1.980888843536377, ....... ] : size = 7098
...
index 20 : [0.0024664357770234, 0.007526415400207, ...... ] : size = 7098
Question:
Can someone help me extract the class_id and score from the tflite_model_predictions output? NOTE: in the original repo (ONNX model), for the given example image the score is 0.89389664 and the class_id is 7.
Thanks for helping !!
Hello, your code is missing preprocessing and post-processing. In addition, the output of TFLite is channel-last, while ONNX and PyTorch are channel-first.
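For reference, a minimal preprocessing sketch, assuming the model expects RGB input scaled to [0, 1] (the exact normalization, mean/std, and any letterboxing must be copied from the original ONNX model's preprocessing code):

image = cv2.imread('image.PNG')                               # OpenCV loads images as BGR
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)                # most models expect RGB
image = cv2.resize(image, (input_shape[2], input_shape[1]))   # TFLite input is (1, H, W, C), resize takes (W, H)
image = image.astype(np.float32) / 255.0                      # assumed scaling; check the ONNX pipeline
test_image = np.expand_dims(image, axis=0)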
Thanks @MPolaris for your quick reply. I'm a beginner in machine learning :( . I have tried to use the same preprocessing and post-processing as the existing ONNX model, but was not able to get it to work. It would be appreciated if someone could help with the correct code to make it work with the TFLite model.
You'd better carefully check the dimensions of the output. For example, the output's shape is (B, C, H, W) in ONNX and (B, H, W, C) in TFLite, so you should do this:
tflite_output = interpreter.get_tensor(index)
onnx_format_output = tflite_output.transpose(0, 3, 1, 2)
I think you mean something like this
tflite_model_predictions1 = self.interpreter.get_tensor(self.output_details[0]['index'])
onnx_format_output = np.transpose(tflite_model_predictions1, (0, 3, 1, 2))
but this gives the error ValueError: axes don't match array
I don't know your output dimensions, I just gave an example. If it is a 5-dimensional tensor, then it should be:
onnx_format_output = np.transpose(tflite_model_predictions1, (0, 4, 1, 2, 3))
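One way to find out is to print the shapes the interpreter reports before transposing. A small sketch (the 4-D transpose below is only an assumption and must be adjusted if the output has a different rank):

out_details = interpreter.get_output_details()
for d in out_details:
    print(d['index'], d['shape'])                             # rank and size of each output tensor

out = interpreter.get_tensor(out_details[0]['index'])
print(out.ndim, out.shape)
if out.ndim == 4:                                             # only a 4-D NHWC tensor needs (0, 3, 1, 2)
    onnx_format_output = np.transpose(out, (0, 3, 1, 2))      # NHWC -> NCHW, matching the ONNX layout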
Thanks again @MPolaris for your quick reply. Same error, and I don't know how many dimensions it has as I'm a beginner :-( , but when I test the original model it works fine. If getting the full code working with the TFLite model takes time, I'm open to freelance consulting; you can contact me at [email protected]. Thanks again for helping!