tflite-support
vision.ImageSegmenter fails when image array is not C-contiguous
When using NumPy in Python, the memory ordering of array elements is handled under the hood, and usually the user never has to worry about it. For instance, when I read an image file from disk and parse it as a NumPy array with image = np.asarray(Image.open('image.jpg'))
, it is loaded in C-contiguous format. If I then crop the image by slicing (crop = image[20:-20, 30:-3]
), the resulting array is no longer C-contiguous.
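The layout change described above can be verified directly from the array's flags (a minimal sketch using a synthetic image array in place of a decoded JPEG):

```python
import numpy as np

# A freshly created (or freshly decoded) array is C-contiguous by default.
image = np.zeros((100, 100, 3), dtype=np.uint8)
assert image.flags['C_CONTIGUOUS']

# Slicing returns a *view* with non-default strides, so the view is
# no longer C-contiguous -- no data is copied.
crop = image[20:-20, 30:-3]
assert not crop.flags['C_CONTIGUOUS']

# np.ascontiguousarray copies the data back into C order when needed.
fixed = np.ascontiguousarray(crop)
assert fixed.flags['C_CONTIGUOUS']
```

Because the slice is only a view, any consumer that reads the underlying buffer as if it were densely packed in C order will see the wrong pixels.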
If I run segmentation inference with the vision.ImageSegmenter.segment()
method on a C-contiguous array, it works fine. If the input image array is not C-contiguous, the model happily returns garbage.
Versions:
- numpy: 1.23.1
- python: 3.9.10
- tflite-support: 0.4.2
- tensorflow: 2.8.3
Suggested fix: add the following to the inference method:

```python
if not image_array.data.c_contiguous:
    image_array = np.ascontiguousarray(image_array)
```
Thanks for the suggestions!
We'll add this update to MediaPipe Tasks, which will have a preview release soon. Until then, please work around this issue by passing a C-contiguous array to the Task Library API.
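The suggested workaround can be wrapped in a small helper applied to every array before it reaches the Task Library. This is a sketch, not part of the tflite-support API; the helper name is made up, and the segmenter call it would guard is the one from the issue:

```python
import numpy as np

def ensure_c_contiguous(image_array: np.ndarray) -> np.ndarray:
    """Return image_array unchanged if it is already C-contiguous,
    otherwise a C-ordered copy of it.

    Pass the result (rather than a raw slice) to Task Library methods
    such as vision.ImageSegmenter.segment().
    """
    if not image_array.flags['C_CONTIGUOUS']:
        image_array = np.ascontiguousarray(image_array)
    return image_array

# Example: a cropped view is not C-contiguous, but the helper's
# output always is.
crop = np.zeros((100, 100, 3), dtype=np.uint8)[20:-20, 30:-3]
safe = ensure_c_contiguous(crop)
assert safe.flags['C_CONTIGUOUS']
```

Note that the helper only copies when it has to, so already-contiguous inputs pass through with no extra cost.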