
How to use yolo8x

Open AlexanderKozhevin opened this issue 2 years ago • 4 comments

I converted YOLOv8x to TFLite, but it throws the following error:

Cannot copy to a TensorFlowLite tensor (serving_default_images:0) with 4915200 bytes from a Java Buffer with 23040 bytes.

AlexanderKozhevin avatar Aug 08 '23 03:08 AlexanderKozhevin
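The 4,915,200 bytes in the error message match the model's declared input exactly: a float32 tensor of shape [1, 3, 640, 640]. The 23,040-byte buffer the plugin prepared corresponds to a much smaller tensor, so the image preprocessing and the model's expected input shape disagree. A quick sanity check of the arithmetic (the helper name is my own, for illustration):

```python
def tensor_bytes(shape, bytes_per_elem=4):
    """Bytes needed for a tensor of the given shape (float32 = 4 bytes/element)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem

# The tensor the interpreter is asking for:
print(tensor_bytes([1, 3, 640, 640]))  # 4915200 -- matches the error message
```

So the interpreter is not the problem; the buffer being handed to it is simply sized for a different input shape.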

I checked the model from the example and it has the following parameters:

Input Details: [{'name': 'inputs_0', 'index': 0, 'shape': array([  1, 224, 224,   3], dtype=int32), 'shape_signature': array([  1, 224, 224,   3], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

Output Details: [{'name': 'Identity', 'index': 537, 'shape': array([   1,   84, 1029], dtype=int32), 'shape_signature': array([   1,   84, 1029], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

But my model's parameters are different:

Input Details: [{'name': 'serving_default_images:0', 'index': 0, 'shape': array([  1,   3, 640, 640], dtype=int32), 'shape_signature': array([  1,   3, 640, 640], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

Output Details: [{'name': 'PartitionedCall:0', 'index': 572, 'shape': array([   1,   84, 8400], dtype=int32), 'shape_signature': array([   1,   84, 8400], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

All YOLO models have an input size of 640 px. How could it be 224?

AlexanderKozhevin avatar Aug 08 '23 04:08 AlexanderKozhevin
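The output shapes actually answer this. Assuming the standard YOLOv8 detection head with strides 8, 16, and 32, the last output dimension is the total number of prediction cells across the three feature maps, which is fully determined by the input size:

```python
def yolov8_num_predictions(imgsz, strides=(8, 16, 32)):
    """Total prediction cells for a square input of side imgsz,
    summed over the three YOLOv8 head feature maps."""
    return sum((imgsz // s) ** 2 for s in strides)

print(yolov8_num_predictions(640))  # 8400 -> output [1, 84, 8400]
print(yolov8_num_predictions(224))  # 1029 -> output [1, 84, 1029]
```

The example model's output of [1, 84, 1029] can only come from imgsz=224, so the bundled example was simply exported at 224 rather than the usual 640 (84 = 4 box coordinates + 80 COCO classes in both cases).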

I believe you can change the input size while converting your model from .pt to .tflite: model.export(format='tflite', imgsz=224)

habbas11 avatar Aug 22 '23 08:08 habbas11

> I checked the model from example and it has the following parameters […] All yolo models has image size 640px. How could it be 224?

Have you managed to fix the input size? I'm facing the same problem right now. When exporting my yolov8n-seg to TFLite I specified imgsz=640, but running the model on a frame throws the same error. How did you solve this? Thank you in advance for your reply.

guiiyanxD avatar Nov 25 '23 11:11 guiiyanxD

@guiiyanxD and @AlexanderKozhevin, how were you able to resolve the input tensor shape? The model here expects float32[1,640,640,3], while mine is float32[1,3,640,640]. I don't know why that is.

pwilliamspeniel avatar May 05 '24 21:05 pwilliamspeniel
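The [1,3,640,640] vs [1,640,640,3] difference is a channel-layout mismatch rather than a size problem: PyTorch models are channels-first (NCHW), while TensorFlow/TFLite, and the plugin's example model, are channels-last (NHWC). The usual fix is to re-export with a current Ultralytics version, which emits an NHWC TFLite graph; failing that, the input must be transposed before feeding. As a layout illustration only (a pure-Python sketch, not production preprocessing code):

```python
def nchw_to_nhwc(t):
    """Transpose a nested-list tensor from [N][C][H][W] to [N][H][W][C]."""
    n_len, c_len = len(t), len(t[0])
    h_len, w_len = len(t[0][0]), len(t[0][0][0])
    return [[[[t[n][c][h][w] for c in range(c_len)]  # channels move to last axis
              for w in range(w_len)]
             for h in range(h_len)]
            for n in range(n_len)]

# Tiny example: shape [1, 2, 2, 3] (N, C, H, W) -> [1, 2, 3, 2] (N, H, W, C)
t = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
out = nchw_to_nhwc(t)
```

In a real app this transpose belongs in the export step, not at inference time: a per-frame layout shuffle on the Dart side is far slower than fixing the graph once.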