flutter-tflite
Bad state: failed precondition
E/flutter (28220): #2 Interpreter.runInference interpreter.dart:204
E/flutter (28220): #3 Interpreter.runForMultipleInputs interpreter.dart:180
E/flutter (28220): #4 Interpreter.run interpreter.dart:172
E/flutter (28220): #5 _DetectorServer._runInference detector_service.dart:406
E/flutter (28220): #6 _DetectorServer.analyseImage detector_service.dart:303
E/flutter (28220): #7 _DetectorServer._convertCameraImage. <anonymous closure> detector_service.dart:261
E/flutter (28220): <asynchronous suspension>
I/GRALLOC (28220): LockFlexLayout: baseFormat: 11, yStride: 320, ySize: 76800, uOffset: 76800, uStride: 320
V/AudioManager(28220): playSoundEffect effectType: 0
V/AudioManager(28220): querySoundEffectsEnabled...
I/GRALLOC (28220): LockFlexLayout: baseFormat: 11, yStride: 320, ySize: 76800, uOffset: 76800, uStride: 320
I/Camera (28220): dispose
I/Camera (28220): close
I used MediaPipe's Object Detection Model Customization to generate a .tflite file. The object detection model I trained has only one class, for detecting a ball, and its output format is:
DetectionResult(detections=[
Detection(
bounding_box=BoundingBox(origin_x=79, origin_y=28, width=121, height=115),
categories=[Category(
index=None,
score=0.9868238568305969,
display_name=None,
category_name='Ball')], keypoints=[])])
How should I modify the format of the output to solve this issue?
/// Object detection main function
List<List<Object>> _runInference(
  List<List<List<num>>> imageMatrix,
) {
  dev.log('Running inference...');

  // Set input tensor [1, 300, 300, 3]
  final input = [imageMatrix];

  // Set output tensors:
  //   Locations:            [1, 10, 4]
  //   Classes:              [1, 10]
  //   Scores:               [1, 10]
  //   Number of detections: [1]
  final output = {
    0: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
    1: [List<num>.filled(10, 0)],
    2: [List<num>.filled(10, 0)],
    3: [0.0],
  };

  _interpreter!.runForMultipleInputs([input], output);
  return output.values.toList();
}
Input and output shapes:
print('>>>input shape: ${_interpreter!.getInputTensor(0).shape}, '
    'type: ${_interpreter!.getInputTensor(0).type}');
I/flutter (30633): >>>input shape: [1, 256, 256, 3], type: float32
print('>>>output shape: ${_interpreter!.getOutputTensor(0).shape}, '
    'type: ${_interpreter!.getOutputTensor(0).type}');
I/flutter (30633): >>>output shape: [1, 12276, 4], type: float32
I am not completely sure, but your expected input shape is [1, 256, 256, 3], whereas according to this one comment your input shape is [1, 300, 300, 3]. The same goes for the output shape.
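One way to catch this early is to compare the matrix you build against the shape the interpreter reports, before invoking it. A rough sketch (the helper name is my own invention):

import 'package:tflite_flutter/tflite_flutter.dart';

/// Hypothetical guard: compare the input matrix you built against the
/// shape the model actually reports, before calling run().
void checkInputShape(Interpreter interpreter, List<List<List<num>>> imageMatrix) {
  final expected = interpreter.getInputTensor(0).shape; // e.g. [1, 256, 256, 3]
  final actual = [
    1, // batch dimension added by wrapping the matrix in a list
    imageMatrix.length,
    imageMatrix[0].length,
    imageMatrix[0][0].length,
  ];
  if ('$expected' != '$actual') {
    throw StateError('Input shape mismatch: model expects $expected, got $actual');
  }
}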
Thank you, I have already noticed this. I have also adjusted the 'input' and 'output' accordingly, but I'm still encountering the same error.
List<List<Object>> _runInference(
  List<List<List<num>>> imageMatrix,
) {
  dev.log('Running inference...');

  // Set input tensor [1, 256, 256, 3]
  final input = [imageMatrix];

  final outputs = [List<List<num>>.filled(12276, List<num>.filled(4, 0))];

  _interpreter!.run([input], outputs);
  return outputs;
}
Can you print out _interpreter!.getInputTensors() and _interpreter!.getOutputTensors()? Maybe your model takes multiple input or output tensors.
I/flutter (13437): =====================inputTensors=====================
I/flutter (13437): Tensor{_tensor: Pointer: address=0x7defb83b80, name: serving_default_inputs:0, type: float32, shape: [1, 256, 256, 3], data: 786432}
[log] Running inference...
I/flutter (13437): =====================outputTensors=====================
I/flutter (13437): Tensor{_tensor: Pointer: address=0x7defb95bd0, name: StatefulPartitionedCall:0, type: float32, shape: [1, 12276, 4], data: 196416}
I/flutter (13437): Tensor{_tensor: Pointer: address=0x7defb95af0, name: StatefulPartitionedCall:1, type: float32, shape: [1, 12276, 2], data: 98208}
List<List<Object>> _runInference(
  List<List<List<num>>> imageMatrix,
) {
  dev.log('Running inference...');

  final input = [imageMatrix];

  final output = {
    0: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
    1: [List<num>.filled(10, 0)],
    2: [List<num>.filled(10, 0)],
    3: [0.0],
  };

  print('=====================inputTensors=====================');
  for (var input in _interpreter!.getInputTensors()) {
    print('$input');
  }
  print('=====================outputTensors=====================');
  for (var output in _interpreter!.getOutputTensors()) {
    print('$output');
  }

  _interpreter!.runForMultipleInputs([input], output);
  return output.values.toList();
}
There you can see that you have two different output tensors. Your desired input shape is [1, 256, 256, 3], and your desired output shapes are [1, 12276, 4] and [1, 12276, 2].
This should do the trick:
final output = {
  0: [List<List<num>>.filled(12276, List<num>.filled(4, 0))],
  1: [List<List<num>>.filled(12276, List<num>.filled(2, 0))],
};
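If you do not want to hard-code the shapes again the next time the model is retrained, you could also size the buffers from whatever the interpreter reports. A rough sketch of that idea (untested against your model; it assumes every output tensor is rank-3 like [1, N, K], and I loosened the return type to List<Object>):

List<Object> _runInference(List<List<List<num>>> imageMatrix) {
  final input = [imageMatrix]; // [1, 256, 256, 3]

  // Build one zero-filled buffer per output tensor, sized from the
  // shape the interpreter reports instead of hard-coded numbers.
  final output = <int, Object>{};
  final outputTensors = _interpreter!.getOutputTensors();
  for (var i = 0; i < outputTensors.length; i++) {
    final shape = outputTensors[i].shape; // [1, 12276, 4] and [1, 12276, 2] here
    output[i] = [
      List.generate(shape[1], (_) => List<num>.filled(shape[2], 0)),
    ];
  }

  _interpreter!.runForMultipleInputs([input], output);
  return output.values.toList();
}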
I'm facing the same issue.
class Classifier {
  Interpreter _interpreter;
  List<List<int>> _outputShapes;

  static const String MODEL_FILE_NAME = "assets/model.tflite";
  static const int INPUT_SIZE = 150; // Updated to match input shape
  static const double THRESHOLD = 0.5;

  ImageProcessor imageProcessor;
  int padSize;
  List<String> _labels;

  Classifier({Interpreter interpreter, List<String> labels}) {
    loadModel(interpreter: interpreter);
  }

  void loadModel({Interpreter interpreter}) async {
    try {
      _interpreter = interpreter ??
          await Interpreter.fromAsset(
            MODEL_FILE_NAME,
            options: InterpreterOptions()..threads = 4,
          );

      var outputTensors = _interpreter.getOutputTensors();
      _outputShapes = outputTensors.map((tensor) => tensor.shape).toList();

      var inputTensors = _interpreter.getInputTensors();
      for (var inputTensor in inputTensors) {
        debugPrint('Input Tensor Shape: ${inputTensor.shape}');
      }
      for (var outputTensor in outputTensors) {
        debugPrint('Output Tensor Shape: ${outputTensor.shape}');
      }
      debugPrint('OUTPUT SHAPES:: $_outputShapes');
    } catch (e) {
      print("Error while creating interpreter: $e");
    }
  }

  TensorImage getProcessedImage(TensorImage inputImage) {
    padSize = max(inputImage.height, inputImage.width);
    if (imageProcessor == null) {
      imageProcessor = ImageProcessorBuilder()
          .add(ResizeWithCropOrPadOp(padSize, padSize))
          .add(ResizeOp(INPUT_SIZE, INPUT_SIZE, ResizeMethod.bilinear))
          .build();
    }
    inputImage = imageProcessor.process(inputImage);
    return inputImage;
  }

  Map<String, dynamic> predict(imageLib.Image image) {
    var predictStartTime = DateTime.now().millisecondsSinceEpoch;

    if (_interpreter == null) {
      debugPrint("Interpreter not initialized");
      return null;
    }

    var preProcessStart = DateTime.now().millisecondsSinceEpoch;

    // Create TensorImage from image
    TensorImage inputImage = TensorImage();
    inputImage.loadImage(image);

    // Pre-process TensorImage
    inputImage = getProcessedImage(inputImage);

    var preProcessElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    var outputShape = _outputShapes.first;

    // TensorBuffer for output tensor
    TensorBuffer outputBuffer = TensorBufferFloat(outputShape);

    // Run inference
    _interpreter.run(inputImage.buffer, {0: outputBuffer.buffer});

    var inferenceTimeElapsed =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    List<double> outputData = outputBuffer.getDoubleList();

    var predictElapsedTime =
        DateTime.now().millisecondsSinceEpoch - predictStartTime;

    print('Output Data: $outputData');

    return {
      "recognitions": outputData,
      "stats": Stats(
        totalPredictTime: predictElapsedTime,
        inferenceTime: inferenceTimeElapsed,
        preProcessingTime: preProcessElapsedTime,
      ),
    };
  }

  Interpreter get interpreter => _interpreter;
  List<String> get labels => _labels;
}

Input Tensors
Tensor{_tensor: Pointer: address=0x6f1bef4000, name: serving_default_input_1:0, type: float32, shape: [1, 1, 1, 3], data: 12}
Output Tensors
Tensor{_tensor: Pointer: address=0x6f1bef5340, name: StatefulPartitionedCall:0, type: float32, shape: [1, 0, 0, 512], data: 0}
@shoaibakhtar57 try to just pass outputBuffer.buffer instead of {0: outputBuffer.buffer}.
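i.e. something like this (a sketch using your variable names):

// With a single output tensor, pass the ByteBuffer directly...
_interpreter.run(inputImage.buffer, outputBuffer.buffer);
// ...instead of wrapping it in a map keyed by tensor index:
// _interpreter.run(inputImage.buffer, {0: outputBuffer.buffer});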
@gregorscholz I tried _interpreter.run(inputImage.buffer, outputBuffer.buffer); but I'm still getting the same error:
[ERROR:flutter/runtime/dart_isolate.cc(1098)] Unhandled exception: E/flutter (30892): Bad state: failed precondition
Can you maybe just print out _interpreter.getInputTensors() and _interpreter.getOutputTensors()?
@gregorscholz
I/flutter (28993): =====================inputTensors=====================
I/flutter (28993): Input Tensor Shape: Tensor{_tensor: Pointer: address=0x6f9e1f1800, name: serving_default_input_1:0, type: float32, shape: [1, 1, 1, 3], data: 12}
I/flutter (28993): =====================outputTensors=====================
I/flutter (28993): Output Tensor Shape: Tensor{_tensor: Pointer: address=0x6f9e1f2b40, name: StatefulPartitionedCall:0, type: float32, shape: [1, 0, 0, 512], data: 0}
Did you check if your input shape and output shape are correct? I have not used an ImageProcessor before, so I don't know how that works.
Yes, I've checked it again and it is correct.
I think there could also be a problem with your trained model; the input shape is a bit strange. Maybe check your model in the tool linked in this thread.
Hi, my input and output are correct, but I don't know why I have the error I/flutter (19531): Error Bad state: failed precondition. How can I get more information about the error, and not only "failed precondition"? Also, how can I contact you? I think you can solve this issue.
Print out interpreter.getInputTensors() and interpreter.getOutputTensors(). Also, can you maybe show me your input and output?
Sorry for the late reply. These are exactly the same input and output shapes as I get in Python from my trained model, and with the same shapes it works perfectly fine in Python.
Can you maybe provide the model to download somewhere? Then I could test it myself; otherwise I don't know how to help you.
Yes, sure. You can download my model from here: https://easyupload.io/5rhthx
Thank you.
Ok, so I did some debugging and can show you my results.
I loaded the model like this, using the interpreter options:
Future<void> _loadModel() async {
  log('Loading interpreter options...');
  final interpreterOptions = InterpreterOptions();

  // Use XNNPACK Delegate
  if (Platform.isAndroid) {
    interpreterOptions.addDelegate(XNNPackDelegate());
  }

  // Use Metal Delegate
  if (Platform.isIOS) {
    interpreterOptions.addDelegate(GpuDelegate());
  }

  log('Loading interpreter...');
  _interpreter =
      await Interpreter.fromAsset(_modelPath, options: interpreterOptions);
}
and I get the same error as you described.
When I comment out the part with the interpreter options and initialize the interpreter without them, I get the error Tensor data is null, which only gets thrown here. So I think the problem is still with the model and its output shape of [1, 0, 0, 512], but I am still not 100% sure. Sorry, I don't really know how to help you.
Thank you so much for your time. If there is an issue with the model, then why am I able to run it and get results perfectly fine in Python? You can see the results with the same model in Python here.
I don't know. Here, data gets set to null because bytes has size 0, I think.
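That would match the reported shape [1, 0, 0, 512]: the element count, and therefore the byte size, multiplies out to zero. A quick check of the arithmetic:

// Output shape as reported above: [1, 0, 0, 512].
final shape = [1, 0, 0, 512];
final elements = shape.reduce((a, b) => a * b); // 1 * 0 * 0 * 512 == 0
print('$elements elements -> ${elements * 4} bytes for float32'); // 0 bytes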
/// Returns a pointer to the underlying data buffer.
///
/// NOTE: The result may be null if tensors have not yet been allocated, e.g.,
/// if the Tensor has just been created or resized and `TfLiteAllocateTensors()`
/// has yet to be called, or if the output tensor is dynamically sized and the
/// interpreter hasn't been invoked.
ffi.Pointer<ffi.Void> TfLiteTensorData(
ffi.Pointer<TfLiteTensor> tensor,
) {
return _TfLiteTensorData(
tensor,
);
}
Here in the docs of the called function, which returns null, it says the tensor might not be allocated yet. I tried that before as well; it also did not work...
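For reference, forcing allocation would look roughly like this (using your variable names; as said, it did not resolve this case):

// Explicitly allocate tensor buffers before the first invocation.
_interpreter.allocateTensors();
_interpreter.run(inputImage.buffer, outputBuffer.buffer);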
I am sorry, I don't know how I can help you.
@gregorscholz Thank you so much brother. I appreciate it.
@shoaibakhtar57 @CusterFun: Did you resolve this issue? I have the same issue here, and I have no ideas how to fix it. Thank you.
Not yet. I'm still searching for a solution.
I got this problem when our model input changed from [33, 2] to [33, 5]. I was able to resolve it by fixing the input shape.
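More generally, deriving the input buffer from the shape the interpreter reports avoids this class of error when a model is retrained. A minimal sketch, assuming the tflite_flutter API and an already-loaded interpreter (variable names are illustrative):

// Build the input to match whatever the model reports, e.g. [33, 5].
final shape = interpreter.getInputTensor(0).shape;
final input = List.generate(
  shape[0],
  (_) => List<double>.filled(shape[1], 0.0),
);

// Alternatively, for models that support dynamic shapes, resize the
// tensor explicitly and re-allocate before running:
// interpreter.resizeInputTensor(0, [33, 5]);
// interpreter.allocateTensors();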