tflite_flutter_plugin
runForMultipleInputs always returns Bad state: failed precondition
Hello,
I'm trying to get predictions with a converted custom YOLOv4 model, and I always get Bad state: failed precondition when running runForMultipleInputs on my inputs and outputs. I have already read some older issues, but none of them helped me fix this.
My code:
Map<String, dynamic> predict(imageLib.Image image) {
  var predictStartTime = DateTime.now().millisecondsSinceEpoch;

  if (_interpreter == null) {
    return null;
  }

  var preProcessStart = DateTime.now().millisecondsSinceEpoch;

  // Initializing TensorImage as the needed model input type
  // of TfLiteType.float32. Then, creating TensorImage from image
  TensorImage inputImage = TensorImage(TfLiteType.float32);
  inputImage.loadImage(image);
  TensorImage original = TensorImage(TfLiteType.float32);
  original.loadImage(image);

  // Do not use the static methods, fromImage(Image) or fromFile(File),
  // of TensorImage unless the desired input TfLiteDataType is Uint8.
  // Create TensorImage from image
  //TensorImage inputImage = TensorImage.fromImage(image);

  // Pre-process TensorImage
  inputImage = getProcessedImage(inputImage);
  //getProcessedImage(inputImage);

  var preProcessElapsedTime =
      DateTime.now().millisecondsSinceEpoch - preProcessStart;

  // TensorBuffers for output tensors
  TensorBuffer outputLocations = TensorBufferFloat(
      _outputShapes[0]); // The location of each detected object
  List<List<List<double>>> outputClassScores = List.generate(
      _outputShapes[1][0],
      (_) => List.generate(_outputShapes[1][1],
          (_) => List.filled(_outputShapes[1][2], 0.0),
          growable: false),
      growable: false);

  // Inputs object for runForMultipleInputs
  // Use [TensorImage.buffer] or [TensorBuffer.buffer] to pass by reference
  List<Object> inputs = [inputImage.buffer];

  // Outputs map
  Map<int, Object> outputs = {
    0: outputLocations.buffer,
    1: outputClassScores,
  };

  var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;

  // print(inputs[0].runtimeType);
  // print(inputs[0].toString());
  print(_interpreter.getInputTensors());
  print(_interpreter.getOutputTensors());

  try {
    // run inference
    _interpreter.runForMultipleInputs(inputs, outputs);
  } catch (e) {
    print(">>>>>>>>>>>> ERROR: $e");
    List<Recognition> recognitionsNMS = [];
    var predictElapsedTime =
        DateTime.now().millisecondsSinceEpoch - predictStartTime;
    var inferenceTimeElapsed =
        DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;
    return {
      "recognitions": recognitionsNMS,
      "stats": Stats(
          totalPredictTime: predictElapsedTime,
          inferenceTime: inferenceTimeElapsed,
          preProcessingTime: preProcessElapsedTime)
    };
  }
It fails in the setTo method in tensor.dart, called on the input tensors:

  checkState(tfLiteTensorCopyFromBuffer(_tensor, ptr.cast<Void>(), size) ==
      TfLiteStatus.ok);
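Presumably that means the byte count of the buffer being passed differs from the tensor's byte size. A quick sanity check (a sketch; getInputTensor, Tensor.shape and Tensor.type are the plugin's public API, and [1, 416, 416, 3] is just an assumed YOLOv4 input shape):

  // Compare the model's expected input byte size with what we actually pass.
  // For a float32 input of shape [1, 416, 416, 3] the model expects
  // 1 * 416 * 416 * 3 * 4 = 2,076,672 bytes; a uint8 TensorImage would
  // only supply a quarter of that.
  final inputTensor = _interpreter.getInputTensor(0);
  final expectedBytes = inputTensor.shape.reduce((a, b) => a * b) * 4;
  print('model expects ${inputTensor.shape} (${inputTensor.type}), $expectedBytes bytes');
  print('we are passing ${inputImage.buffer.lengthInBytes} bytes');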
The original code is at https://github.com/TexMexMax/object_detection_flutter, which I changed to get it working with a recent Flutter version and this plugin. This also meant using ffi for memory allocation, but I don't see how that could impact this.
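Concretely, that ffi change was the package:ffi 1.0 migration, where the old top-level allocate/free helpers were replaced by explicit allocators. A minimal before/after sketch (byteCount is a placeholder; the actual code may differ):

  // before (package:ffi < 1.0.0)
  final ptr = allocate<Uint8>(count: byteCount);
  free(ptr);

  // after (package:ffi >= 1.0.0, requires: import 'package:ffi/ffi.dart';)
  final ptr = calloc<Uint8>(byteCount);
  calloc.free(ptr);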
Do you have any clue as to what may be wrong?
Kind regards, Richard
Hey @rj76! Check my comment on this issue: https://github.com/am15h/tflite_flutter_plugin/issues/133#issuecomment-1118419145
In short:
- Make sure you have the same preprocessing as in your Python code (see the sketch after this list).
- The Void pointer really should be NativeType from dart:ffi to circumvent runtime errors.
- Use tfLiteTensorByteSize(_tensor) to check the size of your tensor and whether it is a multiple of size, to further debug your issue.
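A minimal sketch of the first point, assuming a Python pipeline that scales pixels to [0, 1] (ImageProcessorBuilder and NormalizeOp are from tflite_flutter_helper; the (0, 255) mean/std values are an assumption, so match whatever your training code used):

  // Mirror the Python preprocessing: resize, then scale pixels to [0, 1].
  // NormalizeOp(mean, stddev) computes (pixel - mean) / stddev per channel.
  final imageProcessor = ImageProcessorBuilder()
      .add(ResizeOp(inputSize, inputSize, ResizeMethod.BILINEAR))
      .add(NormalizeOp(0, 255)) // assumed: pixel / 255.0
      .build();
  inputImage = imageProcessor.process(inputImage);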
I am facing a similar issue. I am mostly using the code from the object detection blog post, with my own old tflite model that I trained two years back. The problem is, I used another tflite plugin for Flutter before, and now I don't know where to begin with this one. After finally getting as far as running the model, I get this error. I trained my model on the SSD MobileNet model.
I have also tried your grayscale fix, but I still get this issue.
Hope we can figure out something soon!
Taking a photo
  XFile imageFile = await _controller.takePicture();
  img.Image? capturedImage = img.decodeImage(await imageFile.readAsBytes());
  img.Image orientedImage = img.bakeOrientation(capturedImage!);
  await File(imageFile.path).writeAsBytes(img.encodeJpg(orientedImage));
  return imageFile.path;
Running object detection
_findObjectOnImage(String imagePath) async {
  final interpreter = await Interpreter.fromAsset('tflite/model.tflite');

  // * Read labels from assets/tflite/labels.txt
  // (loadString decodes the asset as text; calling toString() on the
  // ByteData returned by load() would not give the file contents)
  final labels = await rootBundle
      .loadString('assets/tflite/labels.txt')
      .then((value) => value.split('\n'));

  Classifier classifier =
      Classifier(interpreter: interpreter, labels: labels);

  img.Image? image = img.decodeImage(File(imagePath).readAsBytesSync());
  final output = classifier.predict(image!);

  if (output != null && output.isNotEmpty) {
    return output;
  }
}
Classifier
import 'dart:math';

import 'package:image/image.dart' as img;
import 'package:flutter/rendering.dart';
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

import 'recognition.dart';
import 'stats.dart';

class Classifier {
  /// Input size of image (height = width = 320)
  static const int inputSize = 320;
  static const int numResults = 1;
  static const double resultThreshold = 0.9;

  /// Instance of Interpreter
  late Interpreter _interpreter;

  /// Labels file loaded as List
  late List<String> _labels;

  /// Shapes of output tensors
  late List<List<int>> _outputShapes;

  /// Types of output tensors
  late List<TfLiteType> _outputTypes;

  /// [ImageProcessor] used to pre-process the image
  late ImageProcessor imageProcessor;

  /// Padding the image to transform into square
  late int padSize;

  Classifier({
    required Interpreter interpreter,
    required List<String> labels,
  }) {
    loadModel(interpreter: interpreter);
    loadLabels(labels: labels);
  }

  /// Loads interpreter from asset
  void loadModel({required Interpreter interpreter}) async {
    try {
      _interpreter = interpreter;
      var outputTensors = _interpreter.getOutputTensors();
      _outputShapes = [];
      _outputTypes = [];
      for (var tensor in outputTensors) {
        _outputShapes.add(tensor.shape);
        _outputTypes.add(tensor.type);
      }
    } catch (e) {
      debugPrint("Error while creating interpreter: $e");
    }
  }

  /// Loads labels from assets
  void loadLabels({required List<String> labels}) async {
    try {
      _labels = labels;
    } catch (e) {
      debugPrint("Error while loading labels: $e");
    }
  }

  TensorImage getProcessedImage(TensorImage inputImage) {
    padSize = max(inputImage.height, inputImage.width);

    // create ImageProcessor
    imageProcessor = ImageProcessorBuilder()
        // Padding the image
        .add(ResizeWithCropOrPadOp(padSize, padSize))
        // Resizing to input size
        .add(ResizeOp(inputSize, inputSize, ResizeMethod.BILINEAR))
        // Gray scale
        .add(TransformToGrayscaleOp())
        .build();

    inputImage = imageProcessor.process(inputImage);
    return inputImage;
  }

  /// Gets the interpreter instance
  Interpreter get interpreter => _interpreter;

  /// Gets the loaded labels
  List<String> get labels => _labels;

  /// Runs object detection on the input image
  Map<String, dynamic>? predict(img.Image image) {
    var predictStartTime = DateTime.now().millisecondsSinceEpoch;
    var preProcessStart = DateTime.now().millisecondsSinceEpoch;

    // Create TensorImage from image
    TensorImage inputImage = TensorImage.fromImage(image);

    // Pre-process TensorImage
    inputImage = getProcessedImage(inputImage);

    var preProcessElapsedTime =
        DateTime.now().millisecondsSinceEpoch - preProcessStart;

    // TensorBuffers for output tensors
    TensorBuffer outputLocations = TensorBufferFloat(_outputShapes[0]);
    TensorBuffer outputClasses = TensorBufferFloat(_outputShapes[1]);
    TensorBuffer outputScores = TensorBufferFloat(_outputShapes[2]);
    TensorBuffer numLocations = TensorBufferFloat(_outputShapes[3]);

    // Inputs object for runForMultipleInputs
    // Use [TensorImage.buffer] or [TensorBuffer.buffer] to pass by reference
    List<Object> inputs = [inputImage.buffer];

    // Outputs map
    Map<int, Object> outputs = {
      0: outputLocations.buffer,
      1: outputClasses.buffer,
      2: outputScores.buffer,
      3: numLocations.buffer,
    };

    var inferenceTimeStart = DateTime.now().millisecondsSinceEpoch;

    // run inference
    _interpreter.runForMultipleInputs(inputs, outputs);
    //_interpreter.run(inputs, outputs);

    var inferenceTimeElapsed =
        DateTime.now().millisecondsSinceEpoch - inferenceTimeStart;

    // Maximum number of results to show
    int resultsCount = min(numResults, numLocations.getIntValue(0));

    // Using labelOffset = 1 because the label at index 0 is the "???"
    // background placeholder
    int labelOffset = 1;

    // Using bounding box utils for easy conversion of tensorbuffer to List<Rect>
    List<Rect> locations = BoundingBoxUtils.convert(
      tensor: outputLocations,
      valueIndex: [1, 0, 3, 2],
      boundingBoxAxis: 2,
      boundingBoxType: BoundingBoxType.BOUNDARIES,
      coordinateType: CoordinateType.RATIO,
      height: inputSize,
      width: inputSize,
    );

    List<Recognition> recognitions = [];

    for (int i = 0; i < resultsCount; i++) {
      // Prediction score
      var score = outputScores.getDoubleValue(i);

      // Label string
      var labelIndex = outputClasses.getIntValue(i) + labelOffset;
      var label = _labels.elementAt(labelIndex);

      if (score > resultThreshold) {
        // [locations] corresponds to the model's inputSize x inputSize
        // coordinate space; inverseTransformRect maps it back onto the
        // original [image]
        Rect transformedRect = imageProcessor.inverseTransformRect(
            locations[i], image.height, image.width);

        recognitions.add(
          Recognition(i, label, score, transformedRect),
        );
      }
    }

    var predictElapsedTime =
        DateTime.now().millisecondsSinceEpoch - predictStartTime;

    return {
      "recognitions": recognitions,
      "stats": Stats(
          totalPredictTime: predictElapsedTime,
          inferenceTime: inferenceTimeElapsed,
          preProcessingTime: preProcessElapsedTime)
    };
  }
}
@zbejas Let me see if I can provide a PR. I'm still waiting for the maintainer to chime in...
@tahesse I have already added the export line, but adding grayscale didn't seem to help.
You can check whether I have added it correctly to the classifier, though.
Other than that, the few build issues in Flutter 3.0 were easy to fix for the time being.
@zbejas Read my whole comment, particularly EDIT2. There are actually ffi-related errors in the source code which I had to patch; not in your source code, but in the lib's source code.
When I create a PR you'll be able to install from that branch instead.
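(Until then, a fork can be pulled in directly through a git dependency in pubspec.yaml; the URL and ref below are placeholders:)

  dependencies:
    tflite_flutter_helper:
      git:
        url: https://github.com/<your-fork>/tflite_flutter_helper.git
        ref: <branch-with-the-ffi-fix>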
I fixed this by downgrading tflite_flutter_helper: 0.3.1 to tflite_flutter_helper: 0.2.1 and this error went away for me...
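(For reference, in pubspec.yaml that is just a version pin; the version numbers are the ones from the comment above:)

  dependencies:
    tflite_flutter_helper: 0.2.1 # downgraded from 0.3.1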
If I did that, I would have to change a bunch of other package versions (image, flutter_launcher_icons) and some things in my code. Did anyone else resolve the issue this way?
@SchulzKilian all I can say is check my comment https://github.com/am15h/tflite_flutter_plugin/issues/133#issuecomment-1118419145 and create your own fork of this repo. It seems to be unmaintained, so it doesn't really matter at this point.