tflite_flutter_helper
[FEATURE?] Obtaining Image from TensorImage
First of all, thanks for the great work - this library seems really promising.
I have been wondering, would it be possible to somehow obtain the image from a TensorImage used for recognition? E.g. for storing it, or for showing it alongside the results after recognition completes. When I call TensorImage.image, I hit
StateError(
"TensorImage is holding a float-value image which is not able to convert a Image.");
Also, the ImageConversion class and its methods cannot handle converting a float32 buffer back to an Image.
In general, is there currently any way to obtain the processed image from a TensorImage back as an Image, or do you plan to add such a feature?
As a temporary workaround (for your particular use case only), duplicate this function https://github.com/am15h/tflite_flutter_helper/blob/master/lib/src/image/image_conversions.dart#L10-L46 and remove the type check
if (buffer.getDataType() != TfLiteType.uint8) {
  throw UnsupportedError(
    "Converting TensorBuffer of type ${buffer.getDataType()} to Image is not supported yet.",
  );
}
and things will work fine for you, as long as your float data is between 0.0 and 255.0.
I will push an update for float values after generalizing with corner-cases.
Thanks for the answer. I appreciate your effort to help me with this. I have tried the proposed approach.
If I got it right, I should do:
imageLib.Image _target = imageLib.Image(c.INPUT_SIZE, c.INPUT_SIZE);
_lastProcessedImage = ImageConversion.convertTensorBufferToImage(inputImage.getTensorBuffer(), _target);
It works fine and no exceptions are raised; however, I am getting just a black square when I display it with Image.memory(imageLib.encodeJpg(_lastProcessedImage)).
I have noticed that the r, g, and b values are always -1, 0, or 1 after List<int> rgbValues = buffer.getIntList(). The original buffer also contains values between -1.0 and 1.0, which is in line with what the model consumes. So it seems that getIntList just floors these values, which makes them useless as RGB values, right?
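The collapse described above is easy to see with a quick sketch of the arithmetic (written in Python purely for illustration; the helper itself is Dart):

```python
import math

# Flooring floats that live in [-1.0, 1.0] can only ever produce
# -1, 0, or 1 -- hence the effectively black image when these are
# interpreted directly as RGB channel bytes.
samples = [-1.0, -0.73, -0.5, 0.0, 0.42, 0.99, 1.0]
floored = [math.floor(x) for x in samples]
print(floored)  # [-1, -1, -1, 0, 0, 0, 1]
```

Every channel value lands in {-1, 0, 1}, all of which render as (near-)black.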
OK, I have realized that in order to make this work, I need to scale the values from the original float32 buffer to 0-255 using floor((x + 1) * 127.5). This works and is quite fast. If you want, I can extend this internal helper and prepare a utility method for handling float32 buffers.
class ImageConversion {
  static Image convertTensorBufferToImage(TensorBuffer buffer, Image image) {
    // if (buffer.getDataType() != TfLiteType.uint8) {
    //   throw UnsupportedError(
    //     "Converting TensorBuffer of type ${buffer.getDataType()} to Image is not supported yet.",
    //   );
    // }
    List<int> shape = buffer.getShape();
    TensorImage.checkImageTensorShape(shape);
    int h = shape[shape.length - 3];
    int w = shape[shape.length - 2];
    if (image.width != w || image.height != h) {
      throw ArgumentError(
        "Given image has different width or height ${[
          image.width,
          image.height
        ]} with the expected ones ${[w, h]}.",
      );
    }

    List<double> rgbValues = buffer.getDoubleList();
    assert(rgbValues.length == w * h * 3);

    // Scale each channel from [-1.0, 1.0] to [0, 255] before writing it.
    for (int j = 0, wi = 0, hi = 0; j < rgbValues.length;) {
      int r = ((rgbValues[j++] + 1) * 127.5).floor();
      int g = ((rgbValues[j++] + 1) * 127.5).floor();
      int b = ((rgbValues[j++] + 1) * 127.5).floor();
      image.setPixelRgba(wi, hi, r, g, b);
      wi++;
      if (wi % w == 0) {
        wi = 0;
        hi++;
      }
    }
    return image;
  }
}
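As a sanity check, the floor((x + 1) * 127.5) scaling used above maps the model's [-1.0, 1.0] range onto the full 0-255 byte range; the endpoints can be verified with a small sketch (in Python, just to demonstrate the arithmetic):

```python
import math

def to_byte(x: float) -> int:
    # Map a normalized channel value in [-1.0, 1.0] onto a 0-255 byte.
    return math.floor((x + 1) * 127.5)

print(to_byte(-1.0))  # 0   (black end of the range)
print(to_byte(0.0))   # 127 (mid grey)
print(to_byte(1.0))   # 255 (white end of the range)
```

Note the formula assumes the model really normalizes to [-1.0, 1.0]; a model that outputs [0.0, 1.0] would instead need a plain multiply by 255.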
Glad to hear that your problem got resolved. Please open a pull request if you want to contribute; that would be very helpful.