google_ml_kit_flutter
Real-time text recognition returns nothing on some devices
Hi,
I followed the text recognition example and implemented it in my Flutter project.
It works on many devices, such as the Samsung brand, but it returns empty data on many others, such as VIVO and XIAOMI.
I tried changing the resolution from ResolutionPreset.high to ResolutionPreset.medium, but it still doesn't work.
Please help me.
Thank you
Facing the same issue. @rahman77889, did you find a solution to this?
Finally, I solved this issue by combining 2 methods in one process:
The first method uses the default real-time scan until it gets a result; this works on compatible devices. The second method uses a timeout, such as 4 or 5 seconds, that triggers a photo capture and then calls the OCR with the image path.
The first method works for compatible devices; the second works for incompatible devices. Sometimes the result does not match, but that is more acceptable than no result at all.
I did this because this issue has still not been solved by @bharat-biradar.
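A minimal sketch of this fallback strategy, assuming a hypothetical `_scanFromStream` helper that completes when the real-time scan finds text (the helper name and the 5-second timeout are illustrative, not from the original code):

```dart
import 'dart:async';

// Sketch of the two-method fallback: try the real-time stream first;
// if it produces nothing within the timeout, fall back to capturing a
// still photo and running OCR on the saved image file.
Future<String?> recognizeWithFallback() async {
  try {
    // Method 1: real-time stream recognition, bounded by a timeout.
    // _scanFromStream is a hypothetical helper that resolves once the
    // stream-based recognizer returns non-empty text.
    return await _scanFromStream().timeout(const Duration(seconds: 5));
  } on TimeoutException {
    // Method 2: for incompatible devices, take a photo and run the
    // recognizer on the image path instead of the camera stream.
    final XFile photo = await cameraController.takePicture();
    final InputImage inputImage = InputImage.fromFilePath(photo.path);
    final RecognizedText result =
        await _textRecognizer.processImage(inputImage);
    return result.text.isEmpty ? null : result.text;
  }
}
```

The key design point is that the file-path code path (`InputImage.fromFilePath`) avoids the stream/byte-format handling that fails on some devices, at the cost of latency.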
@rahman77889: @bharat-biradar and I have been maintaining this plugin for fun; it started as a side project. We do not have an ETA for your issue. You are welcome to dive into the code and send a PR with your contribution, as this is an open source project. Your issue is particularly challenging for us because we do not have access to all the devices you are reporting issues with.

Also, consider that this plugin is just a wrapper around the native APIs that Google provides. We do no processing in the plugin: we wrap the input you pass, send it to the native Google API layer using FlutterMethodChannels, wait for the response, and send it back to the Flutter layer for your convenience.

When I see issues like the one you are reporting, this is what I would do if I were in your shoes:

1. Run the native example app provided by Google with the same input and confirm whether you get the same result on the device.
2. If you cannot detect text even with their sample app, then it is something in their native API; @bharat-biradar and I would not be able to fix that, so report the issue to Google.
3. If you can detect text with their sample app, then the issue is more likely in our plugin. Report back, clone our repo, start playing with our example app, debug, fix, and send a PR with your contribution.

Sorry we cannot help more.

Google's sample app: https://github.com/googlesamples/mlkit
I don't know if the problem is similar, but I'm having trouble reading text from the camera stream on real devices.
On the emulator everything works perfectly: I can get text from a photo or directly from the camera feed.
But on real devices I can only read text from photos; when trying to read straight from the camera feed, no text is detected.
I tested it on two different devices, a Moto G6 Plus with Android 9 and a Xiaomi 9T with Android 10.
I tested your example app and the same problem happens.
EDIT: I just tested with Google's sample app and it works perfectly, with remarkably good performance on real devices.
My code:
///
///
///
Future<void> startMonitoring() async {
  if (isStreaming.value) {
    await cameraController.stopImageStream();
    _isProcessing = false;
    isStreaming.value = false;
  } else {
    await cameraController.startImageStream(
      (CameraImage image) async {
        isStreaming.value = true;
        // Skip frames while a previous frame is still being processed.
        if (!_isProcessing) {
          _isProcessing = true; // set before the async gap so overlapping frames are dropped
          final InputImage inputImage = _getStreamInputImage(image);
          platesDetected.value = await _detectText(inputImage) ?? '';
        }
      },
    );
  }
}

///
///
///
InputImage _getStreamInputImage(CameraImage image) {
  // Concatenate the bytes of all image planes into a single buffer.
  final Uint8List bytes = Uint8List.fromList(
    image.planes.fold(
      <int>[],
      (List<int> previousValue, Plane element) =>
          previousValue..addAll(element.bytes),
    ),
  );
  final Size imageSize = Size(image.width.toDouble(), image.height.toDouble());
  final InputImageRotation imageRotation = InputImageRotationValue.fromRawValue(
          cameraController.description.sensorOrientation) ??
      InputImageRotation.rotation0deg;
  final InputImageFormat inputImageFormat =
      InputImageFormatValue.fromRawValue(image.format.raw) ??
          InputImageFormat.nv21;
  final List<InputImagePlaneMetadata> planeData = image.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: plane.height,
        width: plane.width,
      );
    },
  ).toList();
  final InputImageData inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );
  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}

///
///
///
Future<String?> _detectText(InputImage inputImage) async {
  _isProcessing = true;
  final RecognizedText recognizedText =
      await _textRecognizer.processImage(inputImage);
  textDetected.value = recognizedText.text;
  print('RECOGNIZED TEXT: ${recognizedText.text}');
  final StringBuffer plates = StringBuffer();
  // Collect every element whose normalized text matches the plate regex.
  for (final TextBlock block in recognizedText.blocks) {
    for (final TextLine line in block.lines) {
      for (final TextElement element in line.elements) {
        final String normalized = element.text
            .replaceAll('-', '')
            .replaceAll(':', '')
            .replaceAll(' ', '');
        if (regexPlaca.hasMatch(normalized)) {
          plates.writeln(element.text);
        }
      }
    }
  }
  _isProcessing = false;
  return plates.isNotEmpty ? plates.toString() : null;
}
I found that if you set the camera's ResolutionPreset to low, text recognition starts working perfectly on real devices; any value above low and it stops working.
cameraController = CameraController(
  _cameras.first,
  ResolutionPreset.low,
  enableAudio: false,
);
@Alvarocda: I think this issue has to do with the camera plugin, not with our plugin. Unfortunately, the only way to get the camera feed in Flutter is by using that plugin; if you know of another way, please share.
Also, refer to my comment in this issue: https://github.com/bharat-biradar/Google-Ml-Kit-plugin/issues/285#issuecomment-1192014260 , I shared more details there.
Yesterday I managed to make the lib recognize text with the camera at a higher resolution.
I noticed that when I use a quality above low, the GC runs non-stop and the app crashes seconds after starting recognition.
Searching on Google, I found that there is a flag you need to put in the manifest file to allow the app to use a larger amount of memory:
android:largeHeap="true"
After putting this in my app's manifest, the app stopped crashing, but it still did not detect text at higher resolution qualities, and the app's performance was very bad.
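For reference, `android:largeHeap` is a standard Android attribute that goes on the `<application>` element, in a Flutter project typically in `android/app/src/main/AndroidManifest.xml` (the label and icon values below are illustrative):

```xml
<application
    android:label="my_app"
    android:icon="@mipmap/ic_launcher"
    android:largeHeap="true">
    <!-- activities etc. -->
</application>
```

Note that largeHeap only raises the memory ceiling; it does not reduce the allocation pressure from streaming high-resolution frames.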
So I decided to change strategy: I created two CameraController objects, one to display the preview on screen for the user and another just to receive the images from the camera and process them.
By doing this, the app became more fluid and the performance improved a lot, but there was still no text recognition.
So I decided to leave the CameraController that displays the image to the user at low quality and set the CameraController used to process the images to high quality, and, without any explanation, the app started to recognize text.
Here is the code I am using and it works. https://github.com/Alvarocda/poc_flutter_ocr/blob/master/lib/controllers/home_controller.dart
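A minimal sketch of that dual-controller setup, based on the description above (the controller names and the `_processCameraImage` callback are illustrative placeholders; see the linked file for the actual code):

```dart
// Sketch of the dual-controller workaround: one low-resolution
// controller drives the on-screen preview, while a second
// high-resolution controller feeds the image stream used for OCR.
late CameraController previewController;  // shown to the user
late CameraController analysisController; // used only for recognition

Future<void> initCameras() async {
  final List<CameraDescription> cameras = await availableCameras();
  previewController = CameraController(
    cameras.first,
    ResolutionPreset.low, // low quality keeps the preview fluid
    enableAudio: false,
  );
  analysisController = CameraController(
    cameras.first,
    ResolutionPreset.high, // high quality for the recognizer
    enableAudio: false,
  );
  await previewController.initialize();
  await analysisController.initialize();
  // _processCameraImage is a hypothetical callback that converts the
  // CameraImage to an InputImage and runs the text recognizer.
  await analysisController.startImageStream(_processCameraImage);
}
```

The trade-off is that opening two controllers on the same camera may not be supported on every device, so this remains a workaround rather than a general fix.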
@Alvarocda: Thanks for digging into this. Could you implement those changes in our example app and send a PR with your contributions?
Hey, all of this is most probably related to this issue with Camera: https://github.com/bharat-biradar/Google-Ml-Kit-plugin/issues/285