google_ml_kit_flutter
Face detection not working on front camera (iOS)
I am using the example app, and when I try to take a picture with the front camera, the number of faces found is always 0. If I use the back camera, everything works.
Steps to reproduce the behavior:
- Go to 'Face detection'
- Click on the bottom left icon, then on 'Take a picture'
- Take a selfie with the front camera
- See error
Platform:
- OS: iOS 17.1.1
- Device: iPhone 12 Pro & 14
- Flutter version: 3.16.7
- Plugin version: 0.9.0
EDIT: it actually works sometimes. The front camera UI has a little button that toggles a slight zoom in/out. If I take the picture zoomed out, faces are detected; if the camera has that slight zoom in (which is the default setting), detection fails.
For face recognition, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. source: https://developers.google.com/ml-kit/vision/face-detection/android#input-image-guidelines
When the front camera is in zoom-in mode, the picture has a dimension of 2316×3088, and my face is NOT detected. When the camera is in zoom-out mode, the picture has a dimension of 3024×4032, and my face is detected.
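Given the guidelines quoted above, one way to fail fast is to check the captured image's dimensions before running the detector. A minimal sketch, assuming `package:image` is available (the 480x360 threshold comes from the ML Kit docs quoted above; the helper name is mine):

```dart
import 'dart:io';

import 'package:image/image.dart' as imglib;

/// Returns true if the image meets ML Kit's documented minimum input size
/// for face detection (at least 480x360 pixels).
Future<bool> meetsMlKitMinimumSize(String imagePath) async {
  final imglib.Image? decoded =
      imglib.decodeImage(await File(imagePath).readAsBytes());
  if (decoded == null) return false;
  // Accept either orientation (portrait or landscape).
  return (decoded.width >= 480 && decoded.height >= 360) ||
      (decoded.width >= 360 && decoded.height >= 480);
}
```

Note that both captures above (2316×3088 and 3024×4032) comfortably exceed these minimums, so the size guideline alone does not explain the failure.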
Have you found a solution for this besides zooming out every time? Zooming out makes the process get stuck. @giordy16
no
This issue is stale because it has been open for 30 days with no activity.
So I have narrowed down the issue to these points:
- Real-time face detection works on iOS when the InputImage has the bgra8888 image format group and the `InputImage.fromBytes` factory constructor is used (as in the example app).
- Face detection works fine with an image selected from the iOS Photos app, using the `InputImage.fromFilePath` factory constructor.
- Face detection does not work with an image captured on iOS via the camera plugin's `.takePicture` function, using the `InputImage.fromFilePath` factory constructor.
- Face detection works when the image captured on iOS via the camera plugin's `.takePicture` function is first saved to the Photos app, and the Photos app image path is then used with the `InputImage.fromFilePath` factory constructor.

Overall, this issue is related to the file path on iOS: the library may not be able to load the UIImage in Swift code when the image path comes from an image captured with the camera plugin, but it can load the UIImage when the path comes from the Photos app.
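One way to confirm that diagnosis is to run the detector twice on the same capture: once with the original path, and once after copying the bytes to a fresh file in the app documents directory. A hedged sketch, assuming the `google_mlkit_face_detection` and `path_provider` packages (the helper name is mine):

```dart
import 'dart:io';

import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
import 'package:path_provider/path_provider.dart';

/// Runs detection on the raw captured path and on a byte-for-byte copy
/// written by Dart, to isolate whether the file/path handling is at fault.
Future<void> compareDetection(String capturedPath) async {
  final detector = FaceDetector(options: FaceDetectorOptions());

  final original =
      await detector.processImage(InputImage.fromFilePath(capturedPath));
  print('Original path: ${original.length} faces');

  final dir = await getApplicationDocumentsDirectory();
  final copyPath =
      '${dir.path}/copy_${DateTime.now().millisecondsSinceEpoch}.jpg';
  await File(capturedPath).copy(copyPath);
  final copied =
      await detector.processImage(InputImage.fromFilePath(copyPath));
  print('Copied path: ${copied.length} faces');

  await detector.close();
}
```

If the copy is detected while the original is not, the problem is in how the native side resolves the camera plugin's path rather than in the image content itself.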
@fbernaly can you remove the stale label from this issue and look into this issue?
@TecHaxter : I have removed the stale label, but I do not have bandwidth to work on this. Feel free to fork the repo and submit your contribution. I will review your PR ASAP and release a new version ASAP.
try this

```dart
Future getImageAndDetectFaces(XFile imageFile) async {
  try {
    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }
    List<Face> faces = await processPickedFile(imageFile);
    if (faces.isEmpty) {
      return false;
    }
    double screenWidth = MediaQuery.of(context).size.width;
    double screenHeight = MediaQuery.of(context).size.height;
    final radius = screenWidth * 0.35;
    Rect rectOverlay = Rect.fromLTRB(
      screenWidth / 2 - radius,
      screenHeight / 3.5 - radius,
      screenWidth / 2 + radius,
      screenHeight / 2.5 + radius,
    );
    for (Face face in faces) {
      final Rect boundingBox = face.boundingBox;
      if (boundingBox.bottom < rectOverlay.top ||
          boundingBox.top > rectOverlay.bottom ||
          boundingBox.right < rectOverlay.left ||
          boundingBox.left > rectOverlay.right) {
        return false;
      }
    }
    return true;
  } catch (e) {
    return false;
  }
}

processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;
  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      return [];
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');
  List<Face> faces = await faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();
    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());
    if (capturedImage == null) {
      return null;
    }
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);
    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));
    return imageToBeProcessed;
  }
  return null;
}
```

Don't forget to import this:

```dart
import 'package:image/image.dart' as imglib;
```
Thanks. It's working for me.
Thank you, this does work for me
Thank you, this works on iOS @itzmail
Thanks @itzmail, this worked like magic and made face detection very smooth and stress-free.
@giordy16 This solution worked for me. I'll leave it here in case someone else has the same issue! Thanks a lot for raising this.
```dart
import 'dart:io';

import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image/image.dart' as imglib;
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

getImageAndDetectFaces() async {
  try {
    XFile? imageFile = await _imagePicker.pickImage(
      source: ImageSource.camera,
      preferredCameraDevice: CameraDevice.front,
    );
    if (imageFile == null) return;
    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }
    List<Face> faces = await processPickedFile(imageFile);
    if (faces.isEmpty) {
      throw EvomException('No face was detected in the image!');
    }
    return faces;
  } catch (e) {
    throw Exception('$e');
  }
}

processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;
  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      throw EvomException('Image not found!');
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print(
      'INPUT IMAGE PROCESSED: ${inputImage.filePath} - ${inputImage.imageType}');
  List<Face> faces = await _faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();
    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());
    if (capturedImage == null) {
      return null;
    }
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);
    final imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));
    return imageToBeProcessed;
  }
  return null;
}
```
The above solution has been incredibly helpful in resolving the zoomed-in Front Camera selfie issue on iOS devices and improving overall face detection accuracy. I've implemented this approach with some modifications to fit my specific use case, and I wanted to share how it benefited my project in case it helps others:
- Zoomed-out selfies: the `bakeImageOrientation` function effectively addressed the iOS front camera's default zoom-in issue, resulting in properly framed selfies.
- Improved face detection: by ensuring correct image orientation, face detection accuracy improved significantly for both selfies and uploaded images.
- Consistency in image processing: I applied the same image processing technique to both selfie capture and image uploads from the gallery. This consistency was crucial for our facial recognition feature, where we compare uploaded photos with the user's selfie to ensure the user uploads authentic pictures.
Here's a snippet of how I achieved this:

```dart
Future<XFile> _processIOSImage(XFile pickedFile) async {
  // ... [Same implementation as bakeImageOrientation] ...
}

// In selfie capture
if (Platform.isIOS && byCamera) {
  file = await _processIOSImage(file);
}

// In image upload from gallery
if (Platform.isIOS) {
  file = await _processIOSImage(file);
}
```
When the front camera is in zoom-in mode, the picture has dimensions of 2316×3088 and my face is not detected. When the camera is in zoom-out mode, the picture has dimensions of 3024×4032 and my face is detected.

When you set those dimensions in the ImagePicker, it will work.
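A minimal sketch of that approach, using `image_picker`'s `maxWidth`/`maxHeight` parameters to request the capture at explicit dimensions (the function name is mine, and the values are the zoomed-out dimensions reported earlier in this thread):

```dart
import 'package:image_picker/image_picker.dart';

final ImagePicker _picker = ImagePicker();

/// Takes a front-camera selfie, asking image_picker to resize the result.
/// Resizing forces a re-encode of the captured image, which may avoid the
/// detection failure seen with the default zoomed-in capture.
Future<XFile?> pickFrontSelfie() {
  return _picker.pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    maxWidth: 3024,
    maxHeight: 4032,
  );
}
```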
Hi Google, can you please incorporate this into the plugin? We are lucky that someone has shared the enhancement needed to make ML Kit face detection work with the front camera on iOS. https://github.com/flutter-ml/google_ml_kit_flutter/issues/570#issuecomment-2116534436