
Face detection not working on front camera (iOS)

Open giordy16 opened this issue 1 year ago • 17 comments


I am using the example app, and when I try to take a picture with the front camera, the number of faces found is always 0. If I use the back camera, everything works.

Steps to reproduce the behavior:

  1. Go to 'Face detection'
  2. Click on the bottom left icon, then on 'Take a picture'
  3. Take a selfie with the front camera
  4. See error

Platform:

  • OS: iOS
  • Device: iPhone 12 Pro & 14
  • OS: iOS 17.1.1
  • Flutter Version: 3.16.7
  • Plugin version: 0.9.0

giordy16 avatar Jan 16 '24 16:01 giordy16

EDIT: it actually works sometimes. The front camera UI has a small button to toggle a slight zoom in/out. If I take the picture zoomed out, faces are detected; if the camera is at the slight zoom-in (which is the default setting), it doesn't work.

giordy16 avatar Jan 16 '24 16:01 giordy16

For face recognition, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. source: https://developers.google.com/ml-kit/vision/face-detection/android#input-image-guidelines
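Based on those guidelines, one way to rule out image size as the cause is to check the captured file's dimensions before running detection. This is a minimal sketch, not plugin API; the helper name is hypothetical and it assumes `package:image` is available:

```dart
import 'dart:io';

import 'package:image/image.dart' as imglib;

/// Hypothetical helper: returns true if the image at [path] meets ML Kit's
/// suggested minimum of 480x360 pixels (in either orientation).
/// Note this cannot check the per-face 100x100 pixel guideline, which
/// depends on how large the face is within the frame.
Future<bool> meetsMlKitSizeGuidelines(String path) async {
  final imglib.Image? image =
      imglib.decodeImage(await File(path).readAsBytes());
  if (image == null) return false; // not a decodable image

  final longSide = image.width > image.height ? image.width : image.height;
  final shortSide = image.width > image.height ? image.height : image.width;
  return longSide >= 480 && shortSide >= 360;
}
```

Logging the result for both the zoomed-in and zoomed-out captures would show whether the failure correlates with image size at all, since both reported resolutions (2316×3088 and 3024×4032) are well above the minimum.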

fbernaly avatar Jan 16 '24 16:01 fbernaly

When the front camera is in zoom-in mode, the picture has a dimension of 2316×3088, and my face is NOT detected. When the camera is in zoom-out mode, the picture has a dimension of 3024×4032, and my face is detected.

giordy16 avatar Jan 17 '24 11:01 giordy16

Have you found a solution for this besides zooming out every time? Zooming out makes the process get stuck. @giordy16

henseljahja avatar Feb 15 '24 15:02 henseljahja

Have you found a solution for this besides zooming out every time? Zooming out makes the process get stuck. @giordy16

no

giordy16 avatar Feb 16 '24 09:02 giordy16

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] avatar Apr 18 '24 12:04 github-actions[bot]

So I have narrowed down the issue to these points:

  1. Real-time face detection works on iOS when the InputImage uses the bgra8888 image format group and the InputImage.fromBytes factory constructor (as in the example app)
  2. Face detection works fine with an image selected from the iOS Photos app, using the InputImage.fromFilePath factory constructor
  3. Face detection does not work with an image captured on iOS via the camera plugin's .takePicture function, using the InputImage.fromFilePath factory constructor
  4. Face detection works when the image captured on iOS via the camera plugin's .takePicture function is first saved to the iOS Photos app and the Photos app image path is passed to the InputImage.fromFilePath factory constructor

Overall, this issue is related to file paths on iOS: the library may be unable to load the UIImage in Swift when given the path of an image captured with the camera plugin, yet it can load the UIImage when given a path from the Photos app.

TecHaxter avatar Apr 29 '24 07:04 TecHaxter

@fbernaly can you remove the stale label from this issue and look into this issue?

TecHaxter avatar Apr 29 '24 07:04 TecHaxter

@TecHaxter : I have removed the stale label, but I do not have bandwidth to work on this. Feel free to fork the repo and submit your contribution. I will review your PR ASAP and release a new version ASAP.

fbernaly avatar Apr 29 '24 16:04 fbernaly

try this

```dart
Future<bool> getImageAndDetectFaces(XFile imageFile) async {
  try {
    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      return false;
    }

    double screenWidth = MediaQuery.of(context).size.width;
    double screenHeight = MediaQuery.of(context).size.height;
    final radius = screenWidth * 0.35;
    Rect rectOverlay = Rect.fromLTRB(
      screenWidth / 2 - radius,
      screenHeight / 3.5 - radius,
      screenWidth / 2 + radius,
      screenHeight / 2.5 + radius,
    );

    // Reject the capture if any detected face lies entirely outside
    // the on-screen overlay.
    for (Face face in faces) {
      final Rect boundingBox = face.boundingBox;
      if (boundingBox.bottom < rectOverlay.top ||
          boundingBox.top > rectOverlay.bottom ||
          boundingBox.right < rectOverlay.left ||
          boundingBox.left > rectOverlay.right) {
        return false;
      }
    }

    return true;
  } catch (e) {
    return false;
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;

  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      return [];
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  List<Face> faces = await faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    // Bake the EXIF orientation into the pixel data before handing the
    // file to ML Kit.
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

Don't forget to import this:

```dart
import 'package:image/image.dart' as imglib;
```
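For context, the workaround above can be wired into a capture flow roughly like this. This is a sketch only: `_cameraController` (from the camera plugin) and the button handler name are assumptions, not part of the snippet above:

```dart
// Sketch: call the workaround after capturing with the camera plugin.
// `_cameraController` is assumed to be an initialized CameraController.
Future<void> onCapturePressed() async {
  final XFile picture = await _cameraController.takePicture();

  // getImageAndDetectFaces bakes orientation on iOS, runs detection,
  // and checks the face against the overlay.
  final bool faceInOverlay = await getImageAndDetectFaces(picture);
  if (!faceInOverlay) {
    print('No face detected inside the overlay, please retake');
  }
}
```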

itzmail avatar May 17 '24 02:05 itzmail

Thanks. It's working for me.

CoderJava avatar May 30 '24 17:05 CoderJava

Thank you, this does work for me

tsukifell avatar Sep 04 '24 04:09 tsukifell

Thank you, this works on iOS @itzmail

```dart
InputImage inputImage;
if (Platform.isIOS) {
  final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
  if (iosImageProcessed == null) {
    return [];
  }
  inputImage = InputImage.fromFilePath(iosImageProcessed.path);
} else {
  inputImage = InputImage.fromFilePath(path);
}
print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

List<Face> faces = await faceDetector.processImage(inputImage);
print('Found ${faces.length} faces for picked file');
return faces;
```

husnain067 avatar Sep 12 '24 21:09 husnain067

Thanks @itzmail, this worked like magic and made face detection very smooth.

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
    imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}

akwa-peter avatar Oct 29 '24 14:10 akwa-peter

@giordy16 This solution worked for me. I'll leave the solution here in case someone else has the same issue! thanks a lot for mentioning this issue.

```dart
import 'dart:io';

import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image/image.dart' as imglib;
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

Future<List<Face>?> getImageAndDetectFaces() async {
  try {
    final XFile? imageFile = await _imagePicker.pickImage(
      source: ImageSource.camera,
      preferredCameraDevice: CameraDevice.front,
    );
    if (imageFile == null) return null;

    if (Platform.isIOS) {
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      throw EvomException('No face was detected in the image!');
    }
    return faces;
  } catch (e) {
    throw Exception('$e');
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;

  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      throw EvomException('Image not found!');
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  List<Face> faces = await _faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());
    if (capturedImage == null) {
      return null;
    }

    // Bake the EXIF orientation into the pixel data so ML Kit reads the
    // file in the correct orientation.
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    final imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

The above solution has been incredibly helpful in resolving the zoomed-in Front Camera selfie issue on iOS devices and improving overall face detection accuracy. I've implemented this approach with some modifications to fit my specific use case, and I wanted to share how it benefited my project in case it helps others:

  1. Zoomed-out Selfies: The bakeImageOrientation function effectively addressed the iOS front camera's default zoom-in issue, resulting in properly framed selfies.

  2. Improved Face Detection: By ensuring correct image orientation, the face detection accuracy significantly improved for both selfies and uploaded images.

  3. Consistency in Image Processing: I applied the same image processing technique to both selfie capture and image uploads from the gallery. This consistency was crucial for our facial recognition feature, where we compare uploaded photos with the user's selfie to ensure user uploads authentic pictures.

Here's a snippet of how I achieved this:

Future<XFile> _processIOSImage(XFile pickedFile) async {
  // ... [Same implementation of bakeImageOrientation] ...
}

// In selfie capture
if (Platform.isIOS && byCamera) {
  file = await _processIOSImage(file);
}

// In image upload from gallery
if (Platform.isIOS) {
  file = await _processIOSImage(file);
}

roman-khattak avatar Nov 18 '24 11:11 roman-khattak

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] avatar Feb 11 '25 12:02 github-actions[bot]

When the front camera is in zoom-in mode, the picture has dimensions of 2316×3088 and my face is not detected. When the camera is in zoom-out mode, the picture has dimensions of 3024×4032 and my face is detected.

When you set those dimensions in the ImagePicker, it will work.
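If that suggestion refers to image_picker's `maxWidth`/`maxHeight` parameters, a sketch would look like the following. Note this is an interpretation of the comment above: those parameters cap the output dimensions rather than control the camera's zoom, and the 3024/4032 values are simply the working resolution reported earlier in this thread:

```dart
import 'package:image_picker/image_picker.dart';

// Sketch: request the zoomed-out dimensions measured earlier in the thread.
Future<XFile?> pickFrontCameraSelfie() {
  return ImagePicker().pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    maxWidth: 3024,
    maxHeight: 4032,
  );
}
```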

lianhdez95 avatar Apr 15 '25 06:04 lianhdez95

Hi Google, can you please incorporate this into the plugin? We are lucky that someone has shared the enhancement needed to make ML Kit face detection work with the front camera on iOS. https://github.com/flutter-ml/google_ml_kit_flutter/issues/570#issuecomment-2116534436

billylo1 avatar Sep 02 '25 00:09 billylo1