React-native-firebase: MLKit Vision faceDetectorProcessImage() is not detecting all classifications

Created on 30 Apr 2020 · 10 comments · Source: invertase/react-native-firebase

Issue

MLKit Vision library: on iOS, faceDetectorProcessImage() identifies only boundingBox, headEulerAngleZ, and headEulerAngleY.

It returns -1 for rightEyeOpenProbability, leftEyeOpenProbability, and smilingProbability, and only empty arrays for landmarks and faceContours.

The same code works on Android, tested with images taken from both the front and back cameras.

It is not working on iOS even though the faceDetectorOptions are passed correctly; see the code snippet below.

Project Files

takePicture = async () => {
  console.log('takePicture');
  if (this.camera) {
    const options = {
      quality: 0.5,
      base64: true,
      fixOrientation: true,
      orientation: RNCamera.Constants.ORIENTATION_UP,
      forceUpOrientation: true,
    };
    const data = await this.camera.takePictureAsync(options);
    console.log(data.uri);
    CameraRoll.saveToCameraRoll(data.uri);
    console.log('photo saved to Camera Roll');
    this.processFaceDetection(data.uri).then(() => console.log('Finished processing file.'));
  }
};

async processFaceDetection(localPath) {
  console.log('Face Detection to be processed from file == ' + localPath);

  const detectorOptions = {
    classificationMode: VisionFaceDetectorClassificationMode.ALL_CLASSIFICATIONS,
    contourMode: VisionFaceDetectorContourMode.ALL_CONTOURS,
    landmarkMode: VisionFaceDetectorLandmarkMode.ALL_LANDMARKS,
    performanceMode: VisionFaceDetectorPerformanceMode.ACCURATE,
  };

  const faces = await vision().faceDetectorProcessImage(localPath, detectorOptions);

  console.log('Faces detected from image == ' + JSON.stringify(faces));

  faces.forEach(face => {
    console.log('Head rotation on Y axis: ', face.headEulerAngleY);
    console.log('Head rotation on Z axis: ', face.headEulerAngleZ);

    console.log('Left eye open probability: ', face.leftEyeOpenProbability);
    console.log('Right eye open probability: ', face.rightEyeOpenProbability);
    console.log('Smiling probability: ', face.smilingProbability);

    face.faceContours.forEach(contour => {
      if (contour.type === VisionFaceContourType.FACE) {
        console.log('Face outline points: ', contour.points);
      }
    });
  });
}

iOS


`ios/Podfile`:

- [ ] I'm not using Pods
- [x] I'm using Pods and my Podfile looks like:

target 'RecognizeText' do
  # Pods for RecognizeText
  pod 'FBLazyVector', :path => "../node_modules/react-native/Libraries/FBLazyVector"
  pod 'FBReactNativeSpec', :path => "../node_modules/react-native/Libraries/FBReactNativeSpec"
  pod 'RCTRequired', :path => "../node_modules/react-native/Libraries/RCTRequired"
  pod 'RCTTypeSafety', :path => "../node_modules/react-native/Libraries/TypeSafety"
  pod 'React', :path => '../node_modules/react-native/'
  pod 'React-Core', :path => '../node_modules/react-native/'
  pod 'React-CoreModules', :path => '../node_modules/react-native/React/CoreModules'
  pod 'React-Core/DevSupport', :path => '../node_modules/react-native/'
  pod 'React-RCTActionSheet', :path => '../node_modules/react-native/Libraries/ActionSheetIOS'
  pod 'React-RCTAnimation', :path => '../node_modules/react-native/Libraries/NativeAnimation'
  pod 'React-RCTBlob', :path => '../node_modules/react-native/Libraries/Blob'
  pod 'React-RCTImage', :path => '../node_modules/react-native/Libraries/Image'
  pod 'React-RCTLinking', :path => '../node_modules/react-native/Libraries/LinkingIOS'
  pod 'React-RCTNetwork', :path => '../node_modules/react-native/Libraries/Network'
  pod 'React-RCTSettings', :path => '../node_modules/react-native/Libraries/Settings'
  pod 'React-RCTText', :path => '../node_modules/react-native/Libraries/Text'
  pod 'React-RCTVibration', :path => '../node_modules/react-native/Libraries/Vibration'
  pod 'React-Core/RCTWebSocket', :path => '../node_modules/react-native/'
  pod 'React-cxxreact', :path => '../node_modules/react-native/ReactCommon/cxxreact'
  pod 'React-jsi', :path => '../node_modules/react-native/ReactCommon/jsi'
  pod 'React-jsiexecutor', :path => '../node_modules/react-native/ReactCommon/jsiexecutor'
  pod 'React-jsinspector', :path => '../node_modules/react-native/ReactCommon/jsinspector'
  pod 'ReactCommon/callinvoker', :path => "../node_modules/react-native/ReactCommon"
  pod 'ReactCommon/turbomodule/core', :path => "../node_modules/react-native/ReactCommon"
  pod 'Yoga', :path => '../node_modules/react-native/ReactCommon/yoga', :modular_headers => true
  pod 'DoubleConversion', :podspec => '../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec'
  pod 'glog', :podspec => '../node_modules/react-native/third-party-podspecs/glog.podspec'
  pod 'Folly', :podspec => '../node_modules/react-native/third-party-podspecs/Folly.podspec'
  pod 'react-native-cameraroll', :path => '../node_modules/@react-native-community/cameraroll'

  target 'RecognizeTextTests' do
    inherit! :complete
    # Pods for testing
  end

  use_native_modules!

  # Enables Flipper.
  #
  # Note that if you have use_frameworks! enabled, Flipper will not work and
  # you should disable these next few lines.
  add_flipper_pods!
  post_install do |installer|
    flipper_post_install(installer)
  end
end


Environment


**`react-native info` output:**

info Fetching system and libraries information...
System:
    OS: macOS 10.15.4
    CPU: (4) x64 Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
    Memory: 114.88 MB / 8.00 GB
    Shell: 3.2.57 - /bin/bash
  Binaries:
    Node: 13.13.0 - /usr/local/bin/node
    Yarn: 1.22.4 - /usr/local/bin/yarn
    npm: 6.14.4 - /usr/local/bin/npm
    Watchman: 4.9.0 - /usr/local/bin/watchman
  Managers:
    CocoaPods: 1.9.1 - /usr/local/bin/pod
  SDKs:
    iOS SDK:
      Platforms: iOS 13.2, DriverKit 19.0, macOS 10.15, tvOS 13.2, watchOS 6.1
    Android SDK:
      API Levels: 26, 28, 29
      Build Tools: 28.0.3, 29.0.0
      System Images: android-28 | Google APIs Intel x86 Atom
      Android NDK: Not Found
  IDEs:
    Android Studio: 3.6 AI-192.7142.36.36.6392135
    Xcode: 11.3.1/11C504 - /usr/bin/xcodebuild
  Languages:
    Java: 1.8.0_212 - /usr/bin/javac
    Python: 2.7.16 - /usr/bin/python
  npmPackages:
    @react-native-community/cli: Not Found
    react: 16.11.0 => 16.11.0 
    react-native: 0.62.2 => 0.62.2 
  npmGlobalPackages:
    *react-native*: Not Found
- **Platform that you're experiencing the issue on**:
  - [x] iOS
  - [ ] Android
  - [ ] **iOS** but have not tested behavior on Android
  - [ ] **Android** but have not tested behavior on iOS
  - [ ] Both
- **`Firebase` module(s) you're using that has the issue:**
  - `e.g. Instance ID`
- **Are you using `TypeScript`?**
  - `N`

Labels: iOS, ML, Stale

All 10 comments

Are you able to debug in Xcode? Can you inspect the response when it comes back from Firebase, before it is sent back to JS?

Hi @Ehesp, I have debugged this issue in Xcode with breakpoints in RNFBMLVisionFaceDetectorModule.m.

Even though I am passing the faceDetectorOptions as a parameter to the faceDetectorProcessImage() method, they are not being recognized in RNFBMLVisionFaceDetectorModule.m. Values such as classificationMode and contourMode arrive in the native iOS module as "0" even though I am sending "2" (VisionFaceDetectorClassificationMode.ALL_CLASSIFICATIONS, VisionFaceDetectorContourMode.ALL_CONTOURS) from React Native.

Here is how I am sending faceDetectorOptions from React Native. Please review whether this data format is correct; I believe it follows https://rnfirebase.io/reference/ml-vision/visionfacedetectoroptions.

const detectorOptions = {
  classificationMode: VisionFaceDetectorClassificationMode.ALL_CLASSIFICATIONS,
  contourMode: VisionFaceDetectorContourMode.ALL_CONTOURS,
  landmarkMode: VisionFaceDetectorLandmarkMode.ALL_LANDMARKS,
  performanceMode: VisionFaceDetectorPerformanceMode.ACCURATE
}
const faces = await vision().faceDetectorProcessImage(localPath, detectorOptions);

In Xcode, faceDetectorOptions is declared as an NSDictionary.

RCT_EXPORT_METHOD(faceDetectorProcessImage:
    (FIRApp *) firebaseApp
    : (NSString *)filePath
    : (NSDictionary *)faceDetectorOptions
    : (RCTPromiseResolveBlock)resolve
    : (RCTPromiseRejectBlock)reject
) {
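
To confirm what the bridge actually delivers, one could temporarily log the dictionary inside this method body (a debugging sketch of my own, not part of the module). Each option should arrive as an NSNumber wrapping the JS number:

    NSLog(@"faceDetectorOptions = %@", faceDetectorOptions);
    // Prints the bridged value and its class (an NSNumber subclass), e.g. "2 (__NSCFNumber)":
    NSLog(@"classificationMode = %@ (%@)",
          faceDetectorOptions[@"classificationMode"],
          [faceDetectorOptions[@"classificationMode"] class]);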

The native library code reads the value as below; in the debugger it comes through as "0":

NSInteger *classificationMode = [faceDetectorOptions[@"classificationMode"] pointerValue];
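
For contrast, a minimal sketch (illustrative only, not the module code) of how a bridged NSNumber behaves under each accessor:

    // React Native bridges `classificationMode: 2` into the dictionary as an NSNumber.
    NSDictionary *opts = @{@"classificationMode": @2};

    // integerValue reads the scalar the NSNumber stores, so this yields 2.
    NSInteger mode = [opts[@"classificationMode"] integerValue];

    // pointerValue is an NSValue accessor intended for values created with
    // +valueWithPointer:, so on a plain NSNumber it does not recover the
    // stored integer (in this thread it came through as 0).
    NSInteger *misread = [opts[@"classificationMode"] pointerValue];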

Appreciate your help in resolving this.

Thanks for your help. I'll get it assigned so it can be investigated.

Hey @raymishtech, I'm running this on my device using your code snippet. Here are the values in Xcode:
[Screenshot 2020-05-05 at 09:59:42]

The logs from JS showed the results of the face detection coming through as expected:
[Screenshot 2020-05-05 at 10:00:03]

I'm afraid I can't reproduce your issue, so I'm not sure what you're doing wrong. Have you added this property to your firebase.json file?

@russellwheatley Yes, the ml_vision_face_model property is enabled. The problem is that the detectorOptions values passed from React Native JS are not reaching the native iOS module correctly.

It recognizes the minFaceSize property, which is of type double: I send 0.2 and the iOS debugger receives 0.2. But the same does not work for any of the NSInteger-typed options such as classificationMode, contourMode, landmarkMode, and performanceMode; they all come through empty, as shown below.

[Screenshot 2020-05-05 at 10:26:17 PM]

[Screenshot 2020-05-05 at 10:29:47 PM]

Below is the options object that I tried passing:

const detectorOptions = {
  // classificationMode: VisionFaceDetectorClassificationMode.ALL_CLASSIFICATIONS,
  classificationMode: 1,
  // contourMode: VisionFaceDetectorContourMode.ALL_CONTOURS,
  contourMode: 2,
  landmarkMode: VisionFaceDetectorLandmarkMode.ALL_LANDMARKS,
  performanceMode: VisionFaceDetectorPerformanceMode.ACCURATE,
  minFaceSize: 0.2
};
console.log(detectorOptions);
const faces = await vision().faceDetectorProcessImage(localPath, detectorOptions);

Where am I going wrong? This is very strange. Is there any known issue with passing integer or long values into an NSDictionary? The same works on Android.

Below is the relevant snippet from package.json. Could this be related to how the latest version of React Native handles data type conversion?

"@react-native-firebase/app": "^6.7.1", "@react-native-firebase/ml-natural-language": "^6.7.1", "@react-native-firebase/ml-vision": "^6.7.1", "react": "16.11.0", "react-native": "0.62.2",

I am having the same issue with ^6.7.1 on iOS. After some debugging, the problem seems to be in how the faceDetectorOptions JSON/dictionary is converted into the native FIRVisionFaceDetectorOptions in RNFBMLVisionFaceDetectorModule.m.

The use of pointerValue here appears to be incorrect.

    NSInteger *classificationMode = [faceDetectorOptions[@"classificationMode"] pointerValue];
    if (classificationMode == (NSInteger *) 1) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeNone;
    } else if (classificationMode == (NSInteger *) 2) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeAll;
    }

    NSInteger *contourMode = [faceDetectorOptions[@"contourMode"] pointerValue];
    if (contourMode == (NSInteger *) 1) {
      options.contourMode = FIRVisionFaceDetectorContourModeNone;
    } else if (contourMode == (NSInteger *) 2) {
      options.contourMode = FIRVisionFaceDetectorContourModeAll;
    }

    NSInteger *landmarkMode = [faceDetectorOptions[@"landmarkMode"] pointerValue];
    if (landmarkMode == (NSInteger *) 1) {
      options.landmarkMode = FIRVisionFaceDetectorLandmarkModeNone;
    } else if (landmarkMode == (NSInteger *) 2) {
      options.landmarkMode = FIRVisionFaceDetectorLandmarkModeAll;
    }

    NSInteger *performanceMode = [faceDetectorOptions[@"performanceMode"] pointerValue];
    if (performanceMode == (NSInteger *) 1) {
      options.performanceMode = FIRVisionFaceDetectorPerformanceModeFast;
    } else if (performanceMode == (NSInteger *) 2) {
      options.performanceMode = FIRVisionFaceDetectorPerformanceModeAccurate;
    }

Simply changing pointerValue in the code above to integerValue fixes it for me. The extra casts to (NSInteger *) appear to have no effect, since the comparisons work out the same either way, but they could probably be removed as well.
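
For illustration, a cleaner form of the same fix would read each option as a plain scalar NSInteger and compare values directly, removing both pointerValue and the casts (a sketch, not the shipped module code; shown for classificationMode, with the other three modes following the same pattern):

    NSInteger classificationMode = [faceDetectorOptions[@"classificationMode"] integerValue];
    if (classificationMode == 1) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeNone;
    } else if (classificationMode == 2) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeAll;
    }
    // ...and likewise for contourMode, landmarkMode, and performanceMode.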

Hi, I checked this issue because it seems related to the one I reported earlier about faceDetectorOptions (https://github.com/invertase/react-native-firebase/issues/3402). When I tried the fix that @ahlun mentioned above, my issue was solved. Thank you.

@ahlun Thank you. Yes, this fixes the issue. I replaced pointerValue with integerValue and added a cast to (NSInteger *). Now I am getting all the classifications, landmarks, etc.

Below is the code snippet that was updated in RNFBMLVision/RNFBMLVisionFaceDetectorModule.m:

    // NSInteger *classificationMode = [faceDetectorOptions[@"classificationMode"] pointerValue];
    NSInteger *classificationMode = (NSInteger *)[faceDetectorOptions[@"classificationMode"] integerValue];
    if (classificationMode == (NSInteger *) 1) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeNone;
    } else if (classificationMode == (NSInteger *) 2) {
      options.classificationMode = FIRVisionFaceDetectorClassificationModeAll;
    }

    // NSInteger *contourMode = [faceDetectorOptions[@"contourMode"] pointerValue];
    NSInteger *contourMode = (NSInteger *)[faceDetectorOptions[@"contourMode"] integerValue];
    if (contourMode == (NSInteger *) 1) {
      options.contourMode = FIRVisionFaceDetectorContourModeNone;
    } else if (contourMode == (NSInteger *) 2) {
      options.contourMode = FIRVisionFaceDetectorContourModeAll;
    }

    // NSInteger *landmarkMode = [faceDetectorOptions[@"landmarkMode"] pointerValue];
    NSInteger *landmarkMode = (NSInteger *)[faceDetectorOptions[@"landmarkMode"] integerValue];
    if (landmarkMode == (NSInteger *) 1) {
      options.landmarkMode = FIRVisionFaceDetectorLandmarkModeNone;
    } else if (landmarkMode == (NSInteger *) 2) {
      options.landmarkMode = FIRVisionFaceDetectorLandmarkModeAll;
    }

    // NSInteger *performanceMode = [faceDetectorOptions[@"performanceMode"] pointerValue];
    NSInteger *performanceMode = (NSInteger *)[faceDetectorOptions[@"performanceMode"] integerValue];
    if (performanceMode == (NSInteger *) 1) {
      options.performanceMode = FIRVisionFaceDetectorPerformanceModeFast;
    } else if (performanceMode == (NSInteger *) 2) {
      options.performanceMode = FIRVisionFaceDetectorPerformanceModeAccurate;
    }

    options.minFaceSize = (CGFloat) [faceDetectorOptions[@"minFaceSize"] doubleValue];

@Ehesp @russellwheatley - Can this be fixed in the library?

Hey @ahlun, great work debugging. We will get round to checking and fixing the problem at some point, although there is nothing stopping anyone from submitting a PR should they wish :)

Hello 👋, to help manage issues we automatically close stale issues.
This issue has been automatically marked as stale because it has not had activity for quite some time. Has this issue been fixed, or does it still require the community's attention?

This issue will be closed in 15 days if no further activity occurs.
Thank you for your contributions.
