Android | Super simple HMS ML Kit integration to snapshot the largest face's smile

Posted May 25, 2020 · 5 min read

Foreword

If you already have some familiarity with the face detection capability of HMS ML Kit, you have probably started calling the interfaces we provide to write your own app. Some developers have given feedback that it is not clear from the HMS ML Kit documentation how to use the MLMaxSizeFaceTransactor interface. To help everyone understand this interface more deeply and apply it in real scenarios, this article offers a modest example as a starting point; feel free to open your mind and experiment further. If you want to learn about the full, more comprehensive set of features, please visit https://developer.huawei.com/consumer/cn/hms/huawei-mlkit .


Scenario

I believe everyone has experienced traveling over the May Day and National Day holidays. After much effort you finally find a place with few people, but the photos you take come out as one awkward expression after another. I never knew my face could be so expressive. Isn't it tiring? Every time you want to post to WeChat Moments, you have to spend an hour sifting through the hundreds of similar photos taken during the day just to find one worth showing.

To solve this kind of problem, HMS ML Kit provides an interface for tracking the largest face in the camera frame. It identifies the largest face in the image, making it easy to run follow-up operations on that key target as it is tracked. In this article we simply call the MLMaxSizeFaceTransactor interface to implement a feature that snaps a photo when the largest face smiles.
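Conceptually, "the largest face" is just the detected face whose bounding box has the greatest area. As a standalone illustration (plain Java; the Box type and the sample sizes are hypothetical, since the SDK performs this selection internally):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LargestFace {
    // Hypothetical stand-in for a detected face's bounding box.
    static class Box {
        final int width, height;
        Box(int width, int height) { this.width = width; this.height = height; }
        int area() { return width * height; }
    }

    // Pick the face with the largest bounding-box area, which is what
    // MLMaxSizeFaceTransactor tracks for you across frames.
    static Box largest(List<Box> faces) {
        return faces.stream()
                .max(Comparator.comparingInt(Box::area))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<Box> faces = Arrays.asList(new Box(80, 100), new Box(200, 180), new Box(50, 60));
        System.out.println(largest(faces).area()); // prints 36000
    }
}
```

The transactor applies this selection on every frame, so the callbacks in the code later in this article only ever receive the current largest face.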


Preparation before development

Android Studio installation

Installation is very simple: just download and install. Relevant links:
Android Studio official download page: https://developer.android.com/studio
Android Studio installation walkthrough: https://www.cnblogs.com/xiadewang/p/7820377.html

Add the Huawei Maven repository in the project-level build.gradle

Open the Android Studio project-level build.gradle file.


Add the following Maven repository addresses:

buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

Add SDK dependencies to the application-level build.gradle

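The original screenshot with the exact dependency line is not reproduced here. As a sketch, the face detection SDK is typically pulled in as follows (the artifact coordinate follows the HMS ML Kit documentation; the version number is an example and should be replaced with the latest release):

```groovy
dependencies {
    // HMS ML Kit face detection base SDK (version shown is an example)
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
}
```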

Configure automatic model download in the AndroidManifest.xml file

To have the application automatically update to the latest machine learning model on the user's device after it is installed from HUAWEI AppGallery, add the following statement to the application's AndroidManifest.xml file:

<manifest
    ...>
    <application>
        ...
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
        ...
    </application>
</manifest>

Declare camera and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage write permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Key steps in code development

Dynamic permission request

@Override
public void onCreate(Bundle savedInstanceState) {
    ...
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
    ...
}

Create a face recognition detector

The face detector is configured through the configuration class MLFaceAnalyzerSetting:

MLFaceAnalyzerSetting setting =
                new MLFaceAnalyzerSetting.Factory()
                        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                        .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
                        .setMinFaceProportion(0.1f)
                        .setTracingAllowed(true)
                        .create();
// Obtain the analyzer used in the following steps from the factory
this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);

Create an MLMaxSizeFaceTransactor object through MLMaxSizeFaceTransactor.Creator to process the largest detected face. The objectCreateCallback() method is called when the target is first detected, and the objectUpdateCallback() method is called whenever it is updated. In these methods, a box is drawn via the Overlay around the largest detected face, and the MLFaceEmotion obtained from the detection result is used to recognize a smiling expression and trigger the photo.

MLMaxSizeFaceTransactor transactor = new MLMaxSizeFaceTransactor.Creator(analyzer, new MLResultTrailer<MLFace>() {
                @Override
                public void objectCreateCallback(int itemId, MLFace obj) {
                    LiveFaceAnalyseActivity.this.overlay.clear();
                    if (obj == null) {
                        return;
                    }
                    LocalFaceGraphic faceGraphic =
                            new LocalFaceGraphic(LiveFaceAnalyseActivity.this.overlay, obj, LiveFaceAnalyseActivity.this);
                    LiveFaceAnalyseActivity.this.overlay.addGraphic(faceGraphic);
                    MLFaceEmotion emotion = obj.getEmotions();
                    if (emotion.getSmilingProbability() > smilingPossibility) {
                        safeToTakePicture = false;
                        mHandler.sendEmptyMessage(TAKE_PHOTO);
                    }
                }

                @Override
                public void objectUpdateCallback(MLAnalyzer.Result<MLFace> var1, MLFace obj) {
                    LiveFaceAnalyseActivity.this.overlay.clear();
                    if (obj == null) {
                        return;
                    }
                    LocalFaceGraphic faceGraphic =
                            new LocalFaceGraphic(LiveFaceAnalyseActivity.this.overlay, obj, LiveFaceAnalyseActivity.this);
                    LiveFaceAnalyseActivity.this.overlay.addGraphic(faceGraphic);
                    MLFaceEmotion emotion = obj.getEmotions();
                    if (emotion.getSmilingProbability() > smilingPossibility && safeToTakePicture) {
                        safeToTakePicture = false;
                        mHandler.sendEmptyMessage(TAKE_PHOTO);
                    }
                }

                @Override
                public void lostCallback(MLAnalyzer.Result<MLFace> result) {
                    LiveFaceAnalyseActivity.this.overlay.clear();
                }

                @Override
                public void completeCallback() {
                    LiveFaceAnalyseActivity.this.overlay.clear();
                }
            }).create();
this.analyzer.setTransactor(transactor);
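The safeToTakePicture flag above acts as a simple debounce: once a smile triggers a photo, no further photos are taken until the flag is re-armed (for example, after the capture has been saved). That logic can be sketched in isolation (plain Java; class and method names are hypothetical):

```java
public class SmileTrigger {
    // Threshold above which a face counts as smiling
    private final float smilingPossibility;
    // Debounce flag: true only while we are allowed to take a photo
    private boolean safeToTakePicture = true;

    public SmileTrigger(float smilingPossibility) {
        this.smilingPossibility = smilingPossibility;
    }

    // Returns true exactly once per armed period when a smile is detected;
    // the caller would send TAKE_PHOTO on a true result.
    public boolean onFrame(float smileProbability) {
        if (smileProbability > smilingPossibility && safeToTakePicture) {
            safeToTakePicture = false; // disarm until the photo is saved
            return true;
        }
        return false;
    }

    // Called after the captured photo has been written to storage
    public void onPhotoSaved() {
        safeToTakePicture = true;
    }
}
```

With a threshold of 0.8, a run of frames with smile probabilities 0.2, 0.9, 0.95 fires exactly one photo (at the 0.9 frame); calling onPhotoSaved() re-arms the trigger for the next smile.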

Use LensEngine.Creator to create a LensEngine instance for face detection on the camera video stream

this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
                .setLensType(this.lensType)
                .applyDisplayDimension(640, 480)
                .applyFps(25.0f)
                .enableAutomaticFocus(true)
                .create();

Start camera preview for face detection

this.mPreview.start(this.mLensEngine, this.overlay);

Previous post: Top 3 common Quick Service review minefields that will sink your audit!
Content source: https://developer.huawei.com/consumer/cn/forum/topicview?tid=0201256372685820478&fid=18
Author: littlewhite