Save real-time detected face (Track Faces) images using the 'android-vision' library

【Posted】2018-05-30 02:44:35

【Question】

For my university thesis, I need an Android app that can detect and recognize faces in real time. I have read up on the 'android-vision' library and tested the sample code:

https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker/app/src/main/java/com/google/android/gms/samples/vision/face/facetracker.

Modified code:

package com.google.android.gms.samples.vision.face.facetracker;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.AsyncTask;
import android.os.Environment;
import android.util.Log;
import android.widget.Toast;

import com.google.android.gms.samples.vision.face.facetracker.ui.camera.GraphicOverlay;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.net.Socket;
import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * Graphic instance for rendering face position, orientation, and landmarks within an associated
 * graphic overlay view.
 */
class FaceGraphic extends GraphicOverlay.Graphic {

    private static final float FACE_POSITION_RADIUS = 10.0f;
    private static final float ID_TEXT_SIZE = 40.0f;
    private static final float ID_Y_OFFSET = 50.0f;
    private static final float ID_X_OFFSET = -50.0f;
    private static final float BOX_STROKE_WIDTH = 5.0f;

    public Canvas canvas1;
    public Face face;
    int i = 0;
    int flag = 0;

    private static final int COLOR_CHOICES[] = {
        Color.BLUE,
        Color.CYAN,
        Color.GREEN,
        Color.MAGENTA,
        Color.RED,
        Color.WHITE,
        Color.YELLOW
    };
    private static int mCurrentColorIndex = 0;

    private Paint mFacePositionPaint;
    private Paint mIdPaint;
    private Paint mBoxPaint;

    private volatile Face mFace;
    private int mFaceId;
    private float mFaceHappiness;
    public Bitmap myBitmap;

    FaceGraphic(GraphicOverlay overlay) {
        super(overlay);

        mCurrentColorIndex = (mCurrentColorIndex + 1) % COLOR_CHOICES.length;
        final int selectedColor = COLOR_CHOICES[mCurrentColorIndex];

        mFacePositionPaint = new Paint();
        mFacePositionPaint.setColor(selectedColor);

        mIdPaint = new Paint();
        mIdPaint.setColor(selectedColor);
        mIdPaint.setTextSize(ID_TEXT_SIZE);

        mBoxPaint = new Paint();
        mBoxPaint.setColor(selectedColor);
        mBoxPaint.setStyle(Paint.Style.STROKE);
        mBoxPaint.setStrokeWidth(BOX_STROKE_WIDTH);
    }

    void setId(int id) {
        mFaceId = id;
        flag = 1;
    }

    /**
     * Updates the face instance from the detection of the most recent frame.  Invalidates the
     * relevant portions of the overlay to trigger a redraw.
     */
    void updateFace(Face face) {
        mFace = face;
        postInvalidate();
    }

    /**
     * Draws the face annotations for position on the supplied canvas.
     */
    @Override
    public void draw(Canvas canvas) {
        face = mFace;
        if (face == null) {
            return;
        }

        // Draws a circle at the position of the detected face, with the face's track id below.
        float x = translateX(face.getPosition().x + face.getWidth() / 2);
        float y = translateY(face.getPosition().y + face.getHeight() / 2);
        // canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
        canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
        // canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
        // canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
        // canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET * 2, y - ID_Y_OFFSET * 2, mIdPaint);

        // Draws a bounding box around the face.
        float xOffset = scaleX(face.getWidth() / 2.0f);
        float yOffset = scaleY(face.getHeight() / 2.0f);
        float left = x - xOffset;
        float top = y - yOffset;
        float right = x + xOffset;
        float bottom = y + yOffset;
        canvas.drawRect(left, top, right, bottom, mBoxPaint);

        Log.d("MyTag", "hello " + i);
        i++;

        if (flag == 1) {
            flag = 0;
            canvas1 = canvas;
            // send face image to server for recognition
            new MyAsyncTask().execute("ppppp");
        }
    }

    class MyAsyncTask extends AsyncTask<String, Void, String> {

        private Context context;

        public MyAsyncTask() {
            // TODO Auto-generated constructor stub
            //context = applicationContext;
        }

        @Override
        protected String doInBackground(String... params) {
            try {
                Log.d("MyTag", "face.getWidth() " + face.getWidth());
                Bitmap temp_bitmap = Bitmap.createBitmap((int) face.getWidth(), (int) face.getHeight(), Bitmap.Config.RGB_565);
                // This is the line that throws: the Canvas captured in draw() is
                // hardware-accelerated, and HardwareCanvas does not support setBitmap().
                canvas1.setBitmap(temp_bitmap);
            }
            catch (Exception e) {
                Log.e("MyTag", "I got an error", e);
                e.printStackTrace();
            }
            Log.d("MyTag", "doInBackground");
            return null;
        }

        @Override
        protected void onPostExecute(String result) {
            Log.d("MyTag", "onPostExecute " + result);
            // tv2.setText(s);
        }
    }
}

It gives me this error:

12-16 03:08:00.310 22926-23044/com.google.android.gms.samples.vision.face.facetracker E/MyTag: I got an error
    java.lang.UnsupportedOperationException
        at android.view.HardwareCanvas.setBitmap(HardwareCanvas.java:39)
        at com.google.android.gms.samples.vision.face.facetracker.FaceGraphic$MyAsyncTask.doInBackground(FaceGraphic.java:175)
        at com.google.android.gms.samples.vision.face.facetracker.FaceGraphic$MyAsyncTask.doInBackground(FaceGraphic.java:158)

This code detects faces in real time. For the recognition part I intend to use JavaCV: https://github.com/bytedeco/javacv. If I can capture the face in a Bitmap, I can save it as a .jpg image and then recognize it. Could you give me some suggestions on how to save the detected face? Thank you.
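For reference, once the face pixels are available in a Bitmap, writing them out as a .jpg is the easy part. A minimal sketch, assuming a Context is at hand (FaceSaver and saveAsJpeg are illustrative names, not part of the sample):

import android.content.Context;
import android.graphics.Bitmap;
import android.os.Environment;
import android.util.Log;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class FaceSaver {
    // Compresses the captured face Bitmap to a .jpg under the app's external files dir.
    static File saveAsJpeg(Context context, Bitmap faceBitmap) {
        File dir = context.getExternalFilesDir(Environment.DIRECTORY_PICTURES);
        File file = new File(dir, "face_" + System.currentTimeMillis() + ".jpg");
        try (FileOutputStream out = new FileOutputStream(file)) {
            faceBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
            return file;
        } catch (IOException e) {
            Log.e("MyTag", "Failed to save face image", e);
            return null;
        }
    }
}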

【Comments】

I have the same problem. Did you find a solution?

No, I am still looking for an answer.

Did you find an answer?

【Answer 1】

TL;DR: capture the frame, process it, then save/export it.

From the source of HardwareCanvas:

@Override
public void setBitmap(Bitmap bitmap) {
    throw new UnsupportedOperationException();
}

This means that this Canvas cannot handle the setBitmap(Bitmap bitmap) method.
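For contrast, a Canvas constructed around a Bitmap is software-backed and may be drawn into; the Canvas(Bitmap) constructor exists for exactly this. A minimal illustration (the dimensions are arbitrary); note that even this would only capture whatever is drawn onto it, not the camera image:

// A software canvas that renders into 'target'; unlike the hardware canvas
// passed to draw(), this one supports a Bitmap backing store.
Bitmap target = Bitmap.createBitmap(640, 480, Bitmap.Config.RGB_565);
Canvas softwareCanvas = new Canvas(target);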

There are a couple of problems with your approach.

First: loads of AsyncTasks, many of them useless/redundant.

If you use the com.google.android.gms.vision.* classes, you may be receiving around 30 events per second. By the time an event fires, it is almost guaranteed that the frame you capture is no longer the frame that was evaluated. You are racing against your conditions.

Second: using a Canvas to set a Bitmap.

When using a class, always check its documentation and its ancestors, and finally its implementation.

An ImageView will do what you want: receive a Bitmap, then set it. All the race conditions will be handled by the OS, and redundant updates will simply be dropped on the Main Looper (see the sketch below).
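A minimal sketch of that idea, assuming an ImageView named faceView somewhere in the activity layout (the name is illustrative):

// Push the Bitmap to the UI thread; View.post() queues the Runnable on the
// Main Looper, and each new frame's call simply supersedes the previous one.
void showFace(final ImageView faceView, final Bitmap faceBitmap) {
    faceView.post(new Runnable() {
        @Override
        public void run() {
            faceView.setImageBitmap(faceBitmap);
        }
    });
}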

Finally

If what you actually need is "take a picture when someone smiles with their eyes closed", then you need to invert your logic. Use the source to produce frames, then process each Frame, and if it matches your conditions, save it; see the sketch after this paragraph.
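As an illustration of that inverted flow (a sketch under assumptions, not the codelab's code): one way with this library is to wrap the real FaceDetector in a delegating Detector, so every camera Frame passes through your code before the Tracker sees it. SavingDetector is a made-up name; Detector, Frame and YuvImage are real APIs:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;

// Wraps the real detector; detect() sees every Frame before the Tracker does.
class SavingDetector extends Detector<Face> {
    private final Detector<Face> delegate;

    SavingDetector(Detector<Face> delegate) {
        this.delegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        SparseArray<Face> faces = delegate.detect(frame);
        if (faces.size() > 0) {
            // Camera preview frames arrive as NV21; YuvImage compresses NV21 to JPEG.
            Frame.Metadata meta = frame.getMetadata();
            YuvImage yuv = new YuvImage(frame.getGrayscaleImageData().array(),
                    ImageFormat.NV21, meta.getWidth(), meta.getHeight(), null);
            ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, meta.getWidth(), meta.getHeight()), 90, jpeg);
            // jpeg.toByteArray() now holds a JPEG of the whole frame; crop it with the
            // face's bounding box and write it to disk on a worker thread.
        }
        return faces;
    }

    @Override
    public boolean isOperational() {
        return delegate.isOperational();
    }
}

The CameraSource would then be built around new SavingDetector(faceDetector) instead of the raw detector, saving only the frames that match your conditions.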

This codelabs project does almost what you want, and it explains the details very well.

【Discussion】
