Why is Google's ML Face Detection Kit crashing on .process()?
Posted: 2021-07-16 22:07:25

Question:
I am building a face detector app that detects faces in real time and identifies facial landmarks. Landmark detection on still photos works fine, but my real-time face detection does not work at all.

I followed the instructions in Google's ML Kit guide (https://developers.google.com/ml-kit/vision/face-detection/android), but I am really struggling to get real-time face detection working.

In the debugger, the code fails at faceDetector.process(image).addOnSuccessListener() and goes into onFailure() instead.

Here is the code for my real-time face detection section (I have commented some parts and trimmed the redundancy):
@Override
// process() is called frame by frame during real-time face detection
public void process(@NonNull Frame frame) {
    int width = frame.getSize().getWidth();
    int height = frame.getSize().getHeight();
    InputImage image = InputImage.fromByteArray(
            frame.getData(),
            /* image width */ width,
            /* image height */ height,
            // if the camera is facing front, rotate the image 90 degrees, else 270
            (cameraFacing != Facing.FRONT) ? 90 : 270,
            InputImage.IMAGE_FORMAT_YUV_420_888 // or IMAGE_FORMAT_YV12
    );
    FaceDetectorOptions faceDetectorOptions = new FaceDetectorOptions.Builder()
            // contour mode detects all facial contours in real time
            .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
            .build();
    FaceDetector faceDetector = FaceDetection.getClient(faceDetectorOptions);
    faceDetector.process(image).addOnSuccessListener(new OnSuccessListener<List<Face>>() {
        @Override
        public void onSuccess(@NonNull List<Face> faces) {
            imageView.setImageBitmap(null);
            Bitmap bitmap = Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bitmap);
            Paint dotPaint = new Paint();
            dotPaint.setColor(Color.YELLOW);
            dotPaint.setStyle(Paint.Style.FILL);
            dotPaint.setStrokeWidth(6f);
            Paint linePaint = new Paint();
            linePaint.setColor(Color.GREEN);
            linePaint.setStyle(Paint.Style.STROKE);
            linePaint.setStrokeWidth(4f);
            // loop through each face and draw its contours
            for (Face face : faces) {
                List<PointF> faceContours = face.getContour(FaceContour.FACE).getPoints();
                for (int i = 0; i < faceContours.size(); i++) {
                    PointF faceContour = faceContours.get(i);
                    if (i != (faceContours.size() - 1)) {
                        canvas.drawLine(faceContour.x, faceContour.y,
                                faceContours.get(i + 1).x, faceContours.get(i + 1).y, linePaint);
                    } else { // at the last point, draw back to the first point
                        canvas.drawLine(faceContour.x, faceContour.y,
                                faceContours.get(0).x, faceContours.get(0).y, linePaint);
                    }
                    canvas.drawCircle(faceContour.x, faceContour.y, 4f, dotPaint);
                } // end inner loop
                List<PointF> leftEyebrowTopContours = face.getContour(
                        FaceContour.LEFT_EYEBROW_TOP).getPoints();
                for (int i = 0; i < leftEyebrowTopContours.size(); i++) {
                    PointF leftEyebrowTopContour = leftEyebrowTopContours.get(i);
                    if (i != (leftEyebrowTopContours.size() - 1)) {
                        canvas.drawLine(leftEyebrowTopContour.x, leftEyebrowTopContour.y,
                                leftEyebrowTopContours.get(i + 1).x, leftEyebrowTopContours.get(i + 1).y, linePaint);
                    }
                    canvas.drawCircle(leftEyebrowTopContour.x, leftEyebrowTopContour.y, 4f, dotPaint);
                }
            }
        }
    });
}
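One thing worth checking when process() fails for byte-array input is whether the buffer length matches the declared format. For the 4:2:0 YUV layouts that InputImage.fromByteArray accepts (NV21, YV12), the buffer should hold width * height * 3 / 2 bytes. A minimal sketch of such a check — the helper names are mine for illustration, not part of ML Kit:

```java
// Hypothetical sanity check (not part of ML Kit): 4:2:0 YUV formats such as
// NV21 and YV12 store one luma byte per pixel plus half a byte of chroma,
// so a full frame occupies width * height * 3 / 2 bytes.
static int expectedYuv420Bytes(int width, int height) {
    return width * height * 3 / 2;
}

// Returns true only when the frame buffer has exactly the size the
// declared YUV 4:2:0 format implies for the given dimensions.
static boolean frameLooksValid(byte[] data, int width, int height) {
    return data != null && data.length == expectedYuv420Bytes(width, height);
}
```

If the check fails for your frames, the format constant passed to fromByteArray does not match what the camera is actually delivering.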
Side note: I am running this on a Pixel 2 API 29 emulator. Since I am just looping through contours the same way each time, I have omitted the repetitive code above.

Full code for reference:
import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.android.material.bottomsheet.BottomSheetBehavior;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceContour;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;
import com.google.mlkit.vision.face.FaceLandmark;
import com.otaliastudios.cameraview.CameraView;
import com.otaliastudios.cameraview.controls.Facing;
import com.otaliastudios.cameraview.frame.Frame;
import com.otaliastudios.cameraview.frame.FrameProcessor;
import com.theartofdev.edmodo.cropper.CropImage;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
public class MainActivity extends AppCompatActivity implements FrameProcessor {

    private Facing cameraFacing = Facing.FRONT;
    private ImageView imageView;
    private CameraView faceDetectionCameraView;
    private RecyclerView bottomSheetRecyclerView;
    private BottomSheetBehavior bottomSheetBehavior;
    private ArrayList<FaceDetectionModel> faceDetectionModels;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Toolbar toolbar = findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);
        faceDetectionModels = new ArrayList<>();
        bottomSheetBehavior = BottomSheetBehavior.from(findViewById(R.id.bottom_sheet));
        imageView = findViewById(R.id.face_detection_image_view);
        faceDetectionCameraView = findViewById(R.id.face_detection_camera_view);
        Button toggle = findViewById(R.id.face_detection_cam_toggle_button);
        FrameLayout bottomSheetButton = findViewById(R.id.bottom_sheet_button);
        bottomSheetRecyclerView = findViewById(R.id.bottom_sheet_recycler_view);
        faceDetectionCameraView.setFacing(cameraFacing);
        faceDetectionCameraView.setLifecycleOwner(MainActivity.this);
        faceDetectionCameraView.addFrameProcessor(MainActivity.this);
        toggle.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                cameraFacing = (cameraFacing == Facing.FRONT) ? Facing.BACK : Facing.FRONT;
                faceDetectionCameraView.setFacing(cameraFacing);
            }
        });
        bottomSheetButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                CropImage.activity().start(MainActivity.this);
            }
        });
        bottomSheetRecyclerView.setLayoutManager(new LinearLayoutManager(MainActivity.this));
        bottomSheetRecyclerView.setAdapter(new FaceDetectionAdapter(faceDetectionModels, MainActivity.this));
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE) {
            CropImage.ActivityResult result = CropImage.getActivityResult(data);
            if (resultCode == RESULT_OK) {
                Uri imageUri = result.getUri();
                try {
                    analyseImage(MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private void analyseImage(Bitmap bitmap) {
        if (bitmap == null) {
            Toast.makeText(this, "There was an error", Toast.LENGTH_SHORT).show();
            return; // nothing to analyse
        }
        //imageView.setImageBitmap(null);
        faceDetectionModels.clear();
        Objects.requireNonNull(bottomSheetRecyclerView.getAdapter()).notifyDataSetChanged();
        bottomSheetBehavior.setState(BottomSheetBehavior.STATE_COLLAPSED);
        showProgress();
        InputImage firebaseInputImage = InputImage.fromBitmap(bitmap, 0);
        FaceDetectorOptions options = new FaceDetectorOptions.Builder()
                .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
                .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
                .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                .build();
        FaceDetector faceDetector = FaceDetection.getClient(options);
        faceDetector.process(firebaseInputImage)
                .addOnSuccessListener(new OnSuccessListener<List<Face>>() {
                    @Override
                    public void onSuccess(@NonNull List<Face> faces) {
                        Bitmap mutableImage = bitmap.copy(Bitmap.Config.ARGB_8888, true);
                        detectFaces(faces, mutableImage);
                        imageView.setImageBitmap(mutableImage);
                        hideProgress();
                        bottomSheetRecyclerView.getAdapter().notifyDataSetChanged();
                        bottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        Toast.makeText(MainActivity.this, "There was an error", Toast.LENGTH_SHORT).show();
                        hideProgress();
                    }
                });
    }

    private void detectFaces(List<Face> faces, Bitmap bitmap) {
        if (faces == null || bitmap == null) {
            Toast.makeText(this, "There was an error", Toast.LENGTH_SHORT).show();
            return;
        }
        Canvas canvas = new Canvas(bitmap);
        Paint facePaint = new Paint();
        facePaint.setColor(Color.GREEN);
        facePaint.setStyle(Paint.Style.STROKE);
        facePaint.setStrokeWidth(5f);
        Paint faceTextPaint = new Paint();
        faceTextPaint.setColor(Color.BLUE);
        faceTextPaint.setTextSize(30f);
        faceTextPaint.setTypeface(Typeface.SANS_SERIF);
        Paint landmarkPaint = new Paint();
        landmarkPaint.setColor(Color.YELLOW);
        landmarkPaint.setStyle(Paint.Style.FILL);
        landmarkPaint.setStrokeWidth(8f);
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.get(i);
            canvas.drawRect(face.getBoundingBox(), facePaint);
            canvas.drawText("Face" + i,
                    (face.getBoundingBox().centerX()
                            - (face.getBoundingBox().width() >> 1) + 8f),
                    (face.getBoundingBox().centerY()
                            + (face.getBoundingBox().height() >> 1) - 8f), facePaint);
            if (face.getLandmark(FaceLandmark.LEFT_EYE) != null) {
                FaceLandmark leftEye = face.getLandmark(FaceLandmark.LEFT_EYE);
                // draw a small circle on the left eye
                canvas.drawCircle(leftEye.getPosition().x, leftEye.getPosition().y, 8f, landmarkPaint);
            }
            if (face.getLandmark(FaceLandmark.RIGHT_EYE) != null) {
                FaceLandmark rightEye = face.getLandmark(FaceLandmark.RIGHT_EYE);
                // draw a small circle on the right eye
                canvas.drawCircle(rightEye.getPosition().x, rightEye.getPosition().y, 8f, landmarkPaint);
            }
            if (face.getLandmark(FaceLandmark.NOSE_BASE) != null) {
                FaceLandmark noseBase = face.getLandmark(FaceLandmark.NOSE_BASE);
                // draw a small circle on the nose base
                canvas.drawCircle(noseBase.getPosition().x, noseBase.getPosition().y, 8f, landmarkPaint);
            }
            if (face.getLandmark(FaceLandmark.LEFT_EAR) != null) {
                FaceLandmark leftEar = face.getLandmark(FaceLandmark.LEFT_EAR);
                // draw a small circle on the left ear
                canvas.drawCircle(leftEar.getPosition().x, leftEar.getPosition().y, 8f, landmarkPaint);
            }
            if (face.getLandmark(FaceLandmark.RIGHT_EAR) != null) {
                FaceLandmark rightEar = face.getLandmark(FaceLandmark.RIGHT_EAR);
                // draw a small circle on the right ear
                canvas.drawCircle(rightEar.getPosition().x, rightEar.getPosition().y, 8f, landmarkPaint);
            }
            if (face.getLandmark(FaceLandmark.MOUTH_LEFT) != null
                    && face.getLandmark(FaceLandmark.MOUTH_BOTTOM) != null
                    && face.getLandmark(FaceLandmark.MOUTH_RIGHT) != null) {
                FaceLandmark mouthLeft = face.getLandmark(FaceLandmark.MOUTH_LEFT);
                FaceLandmark mouthRight = face.getLandmark(FaceLandmark.MOUTH_RIGHT);
                FaceLandmark mouthBottom = face.getLandmark(FaceLandmark.MOUTH_BOTTOM);
                // connect the three mouth landmarks with two lines
                canvas.drawLine(mouthLeft.getPosition().x, mouthLeft.getPosition().y,
                        mouthBottom.getPosition().x, mouthBottom.getPosition().y, landmarkPaint);
                canvas.drawLine(mouthBottom.getPosition().x, mouthBottom.getPosition().y,
                        mouthRight.getPosition().x, mouthRight.getPosition().y, landmarkPaint);
            }
            faceDetectionModels.add(new FaceDetectionModel(i, "Smiling probability "
                    + face.getSmilingProbability()));
            faceDetectionModels.add(new FaceDetectionModel(i, "Left eye open probability "
                    + face.getLeftEyeOpenProbability()));
            faceDetectionModels.add(new FaceDetectionModel(i, "Right eye open probability "
                    + face.getRightEyeOpenProbability()));
        }
    }

    private void showProgress() {
        findViewById(R.id.bottom_sheet_button_img).setVisibility(View.GONE);
        findViewById(R.id.bottom_sheet_butotn_progress_bar).setVisibility(View.VISIBLE);
    }

    private void hideProgress() {
        findViewById(R.id.bottom_sheet_button_img).setVisibility(View.VISIBLE);
        findViewById(R.id.bottom_sheet_butotn_progress_bar).setVisibility(View.GONE);
    }

    // draws a contour as a polyline with a dot at each point; the original
    // per-contour loops all duplicated this logic, and their else-branch
    // "return" aborted the whole callback at the first contour's last point
    private void drawContour(Canvas canvas, List<PointF> points, Paint linePaint, Paint dotPaint) {
        for (int i = 0; i < points.size(); i++) {
            PointF point = points.get(i);
            if (i != (points.size() - 1)) {
                canvas.drawLine(point.x, point.y, points.get(i + 1).x, points.get(i + 1).y, linePaint);
            }
            canvas.drawCircle(point.x, point.y, 4f, dotPaint);
        }
    }

    // real-time detection starts HERE
    @Override
    public void process(@NonNull Frame frame) {
        // frame width and height
        int width = frame.getSize().getWidth();
        int height = frame.getSize().getHeight();
        byte[] byteArray = frame.getData();
        InputImage image = InputImage.fromByteArray(
                byteArray,
                width,
                height,
                // rotation
                (cameraFacing == Facing.FRONT) ? 90 : 270,
                // image format; must match the bytes the camera actually produces
                InputImage.IMAGE_FORMAT_YV12
        );
        // CONTOUR_MODE_ALL enables real-time contour detection
        FaceDetectorOptions realTimeOpts = new FaceDetectorOptions.Builder()
                .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                .build();
        FaceDetector faceDetector = FaceDetection.getClient(realTimeOpts);
        faceDetector.process(image).addOnSuccessListener(new OnSuccessListener<List<Face>>() {
            @Override
            public void onSuccess(@NonNull List<Face> faces) {
                // no image yet, so clear the view first
                imageView.setImageBitmap(null);
                // the bitmap stores the pixels of the image
                Bitmap bitmap = Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888);
                // the canvas holds the draw calls and writes into the bitmap
                Canvas canvas = new Canvas(bitmap);
                // the paints specify what the canvas should draw
                Paint dotPaint = new Paint();
                dotPaint.setColor(Color.YELLOW);
                dotPaint.setStyle(Paint.Style.FILL);
                dotPaint.setStrokeWidth(6f);
                Paint linePaint = new Paint();
                linePaint.setColor(Color.GREEN);
                linePaint.setStyle(Paint.Style.STROKE);
                linePaint.setStrokeWidth(4f);
                for (Face face : faces) {
                    // every contour is drawn the same way, so the repetitive
                    // per-contour loops are factored into drawContour()
                    drawContour(canvas, face.getContour(FaceContour.FACE).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.LEFT_EYEBROW_TOP).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.RIGHT_EYEBROW_TOP).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.RIGHT_EYEBROW_BOTTOM).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.LEFT_EYE).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.RIGHT_EYE).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.UPPER_LIP_TOP).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.UPPER_LIP_BOTTOM).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.LOWER_LIP_TOP).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.LOWER_LIP_BOTTOM).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.NOSE_BRIDGE).getPoints(), linePaint, dotPaint);
                    drawContour(canvas, face.getContour(FaceContour.NOSE_BOTTOM).getPoints(), linePaint, dotPaint);
                } // end outer loop
                if (cameraFacing == Facing.FRONT) {
                    // mirror the image for the front camera
                    Matrix matrix = new Matrix();
                    matrix.preScale(-1f, 1f);
                    Bitmap flippedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
                            bitmap.getWidth(), bitmap.getHeight(),
                            matrix, true);
                    imageView.setImageBitmap(flippedBitmap);
                } else {
                    imageView.setImageBitmap(bitmap);
                }
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                imageView.setImageBitmap(null);
            }
        });
    }
}
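Drawing a contour point by point needs care at the last index; in the original listing, the else-branch `return` there aborted the whole success callback. One way to draw a closed contour with no special case at all is to wrap the next index with modulo arithmetic. A sketch using plain index pairs, independent of any Android classes (the helper name is mine):

```java
// For a closed contour with n points, segment i connects point i to
// point (i + 1) % n, so the last point wraps back to the first and no
// special-case branch is needed.
static int[][] closedSegments(int n) {
    int[][] segments = new int[n][2];
    for (int i = 0; i < n; i++) {
        segments[i][0] = i;
        segments[i][1] = (i + 1) % n;
    }
    return segments;
}
```

Applying the same indexing inside the drawing loop closes the outline (as the question's first snippet tried to do) without ever dereferencing an uninitialized point.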
Edit: I am now getting this error:
2021-04-27 19:12:05.335 538-1065/system_process E/JavaBinder: *** Uncaught remote exception! (Exceptions are not yet supported across processes.)
java.lang.RuntimeException: android.os.RemoteException: Couldn't get ApplicationInfo for package android.frameworks.sensorservice@1.0::ISensorManager
at android.os.Parcel.writeException(Parcel.java:2158)
at android.os.Binder.execTransactInternal(Binder.java:1178)
at android.os.Binder.execTransact(Binder.java:1123)
Caused by: android.os.RemoteException: Couldn't get ApplicationInfo for package android.frameworks.sensorservice@1.0::ISensorManager
at com.android.server.pm.PackageManagerService$PackageManagerNative.getTargetSdkVersionForPackage(PackageManagerService.java:23957)
at android.content.pm.IPackageManagerNative$Stub.onTransact(IPackageManagerNative.java:255)
at android.os.Binder.execTransactInternal(Binder.java:1159)
at android.os.Binder.execTransact(Binder.java:1123)
Thank you very much!!
Comments:

- What is the error you are getting?
- Hi, the error I get is: 2021-04-25 22:19:55.448 17489-19142/? W/android.os.Debug: failed to get memory consumption info: -1. I am guessing this is because I use several bitmaps, but I need them for flipping the image, for detecting features in photos, and for real-time face detection. Any idea how to avoid this error? Thanks.
- That is just a warning. Please post your full logcat for the problem.
- Hi, I do not have any errors. My app does not crash; the real-time face detection simply does not work. Face detection on photos works fine. Under errors in logcat I found this: 2021-04-25 23:02:07.186 19552-19552/? E/sh.facedetecto: Unknown bits set in runtime_flags: 0x8000 2021-04-25 23:02:08.165 19552-19602/com.krish.facedetector E/EGL_emulation: eglQueryContext 32c0 EGL_BAD_ATTRIBUTE 2021-04-25 23:02:08.165 19552-19602/com.krish.facedetector E/EGL_emulation: tid 19602: eglQueryContext(1902): error 0x3004 (EGL_BAD_ATTRIBUTE)
- Also, I have added it to the question. Thanks.
Answer 1:

If you want to feed the camera stream into ML Kit, you can use the CameraX ImageAnalysis use case. It produces android.media.Image objects in YUV_420_888 format, which can be converted directly into an ML Kit InputImage.

Alternatively, you can use the CameraXSource library that ML Kit has just released. Sample code is here. It removes the boilerplate of setting up the CameraX use case and internally creates the ML Kit input from the CameraX output for you. Note that this is still a beta SDK; we look forward to your feedback.

To use the API, you need to add the following dependency to your app:

implementation 'com.google.mlkit:camera:16.0.0-beta1'
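Whichever camera pipeline you use, the rotation passed to InputImage should reflect the device's actual orientation rather than a hard-coded 90/270. The Android camera documentation gives the standard compensation formula; a sketch of it as a plain function (the method name is mine, not an ML Kit API):

```java
// Standard orientation formula from the Android camera docs: combine the
// camera sensor's mounting orientation with the current device rotation.
// Front cameras are mirrored, so the rotations add; back cameras subtract.
static int rotationCompensation(int sensorOrientation, int deviceRotationDegrees, boolean frontFacing) {
    if (frontFacing) {
        return (sensorOrientation + deviceRotationDegrees) % 360;
    }
    return (sensorOrientation - deviceRotationDegrees + 360) % 360;
}
```

With a typical back sensor mounted at 90 degrees and the device held upright (rotation 0), this yields 90; rotating the device to landscape changes the result, which a fixed constant cannot capture.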
Comments:

- Hi, I added the dependency, but it still is not working. Thanks.
- I put the new error in my original post. It seems to be a threading error: ***.com/questions/24429174/…, although I do not know how to fix it.
- Hi, you should not make any UI modifications outside the main thread. You can use: myActivity.runOnUiThread(new Runnable() { public void run() { // update your UI elements here } });
- Please be more specific. Which piece of code should I put in there? Thanks.