How to detect a face with react-native-camera FaceDetector?


Posted: 2020-06-28 22:51:30

Question:

I am trying to detect faces using react-native-camera, and I want to know how we can recognize a specific person's face. There is no proper documentation about ML Kit.

await FaceDetector.detectFacesAsync(data.uri) just returns a face object like face[0] = { bounds: { origin: { x: 739, y: 987 }, size: { x: 806, y: 789 } }, faceID: 0, rollAngle: 10.533509254455566, yawAngle: 0.7682874798774719 }.

That is only the position and size of the face. I cannot figure out how to use FaceDetector to identify a person's facial features such as the eyes and nose. Suppose I save person A's face data; how would I then match new data against A's face using react-native-camera?
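For context, here is a minimal sketch of the call above with landmark detection switched on, assuming the FaceDetector in use is expo-face-detector (whose detectFacesAsync output matches the object shown); the helper name detectWithLandmarks is made up for illustration:

import * as FaceDetector from 'expo-face-detector';

// Hypothetical helper: detect faces in a photo with landmark detection enabled
// (assumes expo-face-detector; option names may differ in other wrappers).
async function detectWithLandmarks(uri) {
  const { faces } = await FaceDetector.detectFacesAsync(uri, {
    mode: FaceDetector.FaceDetectorMode.accurate,
    detectLandmarks: FaceDetector.FaceDetectorLandmarks.all,
    runClassifications: FaceDetector.FaceDetectorClassifications.all,
  });
  if (faces.length > 0) {
    // With landmarks on, each face can also carry points such as
    // leftEyePosition, rightEyePosition and noseBasePosition ({ x, y }).
    // These are feature positions, which is still detection, not recognition.
    console.log(faces[0].leftEyePosition, faces[0].noseBasePosition);
  }
  return faces;
}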


Answer 1:

ML Kit does not support face recognition. React Native is also not officially supported, but you can take a look at https://rnfirebase.io/ml-vision/face-detection#process, which outlines how to get a 133-point face contour. However, that is meant for overlays (e.g. masks, filters), not for face recognition.
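As a rough sketch of what that page describes (assuming the now-deprecated @react-native-firebase/ml-vision package; the option and field names below come from its docs and may have changed since):

import vision, { VisionFaceDetectorContourMode } from '@react-native-firebase/ml-vision';

// Sketch: run ML Kit face-contour detection on a local image file.
async function getFaceContours(localPath) {
  const faces = await vision().faceDetectorProcessImage(localPath, {
    contourMode: VisionFaceDetectorContourMode.ALL_CONTOURS,
  });
  faces.forEach((face) => {
    // faceContours holds named contour groups (face oval, eyes, nose, lips)
    // whose points make up the 133-point outline mentioned above.
    console.log(face.boundingBox, face.faceContours);
  });
  return faces;
}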


Answer 2:

react-native-camera has face-detection props that can be used to detect faces.
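For example, a minimal sketch wiring those props up (prop and constant names as documented by react-native-camera):

import React from 'react';
import { RNCamera } from 'react-native-camera';

// Minimal sketch: RNCamera's built-in face-detection props.
const FaceCamera = () => (
  <RNCamera
    style={{ flex: 1 }}
    type={RNCamera.Constants.Type.front}
    faceDetectionMode={RNCamera.Constants.FaceDetection.Mode.fast}
    faceDetectionLandmarks={RNCamera.Constants.FaceDetection.Landmarks.all}
    faceDetectionClassifications={RNCamera.Constants.FaceDetection.Classifications.all}
    onFacesDetected={({ faces }) => console.log(faces)}
    onFaceDetectionError={(error) => console.warn(error)}
  />
);

export default FaceCamera;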


Answer 3:

This is the approach you would use via the face-detection props that react-native-camera exposes.


Answer 4:
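The complete example component below draws a box over each detected face in the live preview and, when you take a picture, records whether a face was visible at capture time: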

import SplashScreen from 'react-native-splash-screen';
import React, { useEffect, createRef, useState } from 'react';
import { SafeAreaView, View, Image, StyleSheet, Text, Modal, Alert, TouchableOpacity } from 'react-native';
import { RNCamera } from 'react-native-camera';


const Test = (props) => {

  useEffect(() => {
    SplashScreen.hide();
  });

  const [faces, setFace] = useState([]);
  const [faceavl, setFaceavl] = useState(false);
  const [takeTimeFaceAvl, setTakeTimeFaceAvl] = useState(false);
  const [searchWaiting, setsearchWaiting] = useState(null);
  const [modalVisible, setModalVisible] = useState(false);
  const [image, setImage] = useState(null);

  const mycamera = createRef();


  const PendingView = () => (
    <View
      style={{
        flex: 1,
        backgroundColor: 'lightgreen',
        justifyContent: 'center',
        alignItems: 'center',
      }}
    >
      <Text>Waiting</Text>
    </View>
  );

  // Full-screen overlay that draws one box per detected face.
  const renderFaces = () => (
    <View
      style={{
        position: 'absolute',
        bottom: 0,
        right: 0,
        left: 0,
        top: 0,
      }}
      pointerEvents="none"
    >
      {faces.map(renderFace)}
    </View>
  );

  const renderFace = ({ bounds, faceID, rollAngle, yawAngle }) => (
    <View
      key={faceID}
      style={[
        {
          padding: 10,
          borderWidth: 1,
          borderRadius: 2,
          position: 'absolute',
          borderColor: '#000',
          justifyContent: 'center',
        },
        {
          ...bounds.size,
          left: bounds.origin.x,
          top: bounds.origin.y,
          // transform is a style property, so it belongs in style
          // rather than being passed as a View prop.
          transform: [
            { perspective: 600 },
            { rotateZ: `${rollAngle.toFixed(0)}deg` },
            { rotateY: `${yawAngle.toFixed(0)}deg` },
          ],
        },
      ]}
    />
  );


  return (
    <>
      <SafeAreaView style={styles.container}>

        <RNCamera
          ref={mycamera}
          style={styles.preview}
          type={RNCamera.Constants.Type.front}
          flashMode={RNCamera.Constants.FlashMode.on}
          androidCameraPermissionOptions={{
            title: 'Permission to use camera',
            message: 'We need your permission to use your camera',
            buttonPositive: 'Ok',
            buttonNegative: 'Cancel',
          }}
          androidRecordAudioPermissionOptions={{
            title: 'Permission to use audio recording',
            message: 'We need your permission to use your audio',
            buttonPositive: 'Ok',
            buttonNegative: 'Cancel',
          }}
          onFacesDetected={(data) => {
            setFace(data.faces);
            setFaceavl(true);
            // Clear the faces if no new detection arrives within 500 ms.
            clearTimeout(searchWaiting);
            const avc = setTimeout(() => {
              setFaceavl(false);
              setFace([]);
            }, 500);
            setsearchWaiting(avc);
          }}
          onFaceDetectionError={(error) => {
            console.log('face-detection-error -->', error);
          }}
        >
          {({ camera, status, recordAudioPermissionStatus }) => {
            if (status !== 'READY') return <PendingView />;
            return (
              <View style={{ flex: 0, flexDirection: 'row', justifyContent: 'center' }}>
                <TouchableOpacity
                  onPress={async () => {
                    const options = { quality: 0.5, base64: true };
                    const data = await camera.takePictureAsync(options);
                    // Remember whether a face was on screen at capture time.
                    setTakeTimeFaceAvl(faceavl);
                    console.log(data.uri);
                    setImage(data);
                    setModalVisible(!modalVisible);
                  }}
                  style={styles.capture}
                >
                  <Text style={{ fontSize: 14 }}> SNAP </Text>
                </TouchableOpacity>
              </View>
            );
          }}
        </RNCamera>
        {faces.length > 0 ? renderFaces() : null}
      </SafeAreaView>

      <Modal
        animationType="slide"
        transparent={true}
        visible={modalVisible}
        onRequestClose={() => {
          Alert.alert("Modal has been closed.");
          setModalVisible(!modalVisible);
        }}
      >
        <View style={styles.centeredView}>
          <View style={styles.modalView}>
            {takeTimeFaceAvl ? (
              image ? (
                <Image
                  style={{
                    width: 200,
                    height: 100,
                  }}
                  source={{
                    uri: image.uri,
                  }}
                />
              ) : null
            ) : (
              <Text>Face not found</Text>
            )}
            <TouchableOpacity
              style={[styles.button, styles.buttonClose]}
              onPress={() => setModalVisible(!modalVisible)}
            >
              <Text style={styles.textStyle}>Hide Modal</Text>
            </TouchableOpacity>
          </View>
        </View>
      </Modal>

    </>
  );
};
const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
    backgroundColor: 'black',
  },
  item: {
    backgroundColor: '#FFF',
  },
  viewOne: {
    flexDirection: 'row',
  },
  viewTwo: {
    alignItems: 'flex-end',
    marginEnd: 9,
  },
  title: {
    fontSize: 16, // Semibold #000000
    color: '#000000',
  },
  noOff: {
    color: '#D65D35',
    fontSize: 20, // Semibold
  },
  product: {
    color: '#A6A6A6',
    fontSize: 16, // Regular
  },
  titleView: {
    flex: 1,
    alignSelf: 'center',
    marginStart: 14,
    marginEnd: 14,
  },
  centeredView: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    marginTop: 22,
  },
  modalView: {
    margin: 20,
    backgroundColor: "white",
    borderRadius: 20,
    padding: 10,
    alignItems: "center",
    shadowColor: "#000",
    shadowOffset: {
      width: 0,
      height: 2,
    },
    shadowOpacity: 0.25,
    shadowRadius: 4,
    elevation: 5,
  },
  button: {
    borderRadius: 20,
    padding: 10,
    elevation: 2,
  },
  buttonOpen: {
    backgroundColor: "#F194FF",
  },
  buttonClose: {
    backgroundColor: "#2196F3",
  },
  textStyle: {
    color: "white",
    fontWeight: "bold",
    textAlign: "center",
  },
  modalText: {
    marginBottom: 15,
    textAlign: "center",
  },

  preview: {
    flex: 1,
    justifyContent: 'flex-end',
    alignItems: 'center',
  },
  capture: {
    flex: 0,
    backgroundColor: '#fff',
    borderRadius: 5,
    padding: 15,
    paddingHorizontal: 20,
    alignSelf: 'center',
    margin: 20,
  },
});

export default Test;
