Real time face detection is not working


【Title】:Real time face detection is not working 【Posted】:2017-05-05 11:03:11 【Question】:

This code does not show any face being detected in the camera, even though there are no errors. I want the face to be detected in real time in the camera with a red square around it, but I think I haven't placed the code correctly, or should I put something in viewDidLoad or somewhere else?

import UIKit
import CoreImage

class ViewController: UIViewController, UIAlertViewDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet var imageView: UIImageView!

    @IBAction func Moodify(_ sender: UIButton) {

        func detect() {

            guard let personciImage = CIImage(image: imageView.image!) else {
                return
            }

            let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
            let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
            let faces = faceDetector?.features(in: personciImage)

            // For converting the Core Image Coordinates to UIView Coordinates
            let ciImageSize = personciImage.extent.size
            var transform = CGAffineTransform(scaleX: 1, y: -1)
            transform = transform.translatedBy(x: 0, y: -ciImageSize.height)

            for face in faces as! [CIFaceFeature] {

                print("Found bounds are \(face.bounds)")

                // Apply the transform to convert the coordinates
                var faceViewBounds = face.bounds.applying(transform)

                // Calculate the actual position and size of the rectangle in the image view
                let viewSize = imageView.bounds.size
                let scale = min(viewSize.width / ciImageSize.width,
                                viewSize.height / ciImageSize.height)
                let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
                let offsetY = (viewSize.height - ciImageSize.height * scale) / 2

                faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
                faceViewBounds.origin.x += offsetX
                faceViewBounds.origin.y += offsetY

                let faceBox = UIView(frame: faceViewBounds)
                //let faceBox = UIView(frame: face.bounds)
                faceBox.layer.borderWidth = 3
                faceBox.layer.borderColor = UIColor.red.cgColor
                faceBox.backgroundColor = UIColor.clear
                imageView.addSubview(faceBox)

                if face.hasLeftEyePosition {
                    print("Left eye bounds are \(face.leftEyePosition)")
                }

                if face.hasRightEyePosition {
                    print("Right eye bounds are \(face.rightEyePosition)")
                }
            }
        }

        let picker = UIImagePickerController()
        picker.delegate = self
        picker.allowsEditing = true
        picker.sourceType = .camera
        picker.cameraDevice = .front
        self.present(picker, animated: true, completion: { _ in })

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [AnyHashable: Any]) {
            let chosenImage = info[UIImagePickerControllerEditedImage]
            self.imageView!.image = chosenImage as? UIImage
            picker.dismiss(animated: true, completion: { _ in })
        }

        // picker.dismiss(animated: true, completion: { _ in })
        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            picker.dismiss(animated: true, completion: { _ in })
        }
    }

    override func viewDidLoad() {

        let alert = UIAlertController(title: "Ooops!!!", message: "Camera is not connected", preferredStyle: UIAlertControllerStyle.alert)
        alert.addAction(UIAlertAction(title: "Connect", style: UIAlertActionStyle.default, handler: nil))
        self.present(alert, animated: true, completion: nil)

        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

【Comments】:

Could you share the tutorial you took this code from?

@TungFam here is the link: appcoda.com/face-detection-core-image

I'm not sure you are asking this the right way, but you said "this code does not show face detection in the camera". According to the tutorial it is supposed to show faces on a still image, not in real time in the camera.

@TungFam but I have made some edits for the live camera, and you can run this code. This code only launches the camera and does not do any detection.

Could you tell me: if the -detect method is nested inside the body of your event handler, who do you expect to trigger it? And why do you place and try to present an alert controller in the body of the -viewDidLoad method? And so on…

【Answer 1】:

Most likely you just need to trigger the function in the way described in the documentation:

We will invoke the detect method in viewDidLoad, so insert the following line of code in that method:

override func viewDidLoad() {
    super.viewDidLoad()

    detect()
}

Compile and run the app.

EDIT: That is the solution when the function "detect" is a method of the class, but in your case you are using an IBAction, which has a different syntax. You should try deleting the function declaration detect() and the bracket that closes it, right before

let picker =

This part has to be inside the function:

let picker = UIImagePickerController()
picker.delegate = self
picker.allowsEditing = true
picker.sourceType = .camera
picker.cameraDevice = .front
self.present(picker, animated: true, completion: { _ in })

For your case you could also omit this part.
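For reference, here is a minimal sketch of the restructuring this edit is describing: detect() declared as an ordinary method of the view controller instead of a function nested inside the IBAction, so that viewDidLoad (or a delegate callback) can actually reach it. The property and action names (imageView, Moodify) are the asker's; the detection body itself is the one already shown in the question and is elided here. Note that calling detect() from viewDidLoad is only meaningful if imageView.image is already set at that point.

import UIKit
import CoreImage

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Only useful if imageView.image is already set here,
        // for example a test image assigned in the storyboard.
        detect()
    }

    @IBAction func Moodify(_ sender: UIButton) {
        // Present the camera; detection happens later, once an image exists.
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.allowsEditing = true
        picker.sourceType = .camera
        picker.cameraDevice = .front
        present(picker, animated: true, completion: nil)
    }

    // Class-level method, no longer nested inside the IBAction.
    func detect() {
        // ... the detection body from the question goes here unchanged ...
    }
}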

【Discussion】:

@Solangi what Swift version are you using? In older versions you had to call self.detect()

I am using Swift 3

@Solangi looks like a mystery. I have been doing iOS development for 7 years and have never run into this. I mean, you made a subclass, the subclass has a method, and calling that method from the subclass is not recognized.

So what should I do, would you suggest something?

Thanks for your effort, but I am still getting this error: Use of unresolved identifier "detect"

Could you update my question with the code you wrote in your answer?

【Answer 2】:

After looking at your code, you are not even calling detect() after taking the photo. I tried to fix it as described below; however, detect() will return zero faces, as I described in Face Detection with Camera (see the orientation note after the code).

lazy var picker: UIImagePickerController = {
    let picker = UIImagePickerController()
    picker.delegate = self
    picker.allowsEditing = true
    picker.sourceType = .camera
    picker.cameraDevice = .front
    return picker
}()

@IBOutlet var imageView: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    imageView.contentMode = .scaleAspectFit
}

@IBAction func TakePhoto(_ sender: Any) {
    self.present(picker, animated: true, completion: nil)
}

// MARK: - UIImagePickerControllerDelegate
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let chosenImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
        self.imageView!.image = chosenImage
        // Got the image from camera, the imageView.image is not nil, so it's time for facial detection
        detect()
        picker.dismiss(animated: true, completion: nil)
    }
}

func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    picker.dismiss(animated: true, completion: nil)
}

// MARK: - Face Detection

func detect() {

    guard let personciImage = CIImage(image: imageView.image!) else {
        return
    }

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: personciImage)

    // For converting the Core Image Coordinates to UIView Coordinates
    let ciImageSize = personciImage.extent.size
    var transform = CGAffineTransform(scaleX: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -ciImageSize.height)
    print("faces.count = \(faces?.count)")

    for face in faces as! [CIFaceFeature] {

        print("Found bounds are \(face.bounds)")

        // Apply the transform to convert the coordinates
        var faceViewBounds = face.bounds.applying(transform)

        // Calculate the actual position and size of the rectangle in the image view
        let viewSize = imageView.bounds.size
        let scale = min(viewSize.width / ciImageSize.width,
                        viewSize.height / ciImageSize.height)
        let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
        let offsetY = (viewSize.height - ciImageSize.height * scale) / 2

        faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY

        let faceBox = UIView(frame: faceViewBounds)
        //let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        imageView.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }

        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
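The zero-face result mentioned at the top of this answer is usually an orientation problem: photos coming from the camera carry an EXIF orientation, and CIDetector scans the raw (often sideways) pixel data unless that orientation is passed along through the CIDetectorImageOrientation option. Below is a hedged sketch of that tweak, keeping the Swift 3-era names used in this thread; the helper functions exifOrientation(_:) and detectFaces(in:) are illustrative additions, not part of the original answer.

import UIKit
import CoreImage

// Map UIImage orientation to the EXIF value (1...8) that CIDetector expects.
func exifOrientation(_ orientation: UIImageOrientation) -> Int {
    switch orientation {
    case .up:    return 1
    case .down:  return 3
    case .left:  return 8
    case .right: return 6   // typical for portrait photos taken with the camera
    default:     return 1   // mirrored variants omitted for brevity
    }
}

// Orientation-aware variant of the detector call used in detect().
func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: image) else { return [] }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // Tell the detector how the pixels are rotated relative to "up".
    let options = [CIDetectorImageOrientation: exifOrientation(image.imageOrientation)]
    return detector?.features(in: ciImage, options: options) as? [CIFaceFeature] ?? []
}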
【Discussion】:

It still doesn't detect! And I also tried calling detect() in viewDidLoad, but it gives a fatal error about unwrapping an Optional. When you update the answer, will you also update the changes in my question's code?

Calling detect() in viewDidLoad() will crash because imageView.image is not set at that point (a defensive guard for this is sketched after these comments).

What is your button (Moodify) for? In my case it is just a button that presents the picker for the snapshot. After I take the photo, the delegate imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) runs, and then detect().

Yes, Moodify is my button.

I put it here.
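About the unwrapping crash mentioned above: detect() force-unwraps imageView.image, so any call made before a photo has been picked will trap. A small defensive variant of the first lines of detect(), assuming nothing else about the surrounding code:

func detect() {
    // Bail out quietly instead of crashing when no image has been picked yet.
    guard let image = imageView.image,
          let personciImage = CIImage(image: image) else {
        print("detect(): no image to scan yet")
        return
    }
    // ... the rest of detect() stays exactly as in the answer above ...
}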
