Real-time face detection from a live camera (not from a static image) using the Vision & AVFoundation frameworks

Posted: 2019-08-25 11:24:33

Question:

I need to detect a real, live face from the iPhone front camera, so I used the Vision framework to implement it. The problem is that it should not detect faces in static images (photos of people), but right now it does. Here is my code snippet:
class ViewController: UIViewController {

    func sessionPrepare() {
        session = AVCaptureSession()
        guard let session = session, let captureDevice = frontCamera else { return }

        do {
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
            session.beginConfiguration()

            if session.canAddInput(deviceInput) {
                session.addInput(deviceInput)
            }

            let output = AVCaptureVideoDataOutput()
            output.videoSettings = [
                String(kCVPixelBufferPixelFormatTypeKey): Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
            ]
            output.alwaysDiscardsLateVideoFrames = true

            if session.canAddOutput(output) {
                session.addOutput(output)
            }

            session.commitConfiguration()

            let queue = DispatchQueue(label: "output.queue")
            output.setSampleBufferDelegate(self, queue: queue)
            print("setup delegate")
        } catch {
            print("can't setup session")
        }
    }
}
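For context, the snippet uses session and frontCamera without showing where they come from. A minimal sketch of the usual declarations, assuming the iOS 10+ AVCaptureDevice.DiscoverySession API (the property names here simply mirror the ones the snippet already uses):

import AVFoundation

// Assumed declarations inside ViewController; they are not shown in the original snippet.
var session: AVCaptureSession?

// Front (selfie) camera, if the device has one.
lazy var frontCamera: AVCaptureDevice? = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera],
    mediaType: .video,
    position: .front
).devices.first

After sessionPrepare() returns, the session still needs session?.startRunning() (ideally off the main thread) before any frames reach the delegate.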
It also detects a face from a static image (a photo of a person) if I hold one in front of the camera. Here is the detection code:
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
        let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments as! [String: Any]?)
        let ciImageWithOrientation = ciImage.applyingOrientation(Int32(UIImageOrientation.leftMirrored.rawValue))
        detectFace(on: ciImageWithOrientation)
    }

    func detectFace(on image: CIImage) {
        try? faceDetectionRequest.perform([faceDetection], on: image)
        if let results = faceDetection.results as? [VNFaceObservation] {
            if !results.isEmpty {
                faceLandmarks.inputFaceObservations = results
                detectLandmarks(on: image)

                DispatchQueue.main.async {
                    self.shapeLayer.sublayers?.removeAll()
                }
            }
        }
    }

    func detectLandmarks(on image: CIImage) {
        try? faceLandmarksDetectionRequest.perform([faceLandmarks], on: image)
        if let landmarksResults = faceLandmarks.results as? [VNFaceObservation] {
            for observation in landmarksResults {
                DispatchQueue.main.async {
                    if let boundingBox = self.faceLandmarks.inputFaceObservations?.first?.boundingBox {
                        let faceBoundingBox = boundingBox.scaled(to: self.view.bounds.size)

                        // different types of landmarks
                        let faceContour = observation.landmarks?.faceContour
                        let leftEye = observation.landmarks?.leftEye
                        let rightEye = observation.landmarks?.rightEye
                        let nose = observation.landmarks?.nose
                        let lips = observation.landmarks?.innerLips
                        let leftEyebrow = observation.landmarks?.leftEyebrow
                        let rightEyebrow = observation.landmarks?.rightEyebrow
                        let noseCrest = observation.landmarks?.noseCrest
                        let outerLips = observation.landmarks?.outerLips
                    }
                }
            }
        }
    }
}
So is there any way to detect only a real, live face from the camera feed? Any help or suggestions are much appreciated.
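For reference, the per-frame call above is often written with VNImageRequestHandler and an explicit CGImagePropertyOrientation instead of rotating the CIImage by hand. A minimal sketch (iOS 11+); note that it still reports faces found in printed photos just as readily:

import Vision
import CoreVideo

// Could be called from captureOutput(_:didOutput:from:) with each frame's pixel buffer.
func detectFaces(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation], !faces.isEmpty else { return }
        // boundingBox and landmarks are reported for any face, whether it belongs
        // to a live person or to a photo held up to the camera.
        print("faces detected: \(faces.count)")
    }

    // .leftMirrored matches the front-camera orientation used in the question.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .leftMirrored,
                                        options: [:])
    try? handler.perform([request])
}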
Comments:

This is not exactly an answer, but if you want to recognize a live face from the camera, go with the ARKit framework. The catch is that ARKit is limited to specific devices; it does not work on iPhone 6 / 6 Plus.

@RakeshDipuna did you find a solution?

Same problem here. Has anyone found a solution?

Answer 1:

I needed to do the same thing, and after a lot of experimentation I finally found this:
https://github.com/syaringan357/ios-MobileFaceNet-MTCNN-FaceAntiSpoofing
It only detects faces from the live camera, but it does not use the Vision framework.
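Building on the ARKit suggestion from the comments under the question: face tracking there goes through ARFaceTrackingConfiguration and ARFaceAnchor, and it only runs on supported hardware, which is the iPhone 6 / 6 Plus limitation mentioned above. A rough sketch of that route (the class and property names here are illustrative, not taken from any answer):

import ARKit
import UIKit

class FaceTrackingViewController: UIViewController, ARSessionDelegate {
    let arSession = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        arSession.delegate = self

        // Face tracking is only available on supported devices.
        guard ARFaceTrackingConfiguration.isSupported else {
            print("ARKit face tracking is not supported on this device")
            return
        }
        arSession.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for face in anchors.compactMap({ $0 as? ARFaceAnchor }) {
            // face.transform gives the head pose; face.blendShapes gives per-frame
            // expression coefficients (blinks, jaw movement, ...) that can be watched
            // as a rough liveness cue.
            print("face tracked: \(face.isTracked)")
        }
    }
}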
Discussion:

The Chinese documentation in that repository is hard to follow. Did you find any other library?

I used TensorFlowLiteObjC for this.

@famfamfam sounds good. I'm using a library from Google with ML Kit, and that works too.
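Since the discussion mentions a Google / ML Kit based setup, here is a rough sketch of ML Kit's face detector with classification enabled, assuming the GoogleMLKit/FaceDetection pod. Watching eye-open probabilities across frames is only a crude blink heuristic, not the anti-spoofing model from the repository linked in the answer:

import AVFoundation
import MLKitFaceDetection
import MLKitVision

final class MLKitFaceDetectionHelper {
    // classificationMode .all enables eye-open / smiling probabilities.
    private let detector: FaceDetector = {
        let options = FaceDetectorOptions()
        options.performanceMode = .fast
        options.classificationMode = .all
        return FaceDetector.faceDetector(options: options)
    }()

    // Could be called from captureOutput(_:didOutput:from:) with each camera frame.
    func detectFaces(in sampleBuffer: CMSampleBuffer) {
        let image = VisionImage(buffer: sampleBuffer)
        image.orientation = .leftMirrored  // front camera, matching the question's setup

        detector.process(image) { faces, error in
            guard error == nil, let faces = faces, !faces.isEmpty else { return }
            for face in faces {
                // Eye-open probabilities change across frames when a real person blinks;
                // a printed photo keeps them roughly constant.
                print(face.leftEyeOpenProbability, face.rightEyeOpenProbability)
            }
        }
    }
}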