iOS: Capture image during camera preview without action


Posted: 2022-01-13 00:14:36

【Question】:

I'm trying to capture an image from the camera preview, but I can't get an image out of the preview layer. What I want to do is similar to the iOS 15 OCR (Live Text) mode in the Photos app: it processes images during the camera preview without requiring the user to take a photo or start recording video, just by working on the preview itself. I've looked through the documentation and searched online, but couldn't find anything useful.

What I tried was keeping a reference to the previewLayer and periodically calling previewLayer.draw(in: context), but the image drawn into the context is blank. Now I'm wondering whether this is possible in the first place. There may be some security restriction that prevents reading back the camera preview, perhaps something only first-party apps can access, so I may need to find another way.

If there is any workaround, please enlighten me.

Thanks!

【Comments】:

How to process images real-time from the iOS camera might help.
Hi MadProgrammer, thanks for the reference. It works! You saved my life! I just added the videoSetting() and captureOutput delegate parts to my code to get live captured images periodically. Thanks again!!
Feel free to add your own answer, others might find it useful ;)
Sure. I'll post the snippet once I've cleaned it up.

【Answer 1】:

OK. With MadProgrammer's help I got this working. Anurag Ajwani's site was also very helpful.

Here is the simple snippet I use to capture video frames. You need to make sure camera permission has been granted before CameraView is instantiated.
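A minimal sketch of what that permission check might look like (this helper is not part of the original answer; AVCaptureDevice.authorizationStatus(for:) and AVCaptureDevice.requestAccess(for:) are the standard AVFoundation calls, while the helper name and wiring are just assumptions):

import AVFoundation
import Foundation

// Hypothetical helper: call this and wait for `true` before creating CameraView.
func ensureCameraPermission(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // First launch: ask the user, then report the result on the main queue.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        // .denied or .restricted: the user has to change this in Settings.
        completion(false)
    }
}

The app also needs an NSCameraUsageDescription entry in Info.plist, otherwise the capture session cannot access the camera at all.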

import AVFoundation
import CoreImage
import UIKit
import SwiftUI

class VideoCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    //private var previewLayer: AVCaptureVideoPreviewLayer? = nil
    private var session: AVCaptureSession? = nil
    private var videoOutput: AVCaptureVideoDataOutput? = nil

    private var videoHandler: ((UIImage) -> Void)?

    override init() {
        super.init()

        let deviceSession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualWideCamera, .builtInWideAngleCamera], mediaType: .video, position: .back)
        guard deviceSession.devices.count > 0 else { return }

        if let input = try? AVCaptureDeviceInput(device: deviceSession.devices.first!) {
            let session = AVCaptureSession()
            session.beginConfiguration() // balanced by commitConfiguration() below
            session.addInput(input)

            // Deliver BGRA frames to this object on a dedicated serial queue.
            let videoOutput = AVCaptureVideoDataOutput()
            videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
            videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "my.image.handling.queue"))
            videoOutput.alwaysDiscardsLateVideoFrames = true
            if session.canAddOutput(videoOutput) {
                session.sessionPreset = .high
                session.addOutput(videoOutput)
                self.videoOutput = videoOutput
            }

            for connection in videoOutput.connections {
                if connection.isVideoOrientationSupported {
                    connection.videoOrientation = .portrait
                }
            }

            session.commitConfiguration()

            self.session = session

            /*
            self.previewLayer = AVCaptureVideoPreviewLayer(session: session)
            if let previewLayer = self.previewLayer {
                previewLayer.videoGravity = .resizeAspectFill
                layer.insertSublayer(previewLayer, at: 0)
                CameraPreviewView.initialized = true
            }
            */
        }
    }

    func startCapturing(_ videoHandler: @escaping (UIImage) -> Void) {
        // Store the handler before starting so the first frames are not dropped.
        self.videoHandler = videoHandler
        if let session = session {
            session.startRunning()
        }
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            debugPrint("unable to get video frame")
            return
        }
        //print("got video frame")
        if let videoHandler = self.videoHandler {
            // Convert the pixel buffer to a UIImage via Core Image.
            // (Creating a CIContext per frame is expensive; reuse one in production.)
            let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(imageBuffer), height: CVPixelBufferGetHeight(imageBuffer))
            let ciImage = CIImage(cvImageBuffer: imageBuffer)
            let ciContext = CIContext()
            guard let cgImage = ciContext.createCGImage(ciImage, from: rect) else { return }
            videoHandler(UIImage(cgImage: cgImage))
        }
    }
}
struct CameraView: View {
    @State var capturedVideo: UIImage? = nil

    let videoCapture = VideoCapture()

    var body: some View {
        VStack {
            ZStack(alignment: .center) {
                if let capturedVideo = self.capturedVideo {
                    Image(uiImage: capturedVideo)
                        .resizable()
                        .scaledToFill()
                }
            }
        }
        .background(Color.black)
        .onAppear {
            self.videoCapture.startCapturing { uiImage in
                // Frames arrive on the capture queue; hop to the main thread before touching @State.
                DispatchQueue.main.async {
                    self.capturedVideo = uiImage
                }
            }
        }
    }
}
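Since the question was motivated by Live Text–style OCR during the preview, here is a hedged sketch (not part of the answer above) of how each delivered frame could be handed to Vision's text recognizer. VNRecognizeTextRequest and VNImageRequestHandler are the standard Vision APIs; the recognizeText(in:completion:) helper and the wiring around it are illustrative assumptions:

import UIKit
import Vision

// Illustrative helper: run text recognition on a frame produced by VideoCapture.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNRecognizeTextRequest { request, _ in
        // Keep the best candidate string from each recognized text observation.
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

Called from the startCapturing closure (for example recognizeText(in: uiImage) { print($0) }), this would run OCR on the live frames. In practice you would throttle it rather than process every frame, and stop the session when the view goes away, for instance through a small stopCapturing() method added to VideoCapture that calls session?.stopRunning().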
【Comments】:
