AVCapture / AVCaptureVideoPreviewLayer troubles getting the correct visible image

Posted 2015-11-11 10:07:16

I am currently having huge trouble getting what I want out of AVCapture, AVCaptureVideoPreviewLayer, etc.

I am building an app (for iPhone, though it would be nice if it also ran on iPad) in which I want to place a small camera preview in the middle of my view, as shown in this picture:

To keep the camera's aspect ratio, I used this configuration:

rgbaImage = nil;

NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;

AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;

NSError *error = nil;
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input)
{
    [[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
                                message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
                               delegate:self
                      cancelButtonTitle:NSLocalizedString(@"OK", nil)
                      otherButtonTitles:nil] show];
    return;
}

[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];

AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}];

[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];

self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];

connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];

NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
    // Frame rate
    device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
    device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;

    CGPoint point = CGPointMake(0.5, 0.5);
    if ([device isAutoFocusRangeRestrictionSupported])
    {
        device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
    {
        [device setFocusPointOfInterest:point];
        [device setFocusMode:focusMode];
    }
    if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
    {
        [device setExposurePointOfInterest:point];
        [device setExposureMode:exposureMode];
    }
    if ([device isLowLightBoostSupported])
    {
        device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
    }
    [device unlockForConfiguration];
}

if (device.isFlashAvailable)
{
    [device lockForConfiguration:nil];
    [device setFlashMode:AVCaptureFlashModeOff];
    [device unlockForConfiguration];

    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [device lockForConfiguration:nil];
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [device unlockForConfiguration];
    }
}

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];

[session commitConfiguration];

As you can see, I am using AVLayerVideoGravityResizeAspectFill to make sure the preview keeps the correct aspect ratio.

My trouble starts here, because I have tried many things without ever really succeeding. My goal is to obtain an image equivalent to what the user sees in the previewLayer, knowing that the video frame delivers an image larger than the one visible in the preview.
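
To make the geometry explicit: with aspect-fill, the frame is scaled up until it covers the layer, and only the centered sub-rectangle survives. Here is a minimal sketch of that relationship, with a hypothetical buffer size, ignoring the 90° rotation the preview connection applies in portrait:

// Sketch: which part of a buffer is visible under AVLayerVideoGravityResizeAspectFill.
CGSize bufferSize = CGSizeMake(2592.0, 1936.0); // hypothetical photo-preset frame, in pixels
CGSize layerSize  = previewLayer.bounds.size;   // the on-screen preview, in points
// Aspect-fill picks the larger scale so the frame covers the layer completely.
CGFloat fillScale = MAX(layerSize.width / bufferSize.width,
                        layerSize.height / bufferSize.height);
// The visible region is the layer mapped back into buffer pixels, centered.
CGSize visibleSize = CGSizeMake(layerSize.width / fillScale,
                                layerSize.height / fillScale);
CGRect visibleRect = CGRectMake((bufferSize.width  - visibleSize.width)  / 2.0,
                                (bufferSize.height - visibleSize.height) / 2.0,
                                visibleSize.width, visibleSize.height);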

I tried three approaches:

1) Using my own computation: since I know the video frame size, the screen size, and the layer's size and position, I tried computing the ratios between them and using those to locate the crop rect inside the video frame. I found that the video frame (sampleBuffer) is measured in pixels, whereas the position I get from the mainScreen bounds is in Apple points and has to be multiplied by a scale factor to be expressed in pixels; my ratio assumed the video frame size matched the device's actual full-screen size.

--> This actually gave me a very good result on my iPad: the height and width were fine, but the (x, y) origin was shifted a bit from the original... (detail: in fact, if I subtract 72 pixels from the position I found, I get a good output.)
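
Concretely, the point-to-pixel conversion I am referring to is just this (a sketch; originInPoints is a hypothetical name for the layer origin obtained from UIKit):

// UIKit geometry (frames, screen bounds) is in points; the sample buffer is in pixels.
// One point is `scale` pixels: 1.0, 2.0 or 3.0 depending on the device.
CGFloat scale = [UIScreen mainScreen].scale;
CGPoint originInPixels = CGPointMake(originInPoints.x * scale,
                                     originInPoints.y * scale);

Here is the delegate method where I do the computation: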

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    if (self.forceStop) return;
    if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CGRect rect = image.extent;

    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
    CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
    NSLog(@"%f, %f ---", screenWidth, screenHeight);

    float myRatio = (rect.size.height / screenHeight);
    float myRatioW = (rect.size.width / screenWidth);
    NSLog(@"Ratio w :%f h:%f ---", myRatioW, myRatio);

    CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
    NSLog(@"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);
    rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);

    NSLog(@"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
    NSLog(@"%f", previewLayer.frame.size.height * (rect.size.height / screenHeight));
    rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);

    image = [image imageByCroppingToRect:rect];
    its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
    NSLog(@"--------------------------------------------");
    [captureViewControler sendToPreview:its];
}

2) Using still image capture: this actually works as long as I am on the iPad. But the real trouble is that I am feeding an image library with these cropped frames, and captureStillImageAsynchronouslyFromConnection plays the system shutter sound for every picture (I read a lot about "solutions", such as playing another sound to mask it, but none of them work, and they don't fix the freeze it causes on the iPhone 6 either), so this approach doesn't seem suitable.

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
    for (AVCaptureInputPort *port in [connect inputPorts])
    {
        if ([[port mediaType] isEqual:AVMediaTypeVideo])
        {
            videoConnection = connect;
            break;
        }
    }
    if (videoConnection) { break; }
}

[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                   completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
 {
     if (error)
     {
         NSLog(@"Take picture failed");
     }
     else
     {
         NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
         UIImage *takenImage = [UIImage imageWithData:jpegData];

         CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
         NSLog(@"image cropped : %@", NSStringFromCGRect(outputRect));
         CGImageRef takenCGImage = takenImage.CGImage;
         size_t width = CGImageGetWidth(takenCGImage);
         size_t height = CGImageGetHeight(takenCGImage);
         NSLog(@"Size cropped : w: %zu h: %zu", width, height);
         CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
         NSLog(@"final cropped : %@", NSStringFromCGRect(cropRect));

         CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
         takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
         CGImageRelease(cropCGImage);

         its = [ImageUtils rotateUIImage:takenImage];
         image = [[CIImage alloc] initWithImage:its];
     }
 }];

3) Using the metadataOutput with ratios: this doesn't really work at all, but I thought it would help me the most, since it is what the stillImage path uses (take the percentages returned by metadataOutputRectOfInterestForRect and combine them with a ratio). I wanted to reuse it and add the ratio difference between the two pictures to get the correct output.

CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0); // hard-coded still-image size of the device

float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);

CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(@"avant cropped : %@", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(@"final cropped : %@", NSStringFromCGRect(outputRect));

UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(@"takenImage : %@", NSStringFromCGSize(takenImage.size));

CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(@"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);   // both CGImageRefs are created here, so release them
CGImageRelease(takenCGImage);

I hope someone will be able to help me with this. Many thanks.


【Answer 1】:

I finally found the solution, using the code below. My mistake was to try applying a ratio between the images, without considering that metadataOutputRectOfInterestForRect returns a percentage value that does not need to be changed for a new, different image.

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Normalized (0..1) rect of the region the preview actually shows.
    CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
    // The buffer is rotated relative to the portrait preview, so reuse the
    // normalized x/width as y/height and keep the buffer's full width.
    outputRect.origin.y = outputRect.origin.x;
    outputRect.origin.x = 0;
    outputRect.size.height = outputRect.size.width;
    outputRect.size.width = 1;

    UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
    CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
    size_t width = CGImageGetWidth(takenCGImage);
    size_t height = CGImageGetHeight(takenCGImage);
    CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
    CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
    UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
    CGImageRelease(cropCGImage);   // both images were created above, so release them
    CGImageRelease(takenCGImage);
}
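
For anyone who lands here: the important point is that the returned rect is a fraction of the image, not a pixel count, so the very same rect crops any resolution of the same scene. A minimal standalone helper showing the pattern (the function name and signature are mine, not part of the project above):

#import <CoreGraphics/CoreGraphics.h>

// Crops a CGImage with the normalized (0..1) rect returned by
// -metadataOutputRectOfInterestForRect:. Because the rect is fractional,
// it applies unchanged whatever the pixel dimensions of the source are.
static CGImageRef CreateImageCroppedToNormalizedRect(CGImageRef source, CGRect normalizedRect)
{
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    CGRect pixelRect = CGRectMake(normalizedRect.origin.x * width,
                                  normalizedRect.origin.y * height,
                                  normalizedRect.size.width * width,
                                  normalizedRect.size.height * height);
    return CGImageCreateWithImageInRect(source, pixelRect); // caller must CGImageRelease()
}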

