iPhone Real-Time Image Processing Using OpenCV and the AVFoundation Framework?

Posted: 2011-02-08 15:39:24

Problem description:

I want to do real-time image processing with OpenCV.

My end goal is to display the processed result on screen in real time while, on the other side, the camera is capturing video through the AVFoundation framework.

How can I process each video frame with OpenCV and display the result on screen in real time?

Comments:

As it stands, this question is too broad to answer. Which part of the process do you need help with: grabbing video frames with AVFoundation, compiling and using OpenCV on the iPhone, or displaying an overlay image? And what do you want to recognize with OpenCV?

Answer 1:

Use AVAssetReader

    // Set up the reader
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:urlvalue options:nil];
    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            AVAssetTrack *videoTrack = nil;
            NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
            if ([tracks count] == 1) {
                videoTrack = [tracks objectAtIndex:0];
                NSError *error = nil;
                _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                if (error) {
                    NSLog(@"%@", error.localizedDescription);
                }
                // Ask for frames in a known pixel format
                NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
                NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_4444AYpCbCr16];
                NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
                [_movieReader addOutput:[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings]];
                [_movieReader startReading];
            }
        });
    }];
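A note on the outputSettings above: for OpenCV work it is often more convenient to request kCVPixelFormatType_32BGRA, since a BGRA buffer maps directly onto a 4-channel cv::Mat. These CoreVideo pixel-format constants are simply four-character codes packed into a 32-bit integer, which you can check by hand; a minimal C sketch (the `fourcc` helper below is my own, not part of CoreVideo):

```c
#include <stdint.h>

/* Pack a four-character code the way CoreVideo does:
 * the first character lands in the most significant byte. */
static uint32_t fourcc(const char c[4]) {
    return ((uint32_t)c[0] << 24) | ((uint32_t)c[1] << 16) |
           ((uint32_t)c[2] << 8)  |  (uint32_t)c[3];
}

/* kCVPixelFormatType_32BGRA is defined as 'BGRA', i.e. 0x42475241 */
```

Decoding an unfamiliar constant this way makes it easier to match what AVFoundation hands you against what OpenCV expects.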

Getting the next frame

static int frameCount = 0;

- (void)readNextMovieFrame
{
    if (_movieReader.status == AVAssetReaderStatusReading) {
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer) {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            /* Lock the image buffer */
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            /* Get information about the image */
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);

            /* Create a CGImageRef from the CVImageBufferRef */
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
            CGImageRef newImage = CGBitmapContextCreateImage(newContext);

            /* Release the context and color space */
            CGContextRelease(newContext);
            CGColorSpaceRelease(colorSpace);

            /* Unlock the image buffer once we are done reading from it */
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            /* Display the result on the custom layer */
            /* self.customLayer.contents = (id)newImage; */

            /* Display the result on the image view (the image must be rotated
               so the video is displayed in the correct orientation) */
            UIImage *image = [UIImage imageWithCGImage:newImage scale:0.0 orientation:UIImageOrientationRight];
            CGImageRelease(newImage);
            UIGraphicsBeginImageContext(image.size);
            [image drawAtPoint:CGPointMake(0, 0)];
            videoImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            /* Save each frame to the Documents directory as a PNG */
            NSLog(@"readNextMovieFrame == %d", frameCount);
            NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
            NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
            [UIImagePNGRepresentation(videoImage) writeToFile:pngPath atomically:YES];
            frameCount++;

            CFRelease(sampleBuffer);
        }
    }
}

Once your _movieReader reaches the end, you need to restart it.
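One caveat before handing baseAddress from the code above to OpenCV: CoreVideo may pad each row, so bytesPerRow can be larger than the width times the bytes per pixel. cv::Mat has a constructor that accepts a row step, but if you need a tightly packed copy, the row-wise copy looks like the following C sketch (copy_tight_bgra is a name I made up for illustration; it assumes a 4-byte-per-pixel BGRA buffer):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Copy a possibly row-padded BGRA buffer into a tightly packed one.
 * `src` has `bytes_per_row` bytes per row (>= width * 4); the returned
 * buffer has exactly width * 4 bytes per row. Caller frees the result. */
static uint8_t *copy_tight_bgra(const uint8_t *src, size_t width,
                                size_t height, size_t bytes_per_row) {
    size_t tight_row = width * 4;  /* 4 bytes per BGRA pixel */
    uint8_t *dst = malloc(tight_row * height);
    if (!dst) return NULL;
    for (size_t y = 0; y < height; y++) {
        /* copy only the pixel data, skipping any padding at row end */
        memcpy(dst + y * tight_row, src + y * bytes_per_row, tight_row);
    }
    return dst;
}
```

If you construct the cv::Mat with the step argument instead, you can skip this copy entirely and process the locked buffer in place.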

Discussion:
