Mixing Images and Video using AVFoundation

Posted: 2014-10-20 22:19:28

I'm trying to splice images into a pre-existing video to create a new video file using AVFoundation on the Mac.

So far I've read the Apple documentation example,

ASSETWriterInput for making Video from UIImages on Iphone Issues

Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool

AVFoundation Tutorial: Adding Overlays and Animations to Videos, and a few other SO links.

Now these have proved very useful at times, but my problem is that I'm not creating a static watermark or overlay; I want to put images in between sections of the video. So far I've managed to take the video, create blank sections for these images to be inserted into, and export it.

My problem is getting the images to insert themselves into those blank sections. The only way I can see this working is to create a series of layers that are animated to change their opacity at the correct times, but I can't seem to get the animations working.

The code below is what I'm using to create the video segments and the layer animations.

    //https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW7
    
    // let's start by making our video composition
    AVMutableComposition* mutableComposition = [AVMutableComposition composition];
    AVMutableCompositionTrack* mutableCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    
    AVMutableVideoComposition* mutableVideoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:gVideoAsset];
    
    // if the first point's frame doesn't start on 0
    if (gFrames[0].startTime.value != 0)
    {
        DebugLog("Inserting vid at 0");
        // then add the video track to the composition track with a time range from 0 to the first point's startTime
        [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, gFrames[0].startTime) ofTrack:gVideoTrack atTime:kCMTimeZero error:&gError];
    }
    
    if(gError)
    {
        DebugLog("Error inserting original video segment");
        GetError();
    }
    
    // create our parent layer and video layer
    CALayer* parentLayer = [CALayer layer];
    CALayer* videoLayer = [CALayer layer];
    
    parentLayer.frame = CGRectMake(0, 0, 1280, 720);
    videoLayer.frame = CGRectMake(0, 0, 1280, 720);
    
    [parentLayer addSublayer:videoLayer];
    
    // create an offset value that should be added to each point where a new video segment should go
    CMTime timeOffset = CMTimeMake(0, 600);
    
    // loop through each additional frame
    for(int i = 0; i < gFrames.size(); i++)
    {
        // create an animation layer and assign its contents to the CGImage of the frame
        CALayer* Frame = [CALayer layer];
        Frame.contents = (__bridge id)gFrames[i].frameImage;
        Frame.frame = CGRectMake(0, 720, 1280, -720);
        
        DebugLog("inserting empty time range");
        // add frame point to the composition track starting at the point's start time
        // insert an empty time range for the duration of the frame animation
        [mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];
        
        // update the time offset by the duration
        timeOffset = CMTimeAdd(timeOffset, gFrames[i].duration);
        
        // make the layer completely transparent
        Frame.opacity = 0.0f;
        
        // create an animation for setting opacity to 0 on start
        CABasicAnimation* frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero;
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // create an animation for setting opacity to 1
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:1.0];
        frameAnim.toValue = [NSNumber numberWithFloat:1.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].startTime);
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // create an animation for setting opacity to 0
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].endTime);
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // add the frame layer to our parent layer
        [parentLayer addSublayer:Frame];
        
        gError = nil;
        
        // if there's another point after this one
        if( i < gFrames.size()-1)
        {
            // add our video file to the composition with a range of this point's end and the next point's start
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime,
                            CMTimeMake(gFrames[i+1].startTime.value - gFrames[i].startTime.value, 600))
                            ofTrack:gVideoTrack
                            atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }
        // else just add our video file with a range of this point's end and the video's duration
        else
        {
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime, CMTimeSubtract(gVideoAsset.duration, gFrames[i].startTime)) ofTrack:gVideoTrack atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }
        
        if(gError)
        {
            char errorMsg[256];
            sprintf(errorMsg, "Error inserting original video segment at: %d", i);
            DebugLog(errorMsg);
            GetError();
        }
    }

Now in that snippet the Frame layer's opacity is set to 0.0f, but when I set it to 1.0f all it does is place the last of those frames on top of the video for the entire duration.
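
For reference, a keyframe-based version of the opacity setup above would presumably look something like the sketch below: one discrete animation per image layer that holds opacity at 0, jumps to 1 at the image's start time, and drops back to 0 at its end. Here frameStart, frameEnd and compositionDuration are placeholder doubles in seconds (not variables from the code above), and removedOnCompletion/fillMode keep the final keyframe value rather than snapping back to the layer's model opacity. I haven't verified this variant, so treat it as a sketch only.

        // sketch: one discrete keyframe animation per image layer (untested)
        // frameStart, frameEnd and compositionDuration are placeholder doubles in seconds
        CAKeyframeAnimation* fade = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
        fade.duration = compositionDuration;              // run over the whole composition
        fade.calculationMode = kCAAnimationDiscrete;      // hard cuts instead of cross-fades
        fade.values = @[@0.0, @1.0, @0.0];                // hidden -> visible -> hidden
        // for discrete mode, keyTimes needs one more entry than values
        fade.keyTimes = @[@0.0,
                          @(frameStart / compositionDuration),
                          @(frameEnd / compositionDuration),
                          @1.0];
        fade.beginTime = AVCoreAnimationBeginTimeAtZero;  // a beginTime of 0.0 would be treated as CACurrentMediaTime()
        fade.removedOnCompletion = NO;                    // keep the last keyframe value after the animation ends
        fade.fillMode = kCAFillModeForwards;
        [Frame addAnimation:fade forKey:@"animateOpacity"];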

The video is then exported using an AVAssetExportSession as shown below.

    mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
    
    // create a layer instruction for our newly created animation tool
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:gVideoTrack];
    
    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    [instruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [mutableComposition duration])];
    [layerInstruction setOpacity:1.0f atTime:kCMTimeZero];
    [layerInstruction setOpacity:0.0f atTime:mutableComposition.duration];
    instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
    
    // set the instructions on our videoComposition
    mutableVideoComposition.instructions = [NSArray arrayWithObject:instruction];
    
    // export final composition to a video file
    
    // convert the videopath into a url for our AVAssetWriter to create a file at
    NSString* vidPath = CreateNSString(outputVideoPath);
    NSURL* vidURL = [NSURL fileURLWithPath:vidPath];
    
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPreset1280x720];
    
    exporter.outputFileType = AVFileTypeMPEG4;
    
    exporter.outputURL = vidURL;
    exporter.videoComposition = mutableVideoComposition;
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
    
    // Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            if (exporter.status == AVAssetExportSessionStatusCompleted)
            {
                DebugLog("!!!file created!!!");
                _Close();
            }
            else if(exporter.status == AVAssetExportSessionStatusFailed)
            {
                DebugLog("failed damn");
                DebugLog(cStringCopy([[[exporter error] localizedDescription] UTF8String]));
                DebugLog(cStringCopy([[[exporter error] description] UTF8String]));
                _Close();
            }
            else
            {
                DebugLog("NoIdea");
                _Close();
            }
        });
    }];
    
    

I have a feeling the animations are simply never being started, but I don't know. Am I going about splicing image data into a video like this the right way?

Any assistance would be greatly appreciated.


Answer 1:

Well, I solved my problem another way. The animation route wasn't working, so my solution was to compile all of my insertable images into a temporary video file and use that video to insert the images into my final output video.

Starting from the first link I originally posted, ASSETWriterInput for making Video from UIImages on Iphone Issues, I created the following function to create my temporary video:

void CreateFrameImageVideo(NSString* path)
{
    NSLog(@"Creating writer at path %@", path);
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
                                                              error:&error];

    NSLog(@"Creating video codec settings");
    NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:gVideoTrack.estimatedDataRate/*128000*/], AVVideoAverageBitRateKey,
                                   [NSNumber numberWithInt:gVideoTrack.nominalFrameRate],AVVideoMaxKeyFrameIntervalKey,
                                   AVVideoProfileLevelH264MainAutoLevel, AVVideoProfileLevelKey,
                                   nil];

    NSLog(@"Creating video settings");
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   codecSettings,AVVideoCompressionPropertiesKey,
                                   [NSNumber numberWithInt:1280], AVVideoWidthKey,
                                   [NSNumber numberWithInt:720], AVVideoHeightKey,
                                   nil];

    NSLog(@"Creating writter input");
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings] retain];

    NSLog(@"Creating adaptor");
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];

    [videoWriter addInput:writerInput];

    NSLog(@"Starting session");
    //Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];


    CMTime timeOffset = kCMTimeZero;//CMTimeMake(0, 600);

    NSLog(@"Video Width %d, Height: %d, writing frame video to file", gWidth, gHeight);

    CVPixelBufferRef buffer;

    for(int i = 0; i < gAnalysisFrames.size(); i++)
    {
        while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
            NSLog(@"Waiting inside a loop");
            NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
            [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
        }

        //Write samples:
        buffer = pixelBufferFromCGImage(gAnalysisFrames[i].frameImage, gWidth, gHeight);

        [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];



        timeOffset = CMTimeAdd(timeOffset, gAnalysisFrames[i].duration);
    }

    while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
        NSLog(@"Waiting outside a loop");
        NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
        [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
    }

    buffer = pixelBufferFromCGImage(gAnalysisFrames[gAnalysisFrames.size()-1].frameImage, gWidth, gHeight);
    [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];

    NSLog(@"Finishing session");
    //Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:timeOffset];
    BOOL successfulWrite = [videoWriter finishWriting];

    // if we failed to write the video
    if(!successfulWrite)
    {
        NSLog(@"Session failed with error: %@", [[videoWriter error] description]);

        // delete the temporary file created
        NSFileManager *fileManager = [NSFileManager defaultManager];
        if ([fileManager fileExistsAtPath:path]) {
            NSError *error;
            if ([fileManager removeItemAtPath:path error:&error] == NO) {
                NSLog(@"removeItemAtPath %@ error:%@", path, error);
            }
        }
    }
    else
    {
        NSLog(@"Session complete");
    }

    [writerInput release];
}

Once the video is created, it is loaded as an AVAsset and its track is extracted. The video is then inserted by replacing the following line (from the first code block in the original post):

[mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

with:

[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(timeOffset,gAnalysisFrames[i].duration)
                                     ofTrack:gFramesTrack
                                     atTime:CMTimeAdd(gAnalysisFrames[i].startTime, timeOffset) error:&gError];

where gFramesTrack is the AVAssetTrack created from the temporary frame video.
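
In case it helps, loading the temporary video back in and pulling out that track looks roughly like this (tempFramesVideoPath is just an illustrative name for the path that was passed to CreateFrameImageVideo):

AVURLAsset* framesAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:tempFramesVideoPath] options:nil];
AVAssetTrack* gFramesTrack = [[framesAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];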

All of the code relating to the CALayer and CABasicAnimation objects has been removed, because it just wasn't working.

Not the most elegant of solutions, I don't think, but at least it works. I hope someone finds this useful.

This code also works on iOS devices (tested using an iPad 3).

Side note: the DebugLog function from the first post is just a callback to a function that prints out log messages; they can be replaced with NSLog() calls if need be.
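
If you don't have that callback wired up and just want the snippets above to compile, a quick stand-in could be something like:

// quick stand-in for the DebugLog callback (takes a C string, prints via NSLog)
#define DebugLog(msg) NSLog(@"%s", (msg))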

