AVAssetWriter does not work with audio

Posted: 2011-03-04 22:41:23

I am trying to get audio working alongside the video in an iOS app. The video is fine, but no audio gets recorded to the file (my iPhone's speaker works fine).

Here is the initialization setup:

session = [[AVCaptureSession alloc] init];
    menu->session = session;
    menu_open = NO;
    session.sessionPreset = AVCaptureSessionPresetMedium;
    camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    microphone = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    menu->camera = camera;
    [session beginConfiguration];
    [camera lockForConfiguration:nil];
    if ([camera isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure])
        camera.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    if ([camera isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
        camera.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    if ([camera isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance])
        camera.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
    if ([camera hasTorch]) {
        if ([camera isTorchModeSupported:AVCaptureTorchModeOn])
            [camera setTorchMode:AVCaptureTorchModeOn];
    }
    [camera unlockForConfiguration];
    [session commitConfiguration];
    AVCaptureDeviceInput * camera_input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [session addInput:camera_input];
    microphone_input = [[AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil] retain];
    AVCaptureVideoDataOutput * output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [session addOutput:output];
    output.minFrameDuration = CMTimeMake(1, 30);
    dispatch_queue_t queue = dispatch_queue_create("MY QUEUE", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    audio_output = [[AVCaptureAudioDataOutput alloc] init]; // alloc/init already returns an owned reference; the extra retain here leaked
    queue = dispatch_queue_create("MY QUEUE", NULL);
    AudioOutputBufferDelegate * special_delegate = [[[AudioOutputBufferDelegate alloc] init] autorelease];
    special_delegate->normal_delegate = self;
    [special_delegate retain];
    [audio_output setSampleBufferDelegate:special_delegate queue:queue];
    dispatch_release(queue);
    [session startRunning];

Here is the start and end of recording:

if (recording) { //Hence stop recording
    [video_button setTitle:@"Video" forState:UIControlStateNormal];
    recording = NO;
    [writer_input markAsFinished];
    [audio_writer_input markAsFinished];
    [video_writer endSessionAtSourceTime:CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:start_time], 30)];
    [video_writer finishWriting];
    UISaveVideoAtPathToSavedPhotosAlbum(temp_url, self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
    [start_time release];
    [temp_url release];
    [av_adaptor release];
    [microphone lockForConfiguration:nil];
    [session beginConfiguration];
    [session removeInput:microphone_input];
    [session removeOutput:audio_output];
    [session commitConfiguration];
    [microphone unlockForConfiguration];
    [menu restateConfigiration];
    [vid_off play];
} else { //Start recording
    [vid_on play];
    [microphone lockForConfiguration:nil];
    [session beginConfiguration];
    [session addInput:microphone_input];
    [session addOutput:audio_output];
    [session commitConfiguration];
    [microphone unlockForConfiguration];
    [menu restateConfigiration];
    [video_button setTitle:@"Stop" forState:UIControlStateNormal];
    recording = YES;
    NSError *error = nil;
    NSFileManager * file_manager = [[NSFileManager alloc] init];
    temp_url = [[NSString alloc] initWithFormat:@"%@/%@", NSTemporaryDirectory(), @"temp.mp4"];
    [file_manager removeItemAtPath:temp_url error:NULL];
    [file_manager release];
    video_writer = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:temp_url] fileType:AVFileTypeMPEG4 error:&error];
    NSDictionary *video_settings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        AVVideoCodecH264, AVVideoCodecKey,
                                        [NSNumber numberWithInt:360], AVVideoWidthKey,
                                        [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
    writer_input = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:video_settings] retain];
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
    audio_writer_input = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                              outputSettings:[NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                  [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                  [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                  [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                                  [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey, nil]] retain];
    audio_writer_input.expectsMediaDataInRealTime = YES;
    av_adaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writer_input sourcePixelBufferAttributes:NULL] retain];
    [video_writer addInput:writer_input];
    [video_writer addInput:audio_writer_input];
    [video_writer startWriting];
    [video_writer startSessionAtSourceTime:CMTimeMake(0, 1)];
    start_time = [[NSDate alloc] init];
}

Here is the delegate for the audio:

@implementation AudioOutputBufferDelegate
    -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        if (normal_delegate->recording) {
            CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:normal_delegate->start_time], 30));
            [normal_delegate->audio_writer_input appendSampleBuffer:sampleBuffer];
        }
    }
@end

The video methods don't matter here, since they work. restateConfigiration just tidies up the session configuration (otherwise the torch turns off, etc.):

[session beginConfiguration];
    switch (quality) {
        case Low:
            session.sessionPreset = AVCaptureSessionPresetLow;
            break;
        case Medium:
            session.sessionPreset = AVCaptureSessionPreset640x480;
            break;
    }
    [session commitConfiguration];
    [camera lockForConfiguration:nil];
    if ([camera isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure])
        camera.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    if ([camera isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
        camera.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    if ([camera isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance])
        camera.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
    if ([camera hasTorch]) {
        if (torch) {
            if ([camera isTorchModeSupported:AVCaptureTorchModeOn])
                [camera setTorchMode:AVCaptureTorchModeOn];
        } else {
            if ([camera isTorchModeSupported:AVCaptureTorchModeOff])
                [camera setTorchMode:AVCaptureTorchModeOff];
        }
    }
    [camera unlockForConfiguration];

Thanks for any help.

Comments:

Answer 1:

AVAssetWriter and Audio

This may be the same problem as mentioned in the linked post. Try commenting out these lines:

[writer_input markAsFinished];
[audio_writer_input markAsFinished];
[video_writer endSessionAtSourceTime: CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate: start_time],30)];

Edit

I don't know that the way you are setting the presentation timestamps is necessarily wrong. The way I handle it is with a local variable that is set to zero on startup. Then, when my delegate receives the first packet:

if (_startTime.value == 0) {
    _startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
}

and then:

[bufferWriter->writer startWriting];
[bufferWriter->writer startSessionAtSourceTime:_startTime];

Your code looks valid, since you are computing the time delta for each packet you receive. However, AVFoundation computes this for you, and also optimizes the timestamps for placement in the interleaved container. Another thing I'm unsure of: each CMSampleBufferRef for audio can contain more than one data buffer, where each data buffer has its own PTS. I am not sure whether setting the PTS automatically adjusts all the other data buffers.

Where my code differs from yours is that I use a single dispatch queue for both audio and video. In the callback I use (some code removed):

switch (bufferWriter->writer.status) {
    case AVAssetWriterStatusUnknown:

        if (_startTime.value == 0) {
            _startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        }

        [bufferWriter->writer startWriting];
        [bufferWriter->writer startSessionAtSourceTime:_startTime];

        //Break if not ready, otherwise fall through.
        if (bufferWriter->writer.status != AVAssetWriterStatusWriting) {
            break;
        }

    case AVAssetWriterStatusWriting:
        if (captureOutput == self.captureManager.audioOutput) {
            if (!bufferWriter->audioIn.readyForMoreMediaData) {
                break;
            }

            @try {
                if (![bufferWriter->audioIn appendSampleBuffer:sampleBuffer]) {
                    [self delegateMessage:@"Audio Writing Error" withType:ERROR];
                }
            }
            @catch (NSException *e) {
                NSLog(@"Audio Exception: %@", [e reason]);
            }
        }
        else if (captureOutput == self.captureManager.videoOutput) {

            if (!bufferWriter->videoIn.readyForMoreMediaData) {
                break;
            }

            @try {
                if (!frontCamera) {
                    if (![bufferWriter->videoIn appendSampleBuffer:sampleBuffer]) {
                        [self delegateMessage:@"Video Writing Error" withType:ERROR];
                    }
                }
                else {
                    CMTime pt = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

                    flipBuffer(sampleBuffer, pixelBuffer);

                    if (![bufferWriter->adaptor appendPixelBuffer:pixelBuffer withPresentationTime:pt]) {
                        [self delegateMessage:@"Video Writing Error" withType:ERROR];
                    }
                }
            }
            @catch (NSException *e) {
                NSLog(@"Video Exception: %@", [e reason]);
            }
        }

        break;
    case AVAssetWriterStatusCompleted:
        return;
    case AVAssetWriterStatusFailed:
        [self delegateMessage:@"Critical Error Writing Queues" withType:ERROR];
        bufferWriter->writer_failed = YES;
        _broadcastError = YES;
        [self stopCapture];
        return;
    case AVAssetWriterStatusCancelled:
        break;
    default:
        break;
}
Comments:

Thanks for your answer. I removed those lines and it works just as before. Still no audio.
At first glance it looks OK. The only thing I'm not sure of is how you are handling the sample buffers' PTS.
I'll assume your audio delegate is being called; I didn't follow how you set that up. I'll edit my post to show how I handle the PTS.
Thanks again. I'll look into this again later with the help of your code. Cheers!
You're a genius! It works! I'm so happy. Thank you. Ask if you'd like a promo code for the app.
