A First Look at the AVFoundation Framework (Part 2)
Posted by 蜗牛的脚印
Continuing the summary from part one.
Part one of the series: A First Look at the AVFoundation Framework (Part 1)
In the first article, we covered the following points:
1. An overall picture of the AVFoundation framework
2. AVSpeechSynthesizer, the text-to-speech class
3. AVAudioPlayer, the audio playback class
4. AVAudioRecorder, the audio recording class
5. AVAudioSession, the audio session class
Part one was essentially all about the topics above, so what should this second part cover? At first I planned to follow the structure of Learning AV Foundation (《AVFoundation开发秘籍》), but since part one was basically all audio, it makes more sense for part two to cover video. Media processing is, above all, audio and video; having just covered audio, striking while the iron is hot and summarizing video gives us a rough overall picture of AVFoundation. So this article won't follow the book's order and goes straight to video. That doesn't mean we're skipping the book's other topics — this is a series, and we'll come back to them once we're done with video.
Demo for this article: Demo 地址
Video Playback
At the very beginning of this series we summarized the various ways to play video, so we won't repeat the definitions and usage of the playback classes AVPlayerItem, AVPlayerLayer, and AVPlayer here; if you need them, see the earlier article:
That earlier article only covered the most basic playback functionality; we'll dig into the other features as they come up later. Today's focus for video is the main event: video recording.
Video Recording: AVCaptureSession + AVCaptureMovieFileOutput
Let's first lay out the whole flow of recording video with AVCaptureSession + AVCaptureMovieFileOutput, and then walk through the details of each step against that flow:
1. Initialize an AVCaptureSession to get a capture session object.
2. Use the AVCaptureDevice class method defaultDeviceWithMediaType: to get an AVCaptureDevice for each MediaType.
3. With that AVCaptureDevice, create an AVCaptureDeviceInput and add it to the AVCaptureSession. Note that you need separate inputs for audio and video; we'll see this in the code.
4. Where there's an input there's an output: create an AVCaptureMovieFileOutput and add it to the AVCaptureSession.
5. Call connectionWithMediaType: on the AVCaptureMovieFileOutput to get an AVCaptureConnection. The AVCaptureConnection controls the flow of data from input to output and also lets you set some recording properties.
6. Initialize an AVCaptureVideoPreviewLayer from the same AVCaptureSession to preview the frames you are about to record. Note that recording has not started yet at this point.
7. Now the AVCaptureSession has its inputs, output, connection, and preview layer — call startRunning.
8. Start recording with AVCaptureMovieFileOutput's startRecordingToOutputFileURL:.
9. When you've recorded enough, let your running AVCaptureSession rest with stopRunning, and call stopRecording on the AVCaptureMovieFileOutput. That's the whole process!
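Steps 7–9 above boil down to just a few calls. A minimal sketch (assuming `captureSession` and `captureMovieFileOutput` properties like the ones built below, a temporary output path chosen for illustration, and that self adopts AVCaptureFileOutputRecordingDelegate):

```objc
// Step 7: start the session — the preview comes alive, but nothing is written yet
[self.captureSession startRunning];

// Step 8: start writing to a file; the delegate receives the start/finish callbacks
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"]];
[self.captureMovieFileOutput startRecordingToOutputFileURL:outputURL
                                         recordingDelegate:self];

// Step 9: when you have enough footage, stop both
[self.captureMovieFileOutput stopRecording];
[self.captureSession stopRunning];
```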
That makes the AVCaptureSession + AVCaptureMovieFileOutput recording flow clear, and we've already touched on some of the finer points. Below is the Demo's effect; since it was tested on a real device, here are just two screenshots — run the Demo to see the details:
(Screenshots: recording and playback)
(An aside: I happened to notice that when you point a camera at the iPhone X's front camera you really can see a blinking red dot. This backs up the online tip that you can sweep a dark hotel room with your phone camera to check for hidden pinhole cameras. A little life hack for anyone who travels a lot! ^_^)
Those two screenshots roughly show the record-and-play flow. Now for the main part: reading through the AVCaptureSession + AVCaptureMovieFileOutput code:
Code walkthrough, step 1:
```objc
self.captureSession = ({
    // Session preset (resolution) setup
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // First check whether this device supports the preset you want to set
    if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        /*
         Available presets (quality / resolution):
         AVCaptureSessionPresetHigh      High     Highest recording quality, varies per device
         AVCaptureSessionPresetMedium    Medium   Suited to Wi-Fi sharing; actual value may change
         AVCaptureSessionPresetLow       Low      Suited to 3G sharing
         AVCaptureSessionPreset640x480   640x480  VGA
         AVCaptureSessionPreset1280x720  1280x720 720p HD
         AVCaptureSessionPresetPhoto     Photo    Full photo resolution; video output not supported
         */
        [session setSessionPreset:AVCaptureSessionPresetMedium];
    }
    session;
});
```
NOTE: In the Demo I explain why we can use the self.captureSession = ({ ... }) form; take a look if you're curious. I only learned it while studying this and had to look up why it works — you learn something new every day!
Code walkthrough, steps 2 and 3:
```objc
- (BOOL)SetSessioninputs:(NSError *)error {
    /*
     Video input:
     AVCaptureDevice      — the capture device
     AVCaptureDeviceInput — the capture device input
     */
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!videoInput) {
        return NO;
    }
    // Add the video input to the capture session
    if ([self.captureSession canAddInput:videoInput]) {
        [self.captureSession addInput:videoInput];
    } else {
        return NO;
    }
    /* Add the audio capture device */
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (!audioInput) {  // check the input, not the device
        return NO;
    }
    if ([self.captureSession canAddInput:audioInput]) {
        [self.captureSession addInput:audioInput];
    }
    return YES;
}
```
NOTE: The thing to watch in this code is to check with canAddInput before calling addInput on the captureSession, to see whether the input can actually be added — it makes the code more robust. Moving on!
Code walkthrough, steps 4 and 5:
```objc
// Initialize the movie file output
self.captureMovieFileOutput = ({
    /*
     Output classes:
     a. AVCaptureMovieFileOutput  — writes a movie file
     b. AVCaptureVideoDataOutput  — vends captured video frames
     c. AVCaptureAudioDataOutput  — vends captured audio data
     d. AVCaptureStillImageOutput — captures still images
     */
    AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
    /* An AVCaptureConnection controls the flow of data from an input to an output. */
    AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
    /*
     Video stabilization was introduced with iOS 6 and the iPhone 4S. The iPhone 6 added a
     stronger, smoother mode called cinematic video stabilization. (The related API changes
     were not reflected in the documentation at the time, but you can see them in the headers.)
     Stabilization is configured on the AVCaptureConnection, not on the capture device, and
     not every device format supports every mode, so in practice check support first:
     typedef NS_ENUM(NSInteger, AVCaptureVideoStabilizationMode) {
         AVCaptureVideoStabilizationModeOff       = 0,
         AVCaptureVideoStabilizationModeStandard  = 1,
         AVCaptureVideoStabilizationModeCinematic = 2,
         AVCaptureVideoStabilizationModeAuto      = -1,  // automatic
     } NS_AVAILABLE_IOS(8_0) __TVOS_PROHIBITED;
     */
    if ([connection isVideoStabilizationSupported]) {
        connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
        // Keep the video orientation in sync with the preview layer
        connection.videoOrientation = [self.captureVideoPreviewLayer connection].videoOrientation;
    }
    if ([self.captureSession canAddOutput:output]) {
        [self.captureSession addOutput:output];
    }
    output;
});
```
NOTE: As mentioned earlier, besides connecting the input and the output, the AVCaptureConnection also exposes some recording properties you can set, as shown in the code; the details are in the code comments.
Code walkthrough, step 6:
```objc
/* Preview layer for the recording */
self.captureVideoPreviewLayer = ({
    AVCaptureVideoPreviewLayer *preViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    preViewLayer.frame = CGRectMake(10, 50, 355, 355);
    /*
     AVLayerVideoGravityResizeAspect:     keep aspect ratio; unfilled areas show black bars
     AVLayerVideoGravityResizeAspectFill: keep aspect ratio; fill the whole layer
     AVLayerVideoGravityResize:           stretch to fill the whole layer
     */
    preViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:preViewLayer];
    self.view.layer.masksToBounds = YES;
    preViewLayer;
});
```
NOTE: The session passed to AVCaptureVideoPreviewLayer's initWithSession: here is the same session we initialized earlier.
And that's it for the walkthrough — there's nothing special to say about starting and stopping. One more thing is worth covering, though: AVCaptureFileOutputRecordingDelegate. The name says it all — it's the delegate of our AVCaptureMovieFileOutput. The delegate is set in the method that starts recording:
```objc
- (void)startRecordingToOutputFileURL:(NSURL *)outputFileURL
                    recordingDelegate:(id<AVCaptureFileOutputRecordingDelegate>)delegate;
```
That's the start method, and the AVCaptureFileOutputRecordingDelegate at the end is the delegate we need to pay attention to. Here are its methods, with the key points annotated:
```objc
@protocol AVCaptureFileOutputRecordingDelegate <NSObject>

@optional

// Called when the output has started writing data to the file. If an error prevents any
// data from being written, this method may not be called — but willFinish... and
// didFinish... below are always called, even if no data is written. Don't assume a
// specific thread, and keep the implementation as lightweight as possible.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections;

// Called whenever a pauseRecording request from the client is actually honored. It is safe
// to change what the file output is doing (starting a new file, for example) from within
// this method. If recording is stopped — manually or due to an error — this method is not
// guaranteed to be called, even if pauseRecording was called earlier. In other words, if
// you call the stop method, don't rely on this delegate method firing.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didPauseRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections NS_AVAILABLE(10_7, NA);

// Called whenever a paused recording is successfully resumed at the client's request.
// The same caveats as for the pause callback apply.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didResumeRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections NS_AVAILABLE(10_7, NA);

// Called when the output is about to stop writing new samples to the file, either because
// startRecordingToOutputFileURL:recordingDelegate: or stopRecording was called, or because
// an error occurred (error is nil if there was none). Always called for each recording
// request, even if no data was successfully written.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
willFinishRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error NS_AVAILABLE(10_7, NA);

@required

// The one required method: called when all pending data has been written to the file,
// whether recording stopped normally or due to an error (error is nil on success).
// Don't assume this is called on any specific thread.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error;

@end
```
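A minimal implementation of the two most common callbacks might look like this (a sketch for illustration — what you do with the finished file is up to you):

```objc
#pragma mark -- AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections {
    NSLog(@"Started writing to %@", fileURL);
}

// The only required method: fires whether recording ended normally or with an error
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    if (error) {
        NSLog(@"Recording failed: %@", error);
        return;
    }
    // The finished movie now lives at outputFileURL — play it,
    // upload it, or save it to the photo library from here.
    NSLog(@"Finished writing to %@", outputFileURL);
}
```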
That covers everything to watch out for when recording video with AVCaptureSession + AVCaptureMovieFileOutput. Reading the code and the article separately like this may feel unfriendly and hard to follow; the best approach is to follow the article's text while reading the actual code in the Demo — everything is in the Demo!
Video Recording: AVCaptureSession + AVAssetWriter
Having covered AVCaptureSession + AVCaptureMovieFileOutput, let's turn to AVCaptureSession + AVAssetWriter. This flow is more complex than the previous one; here's a rough outline first, and then we'll break it down:
1. Create the capture session
2. Set up the video input and output
3. Set up the audio input and output
4. Add the video preview layer
5. Start capturing data — nothing is written at this point; writing begins when the user taps record
6. Initialize an AVAssetWriter; we receive the video and audio data streams and write them to a file with the AVAssetWriter — this step we implement ourselves
Boiled down to these six points it looks simpler than the previous flow, but it's actually more complex. Break the six steps apart and you'll see they involve a bit more than before:
1. Create the dispatch queues we'll need (later you'll see why they're needed)
2. Initialize the AVCaptureSession
3. Create the inputs: use AVCaptureDevice with an AVMediaType to initialize AVCaptureDeviceInput objects — again separate ones for audio and video, as before — and add them to the capture session
4. Initialize an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput, add them to the AVCaptureSession, and set each one's setSampleBufferDelegate with the queues you created
5. Get an AVCaptureVideoPreviewLayer from the AVCaptureSession to preview what you're shooting
6. Initialize an AVAssetWriter and give it AVAssetWriterInput objects via addInput; AVAssetWriterInput is also split into video and audio by AVMediaType. This is the key step, with many parameters to configure!
7. Start capturing with AVCaptureSession startRunning. Captured data flows to the delegates of the data outputs you configured, which conform to AVCaptureVideoDataOutputSampleBufferDelegate. Inside those delegate methods you call startWriting on the AVAssetWriter object to begin writing data
8. When writing finishes, AVAssetWriter's finishWritingWithCompletionHandler: runs, and there you can take the recorded video and do whatever processing you need!
9. The Demo also uses the Photos framework, which is worth studying on its own
As before, let's walk through the code for each step:
Code walkthrough, step 1:
```objc
#pragma mark -- initDispatchQueue
- (void)initDispatchQueue {
    // Video queue
    self.videoDataOutputQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_VIDEO, DISPATCH_QUEUE_SERIAL);
    /*
     dispatch_set_target_queue is used here to raise the priority of the serial
     videoDataOutputQueue. Without it, queues we create run at the same priority
     as the default-priority global dispatch queue.
     */
    dispatch_set_target_queue(self.videoDataOutputQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
    // Audio queue
    self.audioDataOutputQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_AUDIO, DISPATCH_QUEUE_SERIAL);
    // Writer queue
    self.writingQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_ASSET_WRITER, DISPATCH_QUEUE_SERIAL);
}
```
Code walkthrough, steps 2 and 3:
```objc
#pragma mark -- Initialize AVCaptureDevice and AVCaptureDeviceInput
- (BOOL)SetSessioninputs:(NSError *)error {
    // For why this form works and what the variables mean, see LittieVideoController
    self.captureSession = ({
        AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
        if ([captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
            [captureSession setSessionPreset:AVCaptureSessionPresetMedium];
        }
        captureSession;
    });
    /*
     Video input:
     AVCaptureDevice      — the capture device
     AVCaptureDeviceInput — the capture device input
     */
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (!videoInput) {
        return NO;
    }
    // Add the video input to the capture session
    if ([self.captureSession canAddInput:videoInput]) {
        [self.captureSession addInput:videoInput];
    } else {
        return NO;
    }
    /* Add the audio capture device */
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (!audioInput) {  // check the input, not the device
        return NO;
    }
    if ([self.captureSession canAddInput:audioInput]) {
        [self.captureSession addInput:audioInput];
    }
    return YES;
}
```
Code walkthrough, steps 4 and 5:
```objc
#pragma mark -- Add AVCaptureVideoDataOutput and AVCaptureAudioDataOutput to the session
- (void)captureSessionAddOutputSession {
    // Video data output
    self.videoDataOutput = ({
        AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        videoDataOutput.videoSettings = nil;
        // Discard late frames immediately to save memory (the default is YES)
        videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
        videoDataOutput;
    });
    [self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
    }
    // Audio data output
    self.audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
    [self.audioDataOutput setSampleBufferDelegate:self queue:self.audioDataOutputQueue];
    if ([self.captureSession canAddOutput:self.audioDataOutput]) {
        [self.captureSession addOutput:self.audioDataOutput];
    }
    /* Preview layer for the recording */
    self.captureVideoPreviewLayer = ({
        AVCaptureVideoPreviewLayer *preViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
        preViewLayer.frame = CGRectMake(10, 50, 355, 355);
        /*
         AVLayerVideoGravityResizeAspect:     keep aspect ratio; unfilled areas show black bars
         AVLayerVideoGravityResizeAspectFill: keep aspect ratio; fill the whole layer
         AVLayerVideoGravityResize:           stretch to fill the whole layer
         */
        preViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        [self.view.layer addSublayer:preViewLayer];
        self.view.layer.masksToBounds = YES;
        preViewLayer;
    });
}
```
NOTE: Pay attention to setSampleBufferDelegate here. This one method explains two things: first, why we need the queues; second, why we process the captured video and audio data inside the methods of the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
Code walkthrough, step 6 (the key part — everything worth saying is in the code comments):
```objc
#pragma mark -- Initialize AVAssetWriter and its inputs
- (void)initAssetWriterInputAndOutput {
    NSError *error;
    self.assetWriter = ({
        AVAssetWriter *assetWrite = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:self.dataDirectory]
                                                              fileType:AVFileTypeMPEG4
                                                                 error:&error];
        NSParameterAssert(assetWrite);
        assetWrite;
    });
    // Bits per pixel → target bitrate
    CGSize outputSize = CGSizeMake(355, 355);
    NSInteger numPixels = outputSize.width * outputSize.height;
    CGFloat bitsPerPixel = 6.0;
    NSInteger bitsPerSecond = numPixels * bitsPerPixel;
    /*
     AVVideoCompressionPropertiesKey — hardware-encoder parameters:
     AVVideoAverageBitRateKey          target bitrate (pixel count × bits per pixel)
     AVVideoMaxKeyFrameIntervalKey     max keyframe interval; 1 = every frame is a keyframe,
                                       larger values give higher compression
     AVVideoExpectedSourceFrameRateKey expected frame rate
     */
    NSDictionary *videoCompressionDic = @{AVVideoAverageBitRateKey : @(bitsPerSecond),
                                          AVVideoExpectedSourceFrameRateKey : @(30),
                                          AVVideoMaxKeyFrameIntervalKey : @(30),
                                          AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel};
    /*
     AVVideoScalingModeKey — fill (scaling) mode
     AVVideoCodecKey       — codec
     */
    NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                               AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                               AVVideoWidthKey : @(outputSize.height),
                                               AVVideoHeightKey : @(outputSize.width),
                                               AVVideoCompressionPropertiesKey : videoCompressionDic};
    if ([self.assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
        self.videoWriterInput = ({
            AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                           outputSettings:videoCompressionSettings];
            NSParameterAssert(input);
            // expectsMediaDataInRealTime must be YES when pulling data from a capture session live
            input.expectsMediaDataInRealTime = YES;
            input.transform = CGAffineTransformMakeRotation(M_PI / 2.0);
            input;
        });
        if ([self.assetWriter canAddInput:self.videoWriterInput]) {
            [self.assetWriter addInput:self.videoWriterInput];
        }
    }
    /*
     The settings below affect whether audio is recorded correctly. Keys you can set
     (these also apply to AVAudioRecorder, covered later):
     AVNumberOfChannelsKey    number of channels
     AVSampleRateKey          sample rate, usually 44100
     AVLinearPCMBitDepthKey   bit depth, usually 16 or 32
     AVEncoderAudioQualityKey quality
     AVEncoderBitRateKey      encoder bitrate, usually 128000
     AVChannelLayoutKey       an NSData wrapping an AudioChannelLayout; if you use it,
                              zero the struct first, e.g. with
                              AudioChannelLayout acl; bzero(&acl, sizeof(acl));
                              then pass [NSData dataWithBytes:&acl length:sizeof(acl)]
     */
    NSDictionary *audioSettings = @{AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                    AVEncoderBitRatePerChannelKey : @(64000),
                                    AVSampleRateKey : @(44100.0),
                                    AVNumberOfChannelsKey : @(1)};
    if ([self.assetWriter canApplyOutputSettings:audioSettings forMediaType:AVMediaTypeAudio]) {
        self.audioWriterInput = ({
            AVAssetWriterInput *input = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                       outputSettings:audioSettings];
            /*
             Note the difference between NSParameterAssert and NSAssert: NSAssert takes an
             arbitrary condition plus a message, e.g. NSAssert(count > 10, @"count must be > 10")
             fails whenever count <= 10; NSParameterAssert does not support such conditions.
             */
            NSParameterAssert(input);
            input.expectsMediaDataInRealTime = YES;
            input;
        });
        if ([self.assetWriter canAddInput:self.audioWriterInput]) {
            [self.assetWriter addInput:self.audioWriterInput];
        }
    }
    self.writeState = FMRecordStateRecording;
}
```
We won't go through the start and finish parts in the article — once again, read the Demo! All of these comments are in the Demo, and reading the code alongside its comments works much better than reading the article cold. Finally, let's compare the two recording approaches:
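Before the comparison, the heart of the start/finish part — the data-output callback that feeds the AVAssetWriter — is worth sketching. This is a simplified version under assumptions: the property names match the code above, a single shared callback handles both outputs, and the real Demo additionally dispatches work onto `writingQueue`:

```objc
// Both AVCaptureVideoDataOutput and AVCaptureAudioDataOutput deliver here,
// on the queues passed to setSampleBufferDelegate:queue:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (self.writeState != FMRecordStateRecording) {
        return;  // user hasn't tapped record yet
    }
    // Start the writer session on the first buffer so timestamps line up
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:
            CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }
    // Append to the matching input; always check readyForMoreMediaData first,
    // otherwise appendSampleBuffer: can raise an exception
    if (output == self.videoDataOutput) {
        if (self.videoWriterInput.readyForMoreMediaData) {
            [self.videoWriterInput appendSampleBuffer:sampleBuffer];
        }
    } else if (output == self.audioDataOutput) {
        if (self.audioWriterInput.readyForMoreMediaData) {
            [self.audioWriterInput appendSampleBuffer:sampleBuffer];
        }
    }
}

// When the user stops recording (the state value here is an assumed name):
- (void)stopWriting {
    self.writeState = FMRecordStateFinish;
    [self.assetWriter finishWritingWithCompletionHandler:^{
        // The finished file is at the URL the writer was created with
    }];
}
```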
Comparing AVCaptureMovieFileOutput and AVAssetWriter
Similarities: data capture happens in an AVCaptureSession in both cases, the video and audio inputs are the same, and the preview is identical.
Differences: the outputs differ.
AVCaptureMovieFileOutput needs only a single output: specify a file path and the video and audio are written to it, with no other complex work required.
AVAssetWriter needs two separate outputs, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput; you take each output's data and process it yourself. The configurable parameters also differ: AVAssetWriter exposes many more.
Trimming also differs. With AVCaptureMovieFileOutput the system has already written the data to a file, so to trim you must read a complete video back out of the file and then process it; with AVAssetWriter you hold the raw data streams before any video has been composed, and you process the streams directly — so the two approaches trim in different ways.
About the first approach: WeChat's official article on optimizing video recording contains this passage:
"So we used AVCaptureMovieFileOutput (640×480) to generate the video file directly, and shooting was smooth. But a 6-second recording was 2 MB+, and compressing it with MMovieDecoder + MMovieWriter took at least 7–8 s, which hurt how quickly a short video could be sent from the chat window."
That passage reflects the first approach's weakness. While reading related material I also came across this:
"If you want more control over the audio and video output, you can use AVCaptureVideoDataOutput and AVCaptureAudioDataOutput instead of the AVCaptureMovieFileOutput discussed in the previous section. These outputs capture video and audio sample buffers respectively and send them to their delegates. The delegate either processes the sample buffers (for example, applying a filter to the video) or passes them through unchanged. The sample buffers can then be written to a file using an AVAssetWriter object."
That puts the trade-offs between the two approaches into perspective — I hope every fellow developer who reads this takes something away from it.
My blog will soon be syndicated to the Tencent Cloud+ Community; you're welcome to join me there.
Finally: the Demo link