Audio Unit file recording with aurioTouch - AudioStreamBasicDescription configuration issue?

Posted: 2015-01-13 06:46:32

I've started learning Audio Units through aurioTouch. After a few days of studying Audio Units I still feel somewhat lost, and I suspect I'm missing something very obvious.

The full source can be viewed at http://pastebin.com/LXLYDEhy

The relevant parts are also listed below.

In my performRender callback, I've changed the code to:

static OSStatus performRender (void                         *inRefCon,
                               AudioUnitRenderActionFlags   *ioActionFlags,
                               const AudioTimeStamp         *inTimeStamp,
                               UInt32                       inBusNumber,
                               UInt32                       inNumberFrames,
                               AudioBufferList              *ioData)
{
    OSStatus err = noErr;
    AudioController *audioController = (__bridge AudioController *)inRefCon;
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mData = NULL;
    OSStatus status;
    status = AudioUnitRender(cd.rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList); // bufferList.mBuffers[0].mData is null
    status = ExtAudioFileWriteAsync(audioController.extAudioFileRef, bufferList.mNumberBuffers, &bufferList);

The audio unit is set up like this:

- (AudioStreamBasicDescription)getAudioDescription
{
    AudioStreamBasicDescription audioDescription = {0};
    audioDescription.mFormatID          = kAudioFormatLinearPCM;
    audioDescription.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    audioDescription.mChannelsPerFrame  = 1;
    audioDescription.mBytesPerPacket    = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
    audioDescription.mFramesPerPacket   = 1;
    audioDescription.mBytesPerFrame     = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
    audioDescription.mBitsPerChannel    = 8 * sizeof(SInt16);
    audioDescription.mSampleRate        = 44100.0;
    return audioDescription;
}



- (void)setupIOUnit
{
    try {
        // Create a new instance of AURemoteIO

        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");

        //  Enable input and output on AURemoteIO
        //  Input is enabled on the input scope of the input element
        //  Output is enabled on the output scope of the output element

        UInt32 one = 1;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");

        // Explicitly set the input and output client formats
        // sample rate = 44100, num channels = 1, format = 32 bit floating point

        CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
//      AudioStreamBasicDescription audioFormat = [self getAudioDescription];
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on AURemoteIO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on AURemoteIO");

        // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
        // of samples it will be asked to produce on any single given call to AudioUnitRender
        UInt32 maxFramesPerSlice = 4096;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");

        // Get the property value back from AURemoteIO. We are going to use this value to allocate buffers accordingly
        UInt32 propSize = sizeof(UInt32);
        XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");

        _bufferManager = new BufferManager(maxFramesPerSlice);
        _dcRejectionFilter = new DCRejectionFilter;

        // We need references to certain data in the render callback
        // This simple struct is used to hold that information

        cd.rioUnit = _rioUnit;
        cd.bufferManager = _bufferManager;
        cd.dcRejectionFilter = _dcRejectionFilter;
        cd.muteAudio = &_muteAudio;
        cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

        AURenderCallbackStruct renderCallback;
        renderCallback.inputProc = performRender;
        renderCallback.inputProcRefCon = (__bridge void *)self; // bridge cast needed under ARC

        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");

        // Initialize the AURemoteIO instance
        XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");
    }
    catch (CAXException &e) {
        NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupIOUnit");
    }

    return;
}


Comments:

What is the question/problem? You've given us code but haven't said what error you're running into. - jn_pdx

Thanks jn_pdx. The problem is that this produces files containing only a header and no audio data; the files are about 58 bytes each. Yet every call returns a valid OSStatus of 0.

Answer 1:

Your code generally looks fine, but there is at least one important problem: you never allocate space for the data to be copied into the buffers; instead you explicitly set them to NULL. You should allocate the space and then pass it to AudioUnitRender.

Sample code:

AudioBufferList *bufferList;
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = 1024 * 4;     // 1024 frames * 4 bytes
bufferList->mBuffers[0].mData = calloc(1024, 4);      // zeroed backing storage

(Note that you may need to adjust the allocation sizes to fit your stream type, size, etc. The above is just sample code, but it addresses your main problem.)

Comments:

Hi jn_pdx. That didn't work, but I've given up on this approach. Instead, the real problem I'm trying to solve is in this thread. I'm nearly there, but it isn't working yet. Do you think you could help? ***.com/questions/27893428/…

Related questions:

Recording speaker output using an Audio Unit

Audio Unit recording: increasing the buffer's volume

How to unit test an Android audio recording app with robolectric

Recording audio in Python with PyAudio, error ||PaMacCore (AUHAL)|| ... msg=Audio Unit: cannot do in current context

Can I use AVAudioEngine to read from a file, process with an audio unit, and write to a file, faster than real time?

Recording audio output only from the iPhone speaker (excluding the microphone)