AudioUnit "sometimes" doesn't work. "Only" happens on the 6s (maybe the 6s Plus, but I haven't tested)

Posted: 2015-12-07 04:51:49

Question:

I use an AudioUnit for both playback and recording. The preferred settings are a sample rate of 48 kHz and an IO buffer duration of 0.02 s.
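
For reference, this is roughly how such a preferred setup can be requested through AVAudioSession before starting the unit (a minimal sketch; the PlayAndRecord category and the helper method name are assumptions rather than my actual session code, and only the 48 kHz and 0.02 s values come from the description above):

#import <AVFoundation/AVFoundation.h>

// Sketch only: request the preferred hardware settings from the audio session.
// The hardware may still hand back different values, which is worth logging.
- (void)configurePreferredAudioSession
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setPreferredSampleRate:48000.0 error:&error];
    [session setPreferredIOBufferDuration:0.02 error:&error];
    [session setActive:YES error:&error];

    if (error != nil)
    {
        NSLog(@"AVAudioSession setup error: %@", error);
    }
    NSLog(@"actual sample rate: %f, actual IO buffer duration: %f",
          session.sampleRate, session.IOBufferDuration);
}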

Here are the render callbacks for recording and playback:

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    IosAudioController *microphone = (__bridge IosAudioController *)inRefCon;

    // render audio into buffer
    OSStatus result = AudioUnitRender(microphone.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      inBusNumber,
                                      inNumberFrames,
                                      microphone.tempBuffer);
    checkStatus(result);
//    kAudioUnitErr_InvalidPropertyValue

    // notify delegate of new buffer list to process
    if ([microphone.dataSource respondsToSelector:@selector(microphone:hasBufferList:withBufferSize:withNumberOfChannels:)])
    {
        [microphone.dataSource microphone:microphone
                            hasBufferList:microphone.tempBuffer
                           withBufferSize:inNumberFrames
                     withNumberOfChannels:microphone.destinationFormat.mChannelsPerFrame];
    }

    return result;
}


/**
 This callback is called when the audioUnit needs new data to play through the
 speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    IosAudioController *output = (__bridge IosAudioController *)inRefCon;

    //
    // Try to ask the data source for audio data to fill out the output's
    // buffer list
    //
    if( [output.dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)] )
    {
        TPCircularBuffer *circularBuffer = [output.dataSource outputShouldUseCircularBuffer:output];
        if( !circularBuffer )
        {
//            SInt32 *left  = ioData->mBuffers[0].mData;
//            SInt32 *right = ioData->mBuffers[1].mData;
//            for(int i = 0; i < inNumberFrames; i++ )
//            {
//                left[  i ] = 0.0f;
//                right[ i ] = 0.0f;
//            }
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }

        /**
         Thank you Michael Tyson (A Tasty Pixel) for writing the TPCircularBuffer, you are amazing!
         */

        // Get the available bytes in the circular buffer
        int32_t availableBytes;
        void *buffer = TPCircularBufferTail(circularBuffer, &availableBytes);
        int32_t amount = 0;
//        float floatNumber = availableBytes * 0.25 / 48;
//        float speakerNumber = ioData->mBuffers[0].mDataByteSize * 0.25 / 48;

        for (int i = 0; i < ioData->mNumberBuffers; i++)
        {
            AudioBuffer abuffer = ioData->mBuffers[i];

            // Ideally we'd have all the bytes to be copied, but compare it against the available bytes (get min)
            amount = MIN(abuffer.mDataByteSize, availableBytes);

            // copy buffer to audio buffer which gets played after function return
            memcpy(abuffer.mData, buffer, amount);

            // set data size
            abuffer.mDataByteSize = amount;
        }
        // Consume those bytes ( this will internally push the head of the circular buffer )
        TPCircularBufferConsume(circularBuffer, amount);
    }
    else
    {
        //
        // Silence if there is nothing to output
        //
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    }
    return noErr;
}

_tempBuffer is configured with 4096 frames.
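
configureMicrophoneBufferList (called later in the setup code) is not shown here, so for completeness this is a sketch of how a 4096-frame buffer list like that can be allocated. The interleaved single-buffer layout and the use of destinationFormat.mBytesPerFrame are assumptions on my part, not the actual implementation:

// Sketch only: allocate a 4096-frame AudioBufferList for the input callback.
// Assumes an interleaved format, i.e. one buffer holding all channels.
- (void)configureMicrophoneBufferList
{
    UInt32 frames = 4096;
    UInt32 bytesPerFrame = self.destinationFormat.mBytesPerFrame;

    _tempBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    _tempBuffer->mNumberBuffers = 1;
    _tempBuffer->mBuffers[0].mNumberChannels = self.destinationFormat.mChannelsPerFrame;
    _tempBuffer->mBuffers[0].mDataByteSize = frames * bytesPerFrame;
    _tempBuffer->mBuffers[0].mData = calloc(frames, bytesPerFrame);
}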

Here is how I deallocate the audio unit. Note that because the VoiceProcessingIO unit may not work correctly if you start it, stop it, and start it again, I need to dispose of it and initialize it again every time. This is a known issue that has been posted about, but I don't remember the link.

if (_tempBuffer != NULL)
{
    for (unsigned i = 0; i < _tempBuffer->mNumberBuffers; i++)
    {
        free(_tempBuffer->mBuffers[i].mData);
    }
    free(_tempBuffer);
}
AudioComponentInstanceDispose(_audioUnit);

This configuration works fine on the 6, the 6 Plus, and earlier devices. But something goes wrong on the 6s (and possibly the 6s Plus). Sometimes (the kind of bug that is really annoying; I hate it; for me it happens 6 or 7 times out of 20 test runs) there is still incoming and outgoing data from the IOUnit, but no sound at all. It never seems to happen on the first run, so I guess it might be a memory issue with the IOUnit, and I still don't know how to fix it.

Any suggestions would be greatly appreciated.

Update

I forgot to show how I configure the AudioUnit:

// Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Apply format
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &_destinationFormat,
                                  sizeof(self.destinationFormat));
    checkStatus(status);
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &_destinationFormat,
                                  sizeof(self.destinationFormat));
    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct, 
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
    flag = 0;
    status = AudioUnitSetProperty(_audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output, 
                                  kInputBus,
                                  &flag, 
                                  sizeof(flag));

    [self configureMicrophoneBufferList];

    // Initialise
    status = AudioUnitInitialize(_audioUnit);


Answer 1:

Three things could be the problem:

For silence (during an underflow), you might want to try filling the buffers with inNumberFrames of zeros instead of returning them untouched.
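
In the underflow branch of the playback callback, that would look roughly like this (a sketch of this suggestion, reusing the buffer layout from the question's code):

// Sketch: zero-fill every output buffer on underflow instead of leaving
// whatever data was there before, then flag the render as silence.
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
{
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
return noErr;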

Inside an audio callback, Apple DTS recommends against using any Objective-C messaging (your respondsToSelector: call).
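
One way to get the respondsToSelector: check and the delegate call out of the render thread is to resolve them once when the data source is assigned and cache the result where the callback can read it as plain C. This is only a sketch; the _cachedCircularBuffer ivar and the setter shown here are hypothetical names, not part of the question's class:

// Sketch: cache the circular-buffer pointer outside the render callback.
// For strict real-time safety the cached pointer could instead live in a
// plain C struct passed as inRefCon.
@interface IosAudioController ()
{
    @public
    TPCircularBuffer *_cachedCircularBuffer;   // hypothetical ivar
}
@end

- (void)setDataSource:(id)dataSource
{
    _dataSource = dataSource;
    if ([dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)])
    {
        _cachedCircularBuffer = [dataSource outputShouldUseCircularBuffer:self];
    }
    else
    {
        _cachedCircularBuffer = NULL;
    }
}

// In playbackCallback the delegate call becomes a plain pointer read:
//     TPCircularBuffer *circularBuffer = output->_cachedCircularBuffer;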

You should not free the buffers or call AudioComponentInstanceDispose until audio processing has actually stopped. And since the audio unit runs on a separate real-time thread, it does not (get thread or CPU time and) truly stop until some time after your app makes the stop-audio call. I would wait a couple of seconds, and certainly not call (re)initialize or (re)start until after that delay.
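
In code, that could mean stopping the unit and deferring the teardown, for example (a sketch of this suggestion; the two-second delay is an arbitrary illustrative value):

// Sketch: stop the unit, then give the real-time render thread time to go
// quiet before uninitializing, disposing, and freeing the buffers.
AudioOutputUnitStop(_audioUnit);

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    AudioUnitUninitialize(_audioUnit);
    AudioComponentInstanceDispose(_audioUnit);
    _audioUnit = NULL;
    // ...free _tempBuffer here, once the callbacks can no longer fire...
});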

Comments:

Thanks for your reply. I will remove the Obj-C messaging. When there is no audio, OSStatus result = AudioUnitRender(); returns my buffer filled with NaN values. Do you have any idea why?
