iOS: Audio Unit RemoteIO not working on iPhone

Posted: 2012-10-26 11:47:39

I am trying to build my own custom sound-effect Audio Unit driven by microphone input. The app routes audio from the microphone to the speaker simultaneously. Applying the effect works in the simulator, but when I test on an actual iPhone I hear nothing at all. I am pasting my code in case someone can help me:

  - (id)init {
    self = [super init];
    if (!self) return nil;

    OSStatus status;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording (on RemoteIO, the input bus is 1 and the output bus is 0)
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;


    // Apply format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);


    // Allocate our own buffer (1 channel, 16 bits per sample => 2 bytes per frame).
    // In practice the render buffers contain 512 frames; if that ever changes,
    // processAudio accounts for it.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

This callback is invoked whenever new audio data from the microphone becomes available. However, when I test on the iPhone, execution never reaches it:

static OSStatus recordingCallback(void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData) {
    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

Comments:

Answer 1:

I solved my problem: I just had to initialize the audio session before playing/recording. I did so with the following code:

OSStatus status;

AudioSessionInitialize(NULL, NULL, NULL, self);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                               sizeof (sessionCategory),
                               &sessionCategory);

if (status != kAudioSessionNoError) {
    if (status == kAudioServicesUnsupportedPropertyError) {
        NSLog(@"AudioSessionSetProperty failed: unsupportedPropertyError");
    } else if (status == kAudioServicesBadPropertySizeError) {
        NSLog(@"AudioSessionSetProperty failed: badPropertySizeError");
    } else if (status == kAudioServicesBadSpecifierSizeError) {
        NSLog(@"AudioSessionSetProperty failed: badSpecifierSizeError");
    } else if (status == kAudioServicesSystemSoundUnspecifiedError) {
        NSLog(@"AudioSessionSetProperty failed: systemSoundUnspecifiedError");
    } else if (status == kAudioServicesSystemSoundClientTimedOutError) {
        NSLog(@"AudioSessionSetProperty failed: systemSoundClientTimedOutError");
    } else {
        NSLog(@"AudioSessionSetProperty failed! %ld", (long)status);
    }
}

AudioSessionSetActive(TRUE);

...

Comments:

Could you share the complete code related to this, please?
