Setting up an effect AudioUnit

Posted: 2012-08-29 01:47:59

Question:

I'm trying to write an iOS app that captures sound from the microphone, runs it through a high-pass filter, and performs some calculations on the processed audio. Based on Stefan Popp's MicInput (http://www.stefanpopp.de/2011/capture-iphone-microphone/), I'm trying to place an effect audio unit (more specifically, a high-pass filter effect unit) between the input and output of the I/O audio unit. With the AUs set up as shown below, the AudioUnitRender(fxAudioUnit, ...) call inside the I/O AU's render callback fails with error -10877 (kAudioUnitErr_InvalidElement).
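
The pull chain I'm trying to set up looks like this (the arrows are the AudioUnitRender calls made in the callbacks below, not actual AU connections):

    // RemoteIO output (bus 0) -> playbackCallback: copies _audioBuffer to the output
    // RemoteIO input  (bus 1) -> recordingCallback: AudioUnitRender(fxAudioUnit, ...)   <-- fails
    // fx AU                   -> fxAudioUnitRenderCallback: AudioUnitRender(audioUnit, ..., 1, ...)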

AudioProcessingWithAudioUnitAPI.h

//
//  AudioProcessingWithAudioUnitAPI.h
//

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVAudioSession.h>

@interface AudioProcessingWithAudioUnitAPI : NSObject

@property (readonly) AudioBuffer                        audioBuffer;
@property (readonly) AudioComponentInstance             audioUnit;
@property (readonly) AudioComponentInstance             fxAudioUnit;

...

@end

AudioProcessingWithAudioUnitAPI.m

//
//  AudioProcessingWithAudioUnitAPI.m
//

#import "AudioProcessingWithAudioUnitAPI.h"

@implementation AudioProcessingWithAudioUnitAPI

@synthesize isPlaying = _isPlaying;
@synthesize outputLevelDisplay = _outputLevelDisplay;
@synthesize audioBuffer = _audioBuffer;
@synthesize audioUnit = _audioUnit;
@synthesize fxAudioUnit = _fxAudioUnit;

...

#pragma mark Recording callback

static OSStatus recordingCallback(void *inRefCon, 
                              AudioUnitRenderActionFlags *ioActionFlags, 
                              const AudioTimeStamp *inTimeStamp, 
                              UInt32 inBusNumber, 
                              UInt32 inNumberFrames, 
                              AudioBufferList *ioData)
{

    // the data gets rendered here
    AudioBuffer buffer;

    // a variable where we check the status
    OSStatus status;

    /**
     This is a reference to the object that owns the callback.
     */
    AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;

    /**
     At this point we define the number of channels, which is mono
     for the iPhone. The number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * 2; // sample size
    buffer.mNumberChannels = 1; // one channel
    buffer.mData = malloc( inNumberFrames * 2 ); // buffer size

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

The -10877 error is raised by the following AudioUnitRender call:

    status = AudioUnitRender([audioProcessor fxAudioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

...

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    //do some further processing

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
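
For reference, kAudioUnitErr_InvalidElement (-10877) indicates that the element (bus) index passed to a call does not exist on that unit. Note that recordingCallback runs with inBusNumber == 1 (the I/O unit's input bus), while an effect unit such as the high-pass filter only exposes element 0. A sketch of the same call with the element fixed at 0 (an assumption, not a confirmed fix):

    status = AudioUnitRender([audioProcessor fxAudioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             0, // effect units have a single input/output element
                             inNumberFrames,
                             &bufferList);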


#pragma mark FX AudioUnit render callback
// This just pulls samples from the microphone (renders the I/O AU)
static OSStatus fxAudioUnitRenderCallback(void *inRefCon, 
                                      AudioUnitRenderActionFlags *ioActionFlags, 
                                      const AudioTimeStamp *inTimeStamp, 
                                      UInt32 inBusNumber, 
                                      UInt32 inNumberFrames, 
                                      AudioBufferList *ioData)
{

    OSStatus retorno;

    AudioProcessingWithAudioUnitAPI* audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*)inRefCon;

    retorno = AudioUnitRender([audioProcessor audioUnit],
                              ioActionFlags,
                              inTimeStamp,
                              inBusNumber,
                              inNumberFrames,
                              ioData);
    [audioProcessor hasError:retorno:__FILE__:__LINE__];

    return retorno;
}


#pragma mark Playback callback

static OSStatus playbackCallback(void *inRefCon, 
                             AudioUnitRenderActionFlags *ioActionFlags, 
                             const AudioTimeStamp *inTimeStamp, 
                             UInt32 inBusNumber, 
                             UInt32 inNumberFrames, 
                             AudioBufferList *ioData)
{

    /**
     This is a reference to the object that owns the callback.
     */
    AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;

    // iterate over incoming stream and copy to output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
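        // (min() is presumably a MIN-style macro from the original project,
        // e.g. #define min(a, b) ((a) < (b) ? (a) : (b)); it is not declared in this file)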

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;
    }
    return noErr;
}


#pragma mark - objective-c class methods
-(AudioProcessingWithAudioUnitAPI*)init
{
    self = [super init];
    if (self) {
        self.isPlaying = NO;
        [self initializeAudio];
    }
    return self;
}

-(void)initializeAudio
{
    OSStatus status;

    // We define the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output; // we want an output unit
    desc.componentSubType = kAudioUnitSubType_RemoteIO; // RemoteIO gives us both input and output
    desc.componentFlags = 0; // must be zero
    desc.componentFlagsMask = 0; // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

    // find the AU component by description
    AudioComponent component = AudioComponentFindNext(NULL, &desc);

    // create audio unit by component
    status = AudioComponentInstanceNew(component, &_audioUnit);

    [self hasError:status:__FILE__:__LINE__];

    // and now for the fx AudioUnit
    desc.componentType = kAudioUnitType_Effect;
    desc.componentSubType = kAudioUnitSubType_HighPassFilter;

    // find the AU component by description
    component = AudioComponentFindNext(NULL, &desc);

    // create audio unit by component
    status = AudioComponentInstanceNew(component, &_fxAudioUnit);

    [self hasError:status:__FILE__:__LINE__];

    // enable IO for recording on the input bus
    AudioUnitElement inputElement = 1;
    AudioUnitElement outputElement = 0;

    UInt32 flag = 1;
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, // use io
                              kAudioUnitScope_Input, // scope to input
                              inputElement, // select input bus (1)
                              &flag, // set flag
                              sizeof(flag));
    [self hasError:status:__FILE__:__LINE__];

    UInt32 anotherFlag = 0;
    // disable output (I don't want to hear back from the device)
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, // use io
                              kAudioUnitScope_Output, // scope to output
                              outputElement, // select output bus (0)
                              &anotherFlag, // set flag
                              sizeof(anotherFlag));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need to specify the format we want to work with.
     We use linear PCM because it is uncompressed and we work on raw data.

     We want 16 bits, 2 bytes per packet/frame, at 44.1 kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16; //65536
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // set the format on the output scope of the input element (the mic data we receive)
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Output, 
                              inputElement, 
                              &audioFormat, 
                              sizeof(audioFormat));

    [self hasError:status:__FILE__:__LINE__];

    // set the format on the input scope of the output element (the audio we supply for playback)
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Input, 
                              outputElement, 
                              &audioFormat, 
                              sizeof(audioFormat));
    [self hasError:status:__FILE__:__LINE__];

    /**
     We need to define a callback structure which holds
     a pointer to the recordingCallback and a reference to
     the audio processor object
     */
    AURenderCallbackStruct callbackStruct;

    // set recording callback
    callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
    callbackStruct.inputProcRefCon = (__bridge void*)self;

    // set input callback to recording callback on the input bus
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioOutputUnitProperty_SetInputCallback, 
                              kAudioUnitScope_Global, 
                              inputElement, 
                              &callbackStruct, 
                              sizeof(callbackStruct));

    [self hasError:status:__FILE__:__LINE__];

    /*
     We do the same on the output stream to hear what is coming
     from the input stream
     */
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void*)self;

    // set playbackCallback as callback on our renderer for the output bus
    status = AudioUnitSetProperty(self.audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              outputElement,
                              &callbackStruct,
                              sizeof(callbackStruct));
    [self hasError:status:__FILE__:__LINE__];



    callbackStruct.inputProc = fxAudioUnitRenderCallback;
    callbackStruct.inputProcRefCon = (__bridge void*)self;

    // set the render callback on the fx AU; it will pull its input from fxAudioUnitRenderCallback
    status = AudioUnitSetProperty(self.fxAudioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              0, 
                              &callbackStruct, 
                              sizeof(callbackStruct));

    [self hasError:status:__FILE__:__LINE__];

    // reset flag to 0
    flag = 0;

    /*
     We tell the audio unit not to allocate its own render buffer
     (flag is 0), so that we can write into our own directly.
     */
    status = AudioUnitSetProperty(self.audioUnit, 
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output, 
                              inputElement,
                              &flag, 
                              sizeof(flag));

    status = AudioUnitSetProperty(self.fxAudioUnit, 
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output, 
                              0,
                              &flag, 
                              sizeof(flag));

    /*
     We set the number of channels to mono and allocate a 1024-byte
     block (512 16-bit samples).
     */
    _audioBuffer.mNumberChannels = 1;
    _audioBuffer.mDataByteSize = 512 * 2;
    _audioBuffer.mData = malloc( 512 * 2 );

    // Initialize the Audio Unit and cross fingers =)
    status = AudioUnitInitialize(self.fxAudioUnit);
    [self hasError:status:__FILE__:__LINE__];

    status = AudioUnitInitialize(self.audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    NSLog(@"Started");



//For now, this just copies the buffer to self.audioBuffer
-(void)processBuffer: (AudioBufferList*) audioBufferList
{
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

    // we check here if the input data byte size has changed
    if (_audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        // clear old buffer
        free(self.audioBuffer.mData);

        // assign the new byte size and allocate a matching mData buffer
        _audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        _audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to the audio buffer
    memcpy(self.audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}

#pragma mark - Error handling

-(void)hasError:(int)statusCode:(char*)file:(int)line
{
    if (statusCode) {
        printf("Error Code responded %d in file %s on line %d\n", statusCode, file, line);
        exit(-1);
    }
}

@end

Any help would be greatly appreciated.

Comments:

Side note: you shouldn't do Objective-C message sends in a real-time audio context (your render callbacks).

Answer 1:

This kind of question comes up fairly often, so I wrote a mini-tutorial on this subject a while back. That guide is really the nuts-and-bolts way to solve the problem, though; the more elegant approach, I now feel, is to use the Novocaine framework, which takes most of the pain out of AudioUnit setup on iOS.
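
To give a flavor of the framework, here is a minimal sketch of mic capture with Novocaine, based on its README (the exact block signature is from memory, so treat it as an assumption):

    #import "Novocaine.h"

    Novocaine *audioManager = [Novocaine audioManager];

    // Novocaine hands you float samples; do the filtering/calculations here.
    [audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
        // ... high-pass filter and further processing on newAudio ...
    }];

    [audioManager play];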


Answer 2:

I found some demo code that might be useful for you:

Demo: https://github.com/JNYJdev/AudioUnit

Blog: http://atastypixel.com/blog/using-remoteio-audio-unit/

static OSStatus recordingCallback(void *inRefCon, 
                              AudioUnitRenderActionFlags *ioActionFlags, 
                              const AudioTimeStamp *inTimeStamp, 
                              UInt32 inBusNumber, 
                              UInt32 inNumberFrames, 
                              AudioBufferList *ioData)
{
    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
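
The "audio format (setup below)" that the first comment refers to is not included in this excerpt; in the linked blog post it is the same 16-bit mono linear PCM layout used in the question, roughly as follows (a sketch, with the sample rate assumed to be 44.1 kHz):

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate       = 44100.0;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel   = 16;
    audioFormat.mBytesPerPacket   = 2;
    audioFormat.mBytesPerFrame    = 2;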

