Core Audio's render callback does not change output audio [duplicate]
Posted: 2015-08-13 19:27:34

When working with Core Audio, I am unable to change the output or mute it using my render callback.
Here is my IO initialization function:
- (void)setupIOUnit
{
    // Create a new instance of AURemoteIO
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstanceNew(comp, &rioUnit);

    // Enable input and output on AURemoteIO
    // Input is enabled on the input scope of the input element
    // Output is enabled on the output scope of the output element
    UInt32 one = 1;
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one));
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one));

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &audioFormat, sizeof(audioFormat));
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &audioFormat, sizeof(audioFormat));

    // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
    // of samples it will be asked to produce on any single given call to AudioUnitRender
    UInt32 maxFramesPerSlice = 4096;
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32));

    // Get the property value back from AURemoteIO. We are going to use this value to allocate buffers accordingly
    UInt32 propSize = sizeof(UInt32);
    AudioUnitGetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize);

    // Set the render callback on AURemoteIO
    AURenderCallbackStruct renderCallback;
    renderCallback.inputProc = performRender;
    renderCallback.inputProcRefCon = NULL;
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback));
    NSLog(@"render set now");

    // Initialize the AURemoteIO instance
    AudioUnitInitialize(rioUnit);

    [self startIOUnit];
    return;
}
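One thing worth noting about the setup above: every Core Audio call returns an OSStatus, and the code discards all of them, so a failed property set would go unnoticed. A minimal sketch of how one of those calls could be checked (the logging is illustrative and not part of the original code):

    OSStatus status = AudioUnitSetProperty(rioUnit,
                                           kAudioUnitProperty_SetRenderCallback,
                                           kAudioUnitScope_Input,
                                           0,
                                           &renderCallback,
                                           sizeof(renderCallback));
    if (status != noErr) {
        // A non-zero status here would mean the render callback was never installed
        NSLog(@"Setting the render callback failed with status %d", (int)status);
    }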
Here is my render function:
// Render callback function
static OSStatus performRender (void                         *inRefCon,
                               AudioUnitRenderActionFlags   *ioActionFlags,
                               const AudioTimeStamp         *inTimeStamp,
                               UInt32                       inBusNumber,
                               UInt32                       inNumberFrames,
                               AudioBufferList              *ioData)
{
    OSStatus err = noErr;

    // Pull this slice of microphone input from the RemoteIO unit into ioData
    err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

    if (ioData->mBuffers[0].mDataByteSize >= 12)
    {
        NSData *myAudioData = [NSData dataWithBytes:ioData->mBuffers[0].mData length:12];
        NSLog(@"aa playback's first 12 bytes: %@", myAudioData);
    }

    // Zero every output buffer, which should silence the output
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
    {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }

    return err;
}
This does not mute the output; I can still hear sound from my streaming app. What could be causing this? Why isn't my audio being muted?
Any insight would be helpful.
Answer 1:
Could you clarify what you mean by "I can still hear sound from my streaming app"? Do you mean sound coming in through the microphone, or is something else going on in the app?
FWIW: you are clearly adapting Apple's aurioTouch sample, and if I drop your setupIOUnit method and render callback into that project and run it, muting works fine. In other words, you don't appear to have broken anything in the code you've posted here, which suggests the problem lies somewhere else in your code.
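If the posted code does mute correctly in aurioTouch, one quick way to narrow things down is to temporarily install a callback that produces nothing but silence; if you still hear your stream, the audio you hear is not going through this RemoteIO unit at all. This is only a sketch under the assumption that rioUnit is the unit your app plays through, and the function name below is made up for illustration:

    // Hypothetical diagnostic callback: writes only silence into the output buffers.
    static OSStatus silenceOnlyRender(void                        *inRefCon,
                                      AudioUnitRenderActionFlags  *ioActionFlags,
                                      const AudioTimeStamp        *inTimeStamp,
                                      UInt32                      inBusNumber,
                                      UInt32                      inNumberFrames,
                                      AudioBufferList             *ioData)
    {
        // Zero every buffer the unit hands us instead of rendering anything into it
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        // Tell downstream processing that this slice is silent
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }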