How to access individual channels with Core Audio from an external audio interface
Posted: 2016-06-03 05:26:35

I'm learning Core Audio and have been going through some of the examples in Apple's documentation, figuring out how things get set up. So far I'm able to connect to the default audio input device and play its input through the default output device. I connected a 2-channel interface and was able to take its input and send it out as well.
However, searching their API reference and examples, I can't find anything substantial about accessing the individual input channels of my interface.

I can hack around in my render callback and pull samples out of the AudioBufferList, manipulating the data that way, but I'm wondering whether there is a proper, or more official, way to access the data from each individual input channel.
Edit:

Here is the user-data struct from the example I'm working from:
typedef struct MyAUGraphPlayer {
    AudioStreamBasicDescription streamFormat;
    AUGraph graph;
    AudioUnit inputUnit;
    AudioUnit outputUnit;
    AudioBufferList *inputBuffer;
    CARingBuffer *ringBuffer;
    Float64 firstInputSampleTime;
    Float64 firstOutputSampleTime;
    Float64 inToOutSampleTimeOffset;
} MyAUGraphPlayer;
And this is how I set up the input unit:
void CreateInputUnit(MyAUGraphPlayer *player)
{
    // Generate a description that matches the audio HAL
    AudioComponentDescription inputcd = {0};
    inputcd.componentType = kAudioUnitType_Output;
    inputcd.componentSubType = kAudioUnitSubType_HALOutput;
    inputcd.componentManufacturer = kAudioUnitManufacturer_Apple;

    UInt32 deviceCount = AudioComponentCount(&inputcd);
    printf("Found %d devices\n", deviceCount);

    AudioComponent comp = AudioComponentFindNext(NULL, &inputcd);
    if (comp == NULL) {
        printf("Can't get output unit\n");
        exit(1);
    }

    OSStatus status;
    status = AudioComponentInstanceNew(comp, &player->inputUnit);
    assert(status == noErr);

    // Explicitly enable input and disable output
    UInt32 disableFlag = 0;
    UInt32 enableFlag = 1;
    AudioUnitScope outputBus = 0;
    AudioUnitScope inputBus = 1;

    status = AudioUnitSetProperty(player->inputUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  inputBus,
                                  &enableFlag,
                                  sizeof(enableFlag));
    assert(status == noErr);

    status = AudioUnitSetProperty(player->inputUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  outputBus,
                                  &disableFlag,
                                  sizeof(disableFlag));
    assert(status == noErr);
    printf("Finished enabling input and disabling output on an inputUnit\n");

    // Get the default audio input device
    AudioDeviceID defaultDevice = kAudioObjectUnknown;
    UInt32 propertySize = sizeof(defaultDevice);
    AudioObjectPropertyAddress defaultDeviceProperty;
    defaultDeviceProperty.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    defaultDeviceProperty.mScope = kAudioObjectPropertyScopeGlobal;
    defaultDeviceProperty.mElement = kAudioObjectPropertyElementMaster;

    status = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                        &defaultDeviceProperty,
                                        0,
                                        NULL,
                                        &propertySize,
                                        &defaultDevice);
    assert(status == noErr);

    // Set the current-device property of the AUHAL
    status = AudioUnitSetProperty(player->inputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  outputBus,
                                  &defaultDevice,
                                  sizeof(defaultDevice));
    assert(status == noErr);

    // Get the AudioStreamBasicDescription from the input AUHAL
    propertySize = sizeof(AudioStreamBasicDescription);
    status = AudioUnitGetProperty(player->inputUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  inputBus,
                                  &player->streamFormat,
                                  &propertySize);
    assert(status == noErr);

    // Adopt the hardware input sample rate
    AudioStreamBasicDescription deviceFormat;
    status = AudioUnitGetProperty(player->inputUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  inputBus,
                                  &deviceFormat,
                                  &propertySize);
    assert(status == noErr);
    player->streamFormat.mSampleRate = deviceFormat.mSampleRate;
    printf("Sample Rate %f...\n", deviceFormat.mSampleRate);

    propertySize = sizeof(AudioStreamBasicDescription);
    status = AudioUnitSetProperty(player->inputUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  inputBus,
                                  &player->streamFormat,
                                  propertySize);
    assert(status == noErr);

    // Calculate the capture buffer size for the I/O unit
    UInt32 bufferSizeFrames = 0;
    propertySize = sizeof(UInt32);
    status = AudioUnitGetProperty(player->inputUnit,
                                  kAudioDevicePropertyBufferFrameSize,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bufferSizeFrames,
                                  &propertySize);
    assert(status == noErr);
    UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);

    // Create an AudioBufferList to receive capture data
    UInt32 propSize = offsetof(AudioBufferList, mBuffers[0]) +
                      (sizeof(AudioBuffer) * player->streamFormat.mChannelsPerFrame);

    // Malloc the buffer list
    player->inputBuffer = (AudioBufferList *)malloc(propSize);
    player->inputBuffer->mNumberBuffers = player->streamFormat.mChannelsPerFrame;

    // Pre-malloc the buffers in the AudioBufferList
    for (UInt32 i = 0; i < player->inputBuffer->mNumberBuffers; i++) {
        player->inputBuffer->mBuffers[i].mNumberChannels = 1;
        player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
        player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
    }

    // Create the ring buffer
    player->ringBuffer = new CARingBuffer();
    player->ringBuffer->Allocate(player->streamFormat.mChannelsPerFrame,
                                 player->streamFormat.mBytesPerFrame,
                                 bufferSizeFrames * 3);
    printf("Number of channels: %d\n", player->streamFormat.mChannelsPerFrame);
    printf("Number of buffers: %d\n", player->inputBuffer->mNumberBuffers);

    // Set the render proc that supplies samples
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = InputRenderProc;
    callbackStruct.inputProcRefCon = player;
    status = AudioUnitSetProperty(player->inputUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  0,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    assert(status == noErr);

    status = AudioUnitInitialize(player->inputUnit);
    assert(status == noErr);

    player->firstInputSampleTime = -1;
    player->inToOutSampleTimeOffset = -1;
    printf("Finished CreateInputUnit()\n");
}
So here is the render callback where I access the individual buffers:
OSStatus GraphRenderProc(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber,
                         UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    MyAUGraphPlayer *player = (MyAUGraphPlayer *)inRefCon;

    if (player->firstOutputSampleTime < 0.0) {
        player->firstOutputSampleTime = inTimeStamp->mSampleTime;
        if ((player->firstInputSampleTime > -1.0) &&
            (player->inToOutSampleTimeOffset < 0.0)) {
            player->inToOutSampleTimeOffset =
                player->firstInputSampleTime - player->firstOutputSampleTime;
        }
    }

    // Copy samples out of the ring buffer
    OSStatus outputProcErr = noErr;
    outputProcErr = player->ringBuffer->Fetch(ioData,
                                              inNumberFrames,
                                              inTimeStamp->mSampleTime + player->inToOutSampleTimeOffset);

    // BUT THIS IS NOT HOW IT IS SUPPOSED TO WORK
    Float32 *data  = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *data2 = (Float32 *)ioData->mBuffers[1].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        Float32 sample = data[frame] + data2[frame];
        data[frame] = data2[frame] = sample;
    }
    return outputProcErr;
}
Comments:

If you don't post at least one example of what you've tried, you leave people guessing and judging your wording...

I've added some sample code. Let me know if you need anything else.

Answer 1:

Although your code looks overly complicated for the task it appears to manage, I'll try to answer your question:
There is nothing wrong with your concept of retrieving the sample data in the callback, but it won't be sufficient when dealing with multi-channel audio devices. How many channels a device has, along with its channel layout, format, and so on, you query through the AudioStreamBasicDescription of the given device. This property is used to initialize the rest of the processing chain. You allocate the audio buffers at initialization time, or let the program do it for you (please read the documentation).
If you find it more convenient to copy into extra buffers used only for data processing and DSP, you can manage that inside the callback, like this (simplified code):
Float32 buf[streamFormat.mChannelsPerFrame][inNumberFrames];

for (int ch = 0; ch < streamFormat.mChannelsPerFrame; ch++) {
    Float32 *data = (Float32 *)ioData->mBuffers[ch].mData;
    memcpy(buf[ch], data, inNumberFrames * sizeof(Float32));
}
Discussion:

Just to confirm: by "it won't be sufficient when dealing with multi-channel audio devices," do you mean that if I have an 8-channel input interface and I only want to output stereo, this isn't the recommended way?

Which stereo? If you want to write a good program, you never know in advance how many channels an external device will provide, which of them will carry an input signal, and which will be outputs. For simplicity's sake, you might want to build a stereo pair out of, say, channels #7 and #12. (-:

Sorry if I wasn't clear. I wanted to see whether there is a recommended way to build a DAW application, like GarageBand or Logic, with respect to interfacing with external audio interfaces.

I'm afraid you're confusing recording tracks, audio channels, and virtual tracks. As for audio channels, the approach above is the only documented one I know of. As for APIs for managing track objects, track-to-channel mapping matrices, per-track plug-in chain hosting, or cross-track messaging, I can't give an answer; the source code of Logic Pro and GarageBand has not been published. If you intend to develop a commercial DAW, there are ways to learn it, but that is already business, not SO, IMHO. Perhaps the source code of AULab.app has been published; I'm not sure.

Ah yes, I was confusing those three terms. Would you mind quickly explaining them? Am I right to assume that "audio channels" means the inputs coming from the external hardware? If so, then (if I understand correctly) accessing each buffer individually is the only way to get at the samples from each individual input of an external audio interface? I have no plans to develop a DAW, since I know that would just be reinventing the wheel :). I enjoy working with audio and just want to learn more.