Windows Core Audio APIs

Posted by 安子


There was a point in this playback flow that I didn't understand at first, so I'm writing it down as a record. The parts I annotated in the code below are explained in the original documentation:
To move a stream of rendering data through the endpoint buffer, the client alternately calls the IAudioRenderClient::GetBuffer method and the IAudioRenderClient::ReleaseBuffer method. The client accesses the data in the endpoint buffer as a series of data packets. The GetBuffer call retrieves the next packet so that the client can fill it with rendering data. After writing the data to the packet, the client calls ReleaseBuffer to add the completed packet to the rendering queue.
Note the last sentence: by calling ReleaseBuffer, the client appends the completed packet to the rendering queue. Beyond that, I suspect ReleaseBuffer also performs some release bookkeeping, such as updating internal state.
//-----------------------------------------------------------
// Play an audio stream on the default audio rendering
// device. The PlayAudioStream function allocates a shared
// buffer big enough to hold one second of PCM audio data.
// The function uses this buffer to stream data to the
// rendering device. The inner loop runs every 1/2 second.
//-----------------------------------------------------------

// REFERENCE_TIME time units per second and per millisecond
#define REFTIMES_PER_SEC       10000000
#define REFTIMES_PER_MILLISEC  10000

#define EXIT_ON_ERROR(hres)  \
              if (FAILED(hres)) { goto Exit; }
#define SAFE_RELEASE(punk)  \
              if ((punk) != NULL)  \
                { (punk)->Release(); (punk) = NULL; }

const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioRenderClient = __uuidof(IAudioRenderClient);

HRESULT PlayAudioStream(MyAudioSource *pMySource)
{
    HRESULT hr;
    REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC;
    REFERENCE_TIME hnsActualDuration;
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDevice *pDevice = NULL;
    IAudioClient *pAudioClient = NULL;
    IAudioRenderClient *pRenderClient = NULL;
    WAVEFORMATEX *pwfx = NULL;
    UINT32 bufferFrameCount;
    UINT32 numFramesAvailable;
    UINT32 numFramesPadding;
    BYTE *pData;
    DWORD flags = 0;

    hr = CoCreateInstance(
           CLSID_MMDeviceEnumerator, NULL,
           CLSCTX_ALL, IID_IMMDeviceEnumerator,
           (void**)&pEnumerator);
    EXIT_ON_ERROR(hr)

    hr = pEnumerator->GetDefaultAudioEndpoint(
                        eRender, eConsole, &pDevice);
    EXIT_ON_ERROR(hr)

    hr = pDevice->Activate(
                    IID_IAudioClient, CLSCTX_ALL,
                    NULL, (void**)&pAudioClient);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->GetMixFormat(&pwfx);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->Initialize(
                         AUDCLNT_SHAREMODE_SHARED,
                         0,
                         hnsRequestedDuration,
                         0,
                         pwfx,
                         NULL);
    EXIT_ON_ERROR(hr)

    // Tell the audio source which format to use.
    hr = pMySource->SetFormat(pwfx);
    EXIT_ON_ERROR(hr)

    // Get the actual size of the allocated buffer.
    hr = pAudioClient->GetBufferSize(&bufferFrameCount);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->GetService(
                         IID_IAudioRenderClient,
                         (void**)&pRenderClient);
    EXIT_ON_ERROR(hr)

    ////// From here down to the matching marker: notice that every
    ////// update to the playback buffer always takes the same fixed
    ////// three steps (GetBuffer, fill the data, ReleaseBuffer).
    ////// Keeping this in mind saves having to re-analyze why the
    ////// code looks this way later; it is simply the fixed procedure.

    // Grab the entire buffer for the initial fill operation.
    hr = pRenderClient->GetBuffer(bufferFrameCount, &pData);
    EXIT_ON_ERROR(hr)

    // Load the initial data into the shared buffer.
    hr = pMySource->LoadData(bufferFrameCount, pData, &flags);
    EXIT_ON_ERROR(hr)

    hr = pRenderClient->ReleaseBuffer(bufferFrameCount, flags);
    EXIT_ON_ERROR(hr)
    ////// End of the fixed three-step sequence.

    // Calculate the actual duration of the allocated buffer.
    hnsActualDuration = (double)REFTIMES_PER_SEC *
                        bufferFrameCount / pwfx->nSamplesPerSec;

    hr = pAudioClient->Start();  // Start playing.
    EXIT_ON_ERROR(hr)

    // Each loop fills about half of the shared buffer.
    while (flags != AUDCLNT_BUFFERFLAGS_SILENT)
    {
        // Sleep for half the buffer duration.
        Sleep((DWORD)(hnsActualDuration/REFTIMES_PER_MILLISEC/2));

        // See how much buffer space is available.
        hr = pAudioClient->GetCurrentPadding(&numFramesPadding);
        EXIT_ON_ERROR(hr)

        numFramesAvailable = bufferFrameCount - numFramesPadding;

        // Grab all the available space in the shared buffer.
        hr = pRenderClient->GetBuffer(numFramesAvailable, &pData);
        EXIT_ON_ERROR(hr)

        // Get next 1/2-second of data from the audio source.
        hr = pMySource->LoadData(numFramesAvailable, pData, &flags);
        EXIT_ON_ERROR(hr)

        hr = pRenderClient->ReleaseBuffer(numFramesAvailable, flags);
        EXIT_ON_ERROR(hr)
    }

    // Wait for last data in buffer to play before stopping.
    Sleep((DWORD)(hnsActualDuration/REFTIMES_PER_MILLISEC/2));

    hr = pAudioClient->Stop();  // Stop playing.
    EXIT_ON_ERROR(hr)

Exit:
    CoTaskMemFree(pwfx);
    SAFE_RELEASE(pEnumerator)
    SAFE_RELEASE(pDevice)
    SAFE_RELEASE(pAudioClient)
    SAFE_RELEASE(pRenderClient)

    return hr;
}
