EZAudio Custom AudioStreamBasicDescription is not working as I expect

Posted: 2014-01-28 11:37:41

With EZAudio I want to create audio buffer lists that are as small as possible, carrying mono sound. In the past I managed to get down to 46 bytes per audioBuffer, but with a relatively small bufferDuration. First, if I use the AudioStreamBasicDescription below for both input and output

AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mSampleRate       = 44100;

and use TPCircularBuffer as the transport, then I get two buffers in the bufferList, each with an mDataByteSize of 4096, which is definitely too much. So I tried my previous ASBD:

audioFormat.mSampleRate         = 8000.00;
audioFormat.mFormatID           = kAudioFormatLinearPCM;
audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket    = 1;
audioFormat.mChannelsPerFrame   = 1;
audioFormat.mBitsPerChannel     = 8;
audioFormat.mBytesPerPacket     = 1;
audioFormat.mBytesPerFrame      = 1;

Now mDataByteSize is 128 and I get only one buffer, but TPCircularBuffer can't handle it correctly. I think that's because I want to use only one channel. So for now I have dropped TPCircularBuffer and tried encoding and decoding the bytes as NSData, or, just for testing, passing the AudioBufferList through directly, but even with the first AudioStreamBasicDescription the sound is too distorted.

My current code:

-(void)initMicrophone
{
    AudioStreamBasicDescription audioFormat;
    //*
    audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mSampleRate       = 44100;
    /*/
    audioFormat.mSampleRate         = 8000.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8;
    audioFormat.mBytesPerPacket     = 1;
    audioFormat.mBytesPerFrame      = 1;
    //*/

    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];
    _output = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}

-(void)startSending
{
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

-(void)stopSending
{
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}

-(void)microphone:(EZMicrophone *)microphone
 hasAudioReceived:(float **)buffer
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

-(void)microphone:(EZMicrophone *)microphone
    hasBufferList:(AudioBufferList *)bufferList
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    //*
    abufferlist = bufferList;
    /*/
    audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
    //*/
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}
-(AudioBufferList *)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize
{
    //*
    return abufferlist;
    /*/
    //    int bSize = 128;
    //    AudioBuffer audioBuffer;
    //    audioBuffer.mNumberChannels = 1;
    //    audioBuffer.mDataByteSize = bSize;
    //    audioBuffer.mData = malloc(bSize);
    ////    [audioBufferData getBytes:audioBuffer.mData length:bSize];
    //    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);
    //
    //    AudioBufferList *bufferList = [EZAudio audioBufferList];
    //    bufferList->mNumberBuffers = 1;
    //    bufferList->mBuffers[0] = audioBuffer;
    //
    //    return bufferList;
    //*/
}


I know that the value of bSize in output:needsBufferListWithFrames:withBufferSize: may need to change.

My main goal is to produce mono sound in buffers as small as possible, encode it into NSData and decode it back for output. Can you tell me what I'm doing wrong?


Answer 1:

I had the same problem. I moved to AVAudioRecorder and set the parameters I needed, and kept EZAudio (EZMicrophone) just for audio visualization. Here is a link on how to achieve this:

iOS: Audio Recording File Format
