iOS: play raw audio from a UDP stream (NSData) directly out


【Title】:iOS: play raw audio from a UDP stream (NSData) directly out 【Posted】:2014-11-20 11:56:35 【Question】:

I record data on the server and send it to the client right away. The client receives UDP packets like this:

- (void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data fromAddress:(NSData *)address withFilterContext:(id)filterContext
{
    if (!isRunning) return;

    if (data)
    {
        // play the received raw audio here
    }
    else
    {
        // ...
    }
}

Now the raw data is in the data variable, and I want to play it right away. I have honestly been stuck on this for two days... I just want something simple, like AudioTrack in Java. I have read a lot about Audio Queues and the like but still don't get it. Can you give me a hint, ideally in the form of code? It feels like I have checked every site and looked at every example, but I don't understand them. The callback functions fire once some buffers have been filled (in many examples), but I don't understand how to fill those buffers with my NSData.
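In most iOS playback APIs (Audio Queues, Audio Units, OpenAL) the render callback *pulls* data while the UDP handler *pushes* it, so the usual glue between the two is a ring buffer: `didReceiveData:` writes each packet's bytes in, and the audio callback reads its output buffer out. The sketch below is plain C with illustrative names and sizes (no real audio API involved), just to show the shape of that hand-off:

```c
#include <stdint.h>
#include <string.h>

#define RING_CAPACITY 65536  /* illustrative; size for a few hundred ms of audio */

/* Single-producer/single-consumer ring buffer: the UDP receive handler
 * is the producer, the audio render callback is the consumer. */
typedef struct {
    uint8_t data[RING_CAPACITY];
    size_t  write_pos;  /* total bytes ever written */
    size_t  read_pos;   /* total bytes ever read    */
} audio_ring_t;

static size_t ring_available(const audio_ring_t *r) {
    return r->write_pos - r->read_pos;
}

/* Producer side: copy an incoming packet's bytes; returns bytes accepted. */
static size_t ring_write(audio_ring_t *r, const uint8_t *src, size_t len) {
    size_t space = RING_CAPACITY - ring_available(r);
    if (len > space) len = space;            /* drop the excess on overflow */
    for (size_t i = 0; i < len; i++)
        r->data[(r->write_pos + i) % RING_CAPACITY] = src[i];
    r->write_pos += len;
    return len;
}

/* Consumer side: fill the callback's output buffer; returns bytes copied,
 * and zero-pads (silence) whatever it could not fill on underrun. */
static size_t ring_read(audio_ring_t *r, uint8_t *dst, size_t len) {
    size_t have = ring_available(r);
    size_t n = len < have ? len : have;
    for (size_t i = 0; i < n; i++)
        dst[i] = r->data[(r->read_pos + i) % RING_CAPACITY];
    r->read_pos += n;
    memset(dst + n, 0, len - n);
    return n;
}
```

In the real app you would call something like `ring_write(&ring, data.bytes, data.length)` from `didReceiveData:` and `ring_read(...)` from the audio callback, with either a lock or atomic positions guarding the two threads.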

【Comments】:

【Answer 1】:

I would be interested to hear an answer to this as well. My solution was to build my own audio server on iOS using OpenAL, which out of the box does nothing but render audio buffers. Basically one thread handles consuming the audio stream sent from the server, while another thread runs your own OpenAL server, which I outline here:

#import <OpenAL/al.h>
#import <OpenAL/alc.h>
#import <AudioToolbox/ExtendedAudioFile.h>


- (void)init_openal
{
    openal_device = alcOpenDevice(NULL);

    if (openal_device != NULL) {

        // create context
        openal_context = alcCreateContext(openal_device, 0);

        if (openal_context != NULL) {

            // activate this new context
            alcMakeContextCurrent(openal_context);

        } else {
            NSLog(@"STR_OPENAL ERROR - failed to create context");
            return;
        }

    } else {
        NSLog(@"STR_OPENAL ERROR - failed to get audio device");
        return;
    }

    // allocate the buffer array to the given number of buffers
    alGenBuffers(MAX_OPENAL_QUEUE_BUFFERS, available_AL_buffer_array);

    alGenSources(1, &streaming_source);

    printf("STR_OPENAL  streaming_source starts with %u\n", streaming_source);

    printf("STR_OPENAL  initialization of available_AL_buffer_array_curr_index to 0\n");

    available_AL_buffer_array_curr_index = 0;

    self.previous_local_lpcm_buffer = 0;

    [self get_next_buffer];

}   //  init_openal



// logic that calls init and retrieves buffers is left out here

- (void)inner_run
{
    ALenum al_error;

    // UNqueue used buffers

    ALint buffers_processed = 0;

    // get source parameter: number of consumed buffers
    alGetSourcei(streaming_source, AL_BUFFERS_PROCESSED, &buffers_processed);

    while (buffers_processed > 0) {    // we have a consumed buffer, so we need to replenish

        NSLog(@"STR_OPENAL inner_run seeing consumed buffer");

        ALuint unqueued_buffer;

        alSourceUnqueueBuffers(streaming_source, 1, &unqueued_buffer);

        // about to decrement available_AL_buffer_array_curr_index

        available_AL_buffer_array_curr_index--;

        printf("STR_OPENAL   to NEW %d  with unqueued_buffer %d\n",
               available_AL_buffer_array_curr_index,
               unqueued_buffer);

        available_AL_buffer_array[available_AL_buffer_array_curr_index] = unqueued_buffer;

        buffers_processed--;
    }

    // queue UP fresh buffers

    if (available_AL_buffer_array_curr_index >= MAX_OPENAL_QUEUE_BUFFERS) {

        printf("STR_OPENAL about to sleep since internal OpenAL queue is full\n");

        [NSThread sleepUntilDate:[NSDate dateWithTimeIntervalSinceNow:SLEEP_ON_OPENAL_QUEUE_FULL]];

    } else {

        NSLog(@"STR_OPENAL YYYYYYYYY available_AL_buffer_array_curr_index %d    MAX_OPENAL_QUEUE_BUFFERS %d",
              available_AL_buffer_array_curr_index,
              MAX_OPENAL_QUEUE_BUFFERS);

        ALuint curr_audio_buffer = available_AL_buffer_array[available_AL_buffer_array_curr_index];

        ALsizei size_buff;
        ALenum data_format;
        ALsizei sample_rate;

        size_buff = MAX_SIZE_CIRCULAR_BUFFER;   // works nicely with 1016064

        sample_rate = lpcm_output_sampling_frequency;
        data_format = AL_FORMAT_STEREO16;       // AL_FORMAT_STEREO16 == 4355 ( 0x1103 )  ---  AL_FORMAT_MONO16

        printf("STR_OPENAL  curr_audio_buffer is %u    data_format %u    size_buff %u\n",
               curr_audio_buffer,
               data_format,
               size_buff);

        // write_output_file([TS_ONLY_delete_this_var_temp_aif_fullpath
        // cStringUsingEncoding:NSUTF8StringEncoding], curr_lpcm_buffer,
        // curr_lpcm_buffer_sizeof);

        if (self.local_lpcm_buffer == self.previous_local_lpcm_buffer) {

            printf("STR_OPENAL NOTICE - need to throttle up openal sleep duration seeing same value for local_lpcm_buffer %d - so will skip loading into alBufferData\n",
                   (int) self.local_lpcm_buffer);

        } else {

            NSLog(@"STR_OPENAL  about to call alBufferData curr_audio_buffer %d local_lpcm_buffer address %d local_aac_index %d",
                  curr_audio_buffer,
                  (int) self.local_lpcm_buffer,
                  self.local_aac_index);

            // copy audio data into curr_audio_buffer
            // curr_audio_buffer is an INT index determining which buffer to use
            alBufferData(curr_audio_buffer, data_format, self.local_lpcm_buffer, size_buff, sample_rate);

            self.previous_local_lpcm_buffer = self.local_lpcm_buffer;

            alSourceQueueBuffers(streaming_source, 1, &curr_audio_buffer);

            printf("STR_OPENAL  about to increment available_AL_buffer_array_curr_index from OLD %d",
                   available_AL_buffer_array_curr_index);

            available_AL_buffer_array_curr_index++;

            printf("STR_OPENAL  available_AL_buffer_array_curr_index to NEW %d\n", available_AL_buffer_array_curr_index);
        }

        al_error = alGetError();
        if (AL_NO_ERROR != al_error) {
            NSLog(@"STR_OPENAL ERROR - alSourceQueueBuffers error: %s", alGetString(al_error));
            return;
        }

        ALenum current_playing_state;

        // get source parameter: STATE
        alGetSourcei(streaming_source, AL_SOURCE_STATE, &current_playing_state);

        al_error = alGetError();
        if (AL_NO_ERROR != al_error) {
            NSLog(@"STR_OPENAL ERROR - alGetSourcei error: %s", alGetString(al_error));
            return;
        }

        if (AL_PLAYING != current_playing_state) {

            ALint buffers_queued = 0;

            // get source parameter: number of queued buffers
            alGetSourcei(streaming_source, AL_BUFFERS_QUEUED, &buffers_queued);

            NSLog(@"STR_OPENAL NOTICE - play is NOT AL_PLAYING: %x, buffers_queued: %d", current_playing_state, buffers_queued);

            if (buffers_queued > 0 && NO == self.streaming_paused) {

                // restart play

                NSLog(@"STR_OPENAL about to restart play");

                alSourcePlay(streaming_source);

                al_error = alGetError();
                if (AL_NO_ERROR != al_error) {
                    NSLog(@"STR_OPENAL ERROR - alSourcePlay error: %s", alGetString(al_error));
                }
            }
        }

        if (self.last_aac_index == self.local_aac_index && available_AL_buffer_array_curr_index == 0) {

            NSLog(@"STR_OPENAL reached end of event tell parent");

            [self send_running_condition_message_to_parent:rendered_last_buffer];

            flag_continue_running = false; // terminate since all rendering work is done

        } else {

            [self get_next_buffer];
        }
    }

}   //  inner_run
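The index bookkeeping in inner_run above (decrement on unqueue, increment on queue, sleep when the index hits MAX_OPENAL_QUEUE_BUFFERS) is essentially a small stack of free buffer IDs. A stripped-down sketch of just that bookkeeping, with hypothetical names and plain unsigned ints standing in for OpenAL buffer IDs:

```c
#define MAX_QUEUE_BUFFERS 4  /* illustrative, mirrors MAX_OPENAL_QUEUE_BUFFERS */

/* Pool of buffer IDs not currently queued on the source. */
typedef struct {
    unsigned free_ids[MAX_QUEUE_BUFFERS];
    int top;  /* number of free buffers available */
} buffer_pool_t;

/* Take a free buffer to fill and queue; returns 0 when every buffer is
 * already queued (the point where inner_run chooses to sleep). */
static int pool_pop(buffer_pool_t *p, unsigned *id) {
    if (p->top == 0) return 0;
    *id = p->free_ids[--p->top];
    return 1;
}

/* Return a buffer the source has finished playing (after unqueueing). */
static void pool_push(buffer_pool_t *p, unsigned id) {
    if (p->top < MAX_QUEUE_BUFFERS)
        p->free_ids[p->top++] = id;
}
```

The answer's code keeps the same information in `available_AL_buffer_array` plus `available_AL_buffer_array_curr_index`, just with the in-use buffers implicitly being the ones below the index.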

【Discussion】:

Your code :D But the OpenAL API is harder to understand. So you have your streaming_source and pass it to unqueued_buffer? I am trying to understand this code, but I have little (make that zero) experience with OpenAL. Any "simpler" suggestions?

【Answer 2】:

I don't think there is an out-of-the-box iOS solution for this. Try digging into the CoreAudio framework, or pick up a ready-made library such as the StreamingKit Library.
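Whichever API you end up with, one detail worth checking when raw PCM arrives over the network is byte order: if the server sends big-endian 16-bit samples, they must be swapped to the little-endian layout iOS players expect before being handed to the renderer. A minimal, hypothetical helper:

```c
#include <stdint.h>
#include <stddef.h>

/* Swap 16-bit PCM samples in place between big-endian network order and
 * little-endian host order. Only needed if the sender's byte order
 * differs from the device's; that depends on your server, so treat this
 * as an assumption to verify. */
static void pcm16_swap_bytes(int16_t *samples, size_t count) {
    for (size_t i = 0; i < count; i++) {
        uint16_t u = (uint16_t)samples[i];
        samples[i] = (int16_t)(uint16_t)((u >> 8) | (u << 8));
    }
}
```

A quick sanity check is to play a known test tone: if the output is loud harsh noise instead of a tone, wrong byte order is a likely culprit.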

【Discussion】:
