Android Audio System: Native-Layer Initialization

Posted by Nipuream


In the previous article we saw what the Java layer of the audio module does, which is fairly straightforward. Now let's explore what the system does in the Native layer. A brief overview first: the Native layer uses a client/server (C/S) architecture. AudioTrack belongs to the client side, while AudioFlinger and AudioPolicyService belong to the server side. AudioFlinger is the only place that calls into the HAL, whereas AudioPolicyService mainly selects audio policies and relays logic calls. Let's start with what the client-side AudioTrack does.
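
To make the client/server split concrete, here is a rough sketch of how a native client could drive AudioTrack directly. This is hypothetical code: pcmData/pcmSize are assumed to exist, and the exact set() signature varies across Android versions, as the listings below show.

#include <media/AudioTrack.h>

using namespace android;

// Hypothetical native playback snippet; error handling is omitted, and
// pcmData/pcmSize are assumed to point at valid 16-bit PCM data.
void playOnce(const void *pcmData, size_t pcmSize) {
    sp<AudioTrack> track = new AudioTrack();   // mStatus stays NO_INIT until set() succeeds
    status_t status = track->set(
            AUDIO_STREAM_MUSIC,                // stream type
            44100,                             // sample rate in Hz
            AUDIO_FORMAT_PCM_16_BIT,           // sample format
            AUDIO_CHANNEL_OUT_STEREO,          // channel mask
            0);                                // frameCount == 0: let the server pick a size
    if (status != NO_ERROR) {
        return;
    }
    track->start();                            // the server-side Track becomes active
    track->write(pcmData, pcmSize);            // TRANSFER_SYNC: blocking write into shared memory
    track->stop();
}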

Note that this article covers only the initialization work in the Native layer. The Native code and logic are the core of the entire audio module, and covering it all in a single article is impossible; many of the topics touched on here deserve an article of their own.

1. AudioTrack Initialization

From the Java-layer JNI code we know that the native AudioTrack is not only created with its no-argument constructor, but also configured through its set() method. Below are the important code fragments, located under /frameworks/av/media/libmedia.

// AudioTrack's no-argument constructor
AudioTrack::AudioTrack()
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0),
      mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect)
{
    // input validation ... (omitted)

    // parameter processing and initialization ... (omitted)

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %#x", format);
        return BAD_VALUE;
    }
    mFormat = format;

    if (!audio_is_output_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }

    // determine the channel count
    mChannelMask = channelMask;
    uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);
    mChannelCount = channelCount;

    // determine the frame size from the flags
    if (flags & AUDIO_OUTPUT_FLAG_DIRECT) {
        if (audio_is_linear_pcm(format)) {
            mFrameSize = channelCount * audio_bytes_per_sample(format);
        } else {
            mFrameSize = sizeof(uint8_t);
        }
    } else {
        ALOG_ASSERT(audio_is_linear_pcm(format));
        mFrameSize = channelCount * audio_bytes_per_sample(format);
        // createTrack will return an error if PCM format is not supported by server,
        // so no need to check for specific PCM formats here
    }

    // sampling rate must be specified for direct outputs
    if (sampleRate == 0 && (flags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;
    mOriginalSampleRate = sampleRate;
    mPlaybackRate = AUDIO_PLAYBACK_RATE_DEFAULT;

    mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
    mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
    mSendLevel = 0.0f;
    // mFrameCount is initialized in createTrack_l
    mReqFrameCount = frameCount;
    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;
    // ...
    mAuxEffectId = 0;
    mFlags = flags;
    mCbf = cbf;

    // 1. If the Java layer passed down a non-null state callback, start a
    //    thread to service the callbacks
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
        // thread begins in paused state, and will not reference us until start()
    }

    // 2. Create the sp<IAudioTrack> object used to communicate with the server side
    status_t status = createTrack_l();

    // ... remaining initialization and assignment of members

    return NO_ERROR;
}

This method is extremely long; I removed the unimportant parts and annotated some statements. The two most important steps are marked with numbers above. Let's study in detail what each of them does.

1.1. Native-Layer State Callbacks

In AudioTrack's set() method, a thread of class AudioTrackThread was created. Let's look at its declaration:

    /* a small internal class to handle the callback */
    class AudioTrackThread : public Thread
    {
    public:
        AudioTrackThread(AudioTrack& receiver, bool bCanCallJava = false);

        // Do not call Thread::requestExitAndWait() without first calling requestExit().
        // Thread::requestExitAndWait() is not virtual, and the implementation doesn't do enough.
        virtual void        requestExit();

                void        pause();    // suspend thread from execution at next loop boundary
                void        resume();   // allow thread to execute, if not requested to exit
                void        wake();     // wake to handle changed notification conditions.

    private:
                void        pauseInternal(nsecs_t ns = 0LL);
                                        // like pause(), but only used internally within thread

        friend class AudioTrack;
        virtual bool        threadLoop();
        AudioTrack&         mReceiver;
        virtual ~AudioTrackThread();
        Mutex               mMyLock;    // Thread::mLock is private
        Condition           mMyCond;    // Thread::mThreadExitedCondition is private
        bool                mPaused;    // whether thread is requested to pause at next loop entry
        bool                mPausedInt; // whether thread internally requests pause
        nsecs_t             mPausedNs;  // if mPausedInt then associated timeout, otherwise ignored
        bool                mIgnoreNextPausedInt;   // skip any internal pause and go immediately
                                        // to processAudioBuffer() as state may have changed
                                        // since pause time calculated.
    };

It inherits from the Native-layer Thread class. When AudioTrackThread->run() is called, execution eventually reaches its threadLoop() method. Let's look at Threads.cpp, the implementation behind Thread.h, located at:

./system/core/libutils/Threads.cpp

status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
    Mutex::Autolock _l(mLock);

    //...
    bool res;
    // mCanCallJava indicates whether the new thread must be able to attach to
    // the Java VM; either branch ultimately ends up in androidCreateRawThreadEtc().
    if (mCanCallJava) {
        res = createThreadEtc(_threadLoop,
                this, name, priority, stack, &mThread);
    } else {
        res = androidCreateRawThreadEtc(_threadLoop,
                this, name, priority, stack, &mThread);
    }
    //...
}

int androidCreateRawThreadEtc(android_thread_func_t entryFunction,
                               void *userData,
                               const char* threadName __android_unused,
                               int32_t threadPriority,
                               size_t threadStackSize,
                               android_thread_id_t *threadId)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    //...

    errno = 0;
    pthread_t thread;
    // create the raw pthread; entryFunction is the _threadLoop passed in above
    int result = pthread_create(&thread, &attr,
                    (android_pthread_entry)entryFunction, userData);
    pthread_attr_destroy(&attr);
    if (result != 0) {
        ALOGE("androidCreateRawThreadEtc failed (entry=%p, res=%d, errno=%d)\n"
             "(android threadPriority=%d)",
            entryFunction, result, errno, threadPriority);
        return 0;
    }

    // Note that *threadID is directly available to the parent only, as it is
    // assigned after the child starts.  Use memory barrier / lock if the child
    // or other threads also need access.
    if (threadId != NULL) {
        *threadId = (android_thread_id_t)thread; // XXX: this is not portable
    }
    return 1;
}

int Thread::_threadLoop(void* user)
{
    Thread* const self = static_cast<Thread*>(user);

    //...

    bool first = true;

    // As the loop below shows, the thread keeps calling threadLoop() over and
    // over; the loop can also block, to coordinate with other threads.
    do {
        bool result;
        if (first) {
            first = false;
            self->mStatus = self->readyToRun();
            result = (self->mStatus == NO_ERROR);

            if (result && !self->exitPending()) {
                // Binder threads (and maybe others) rely on threadLoop
                // running at least once after a successful ::readyToRun()
                // (unless, of course, the thread has already been asked to exit
                // at that point).
                // This is because threads are essentially used like this:
                //   (new ThreadSubclass())->run();
                // The caller therefore does not retain a strong reference to
                // the thread and the thread would simply disappear after the
                // successful ::readyToRun() call instead of entering the
                // threadLoop at least once.
                result = self->threadLoop();
            }
        } else {
            result = self->threadLoop();
        }

        // establish a scope for mLock
        {
            Mutex::Autolock _l(self->mLock);
            if (result == false || self->mExitPending) {
                self->mExitPending = true;
                self->mRunning = false;
                // clear thread ID so that requestExitAndWait() does not exit if
                // called by a new thread using the same thread ID as this one.
                self->mThread = thread_id_t(-1);
                // note that interested observers blocked in requestExitAndWait are
                // awoken by broadcast, but blocked on mLock until break exits scope
                self->mThreadExitedCondition.broadcast();
                break;
            }
        }

        // Release our strong reference, to let a chance to the thread
        // to die a peaceful death.
        strong.clear();
        // And immediately, re-acquire a strong reference for the next loop
        strong = weak.promote();
    } while(strong != 0);

    return 0;
}
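
The upshot of all this plumbing is the threadLoop() contract: return true to be called again on the next iteration, return false to end the loop and let the thread exit. A minimal hypothetical subclass to illustrate (TickThread is made up for this sketch):

#define LOG_TAG "TickThread"
#include <unistd.h>
#include <utils/Log.h>
#include <utils/Thread.h>

// Hypothetical worker illustrating the threadLoop() contract described above.
class TickThread : public android::Thread {
    virtual bool threadLoop() {
        ALOGV("tick");            // one unit of work per iteration
        usleep(100 * 1000);
        return true;              // true: _threadLoop() invokes us again;
                                  // false: the loop breaks and the thread exits
    }
};

// Usage:
//   android::sp<TickThread> t = new TickThread();
//   t->run("TickThread");    // spawns the pthread that drives threadLoop()
//   ...
//   t->requestExit();        // sets exitPending(); the loop stops at the next boundary
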
From that large block of code we conclude that, in the end, all we need to analyze is the threadLoop() implementation. So let's continue with AudioTrackThread::threadLoop() in AudioTrack.cpp:

bool AudioTrack::AudioTrackThread::threadLoop()
{
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            mMyCond.wait(mMyLock);
            // caller will check for exitPending()
            return true;
        }
        if (mIgnoreNextPausedInt) {
            mIgnoreNextPausedInt = false;
            mPausedInt = false;
        }
        if (mPausedInt) {
            if (mPausedNs > 0) {
                (void) mMyCond.waitRelative(mMyLock, mPausedNs);
            } else {
                mMyCond.wait(mMyLock);
            }
            mPausedInt = false;
            return true;
        }
    }
    if (exitPending()) {
        return false;
    }
    nsecs_t ns = mReceiver.processAudioBuffer();
    switch (ns) {
    case 0:
        return true;
    case NS_INACTIVE:
        pauseInternal();
        return true;
    case NS_NEVER:
        return false;
    case NS_WHENEVER:
        // Event driven: call wake() when callback notifications conditions change.
        ns = INT64_MAX;
        // fall through
    default:
        LOG_ALWAYS_FATAL_IF(ns < 0, "processAudioBuffer() returned %" PRId64, ns);
        pauseInternal(ns);
        return true;
    }
}

The crucial line here is mReceiver.processAudioBuffer(), where mReceiver is the AudioTrack itself. So, on we go …

nsecs_t AudioTrack::processAudioBuffer()
{
    // mCblk must not be NULL; it is the audio_track_cblk_t control block that
    // lives in shared memory (see createTrack_l() below)
    LOG_ALWAYS_FATAL_IF(mCblk == NULL);

    mLock.lock();

    //...

    // Can only reference mCblk while locked
    int32_t flags = android_atomic_and(
        ~(CBLK_UNDERRUN | CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END), &mCblk->mFlags);

    // Check for track invalidation
    if (flags & CBLK_INVALID) {
        // for offloaded tracks restoreTrack_l() will just update the sequence and clear
        // AudioSystem cache. We should not exit here but after calling the callback so
        // that the upper layers can recreate the track
        if (!isOffloadedOrDirect_l() || (mSequence == mObservedSequence)) {
            status_t status __unused = restoreTrack_l("processAudioBuffer");
            // FIXME unused status
            // after restoration, continue below to make sure that the loop and buffer events
            // are notified because they have been cleared from mCblk->mFlags above.
        }
    }

    // Determine whether the track is stopping (draining) and whether it is active
    bool waitStreamEnd = mState == STATE_STOPPING;
    bool active = mState == STATE_ACTIVE;

    // Manage underrun callback, must be done under lock to avoid race with releaseBuffer()
    bool newUnderrun = false;
    if (flags & CBLK_UNDERRUN) {
#if 0
        // Currently in shared buffer mode, when the server reaches the end of buffer,
        // the track stays active in continuous underrun state.  It's up to the application
        // to pause or stop the track, or set the position to a new offset within buffer.
        // This was some experimental code to auto-pause on underrun.   Keeping it here
        // in "if 0" so we can re-visit this if we add a real sequencer for shared memory content.
        if (mTransfer == TRANSFER_SHARED) {
            mState = STATE_PAUSED;
            active = false;
        }
#endif
        if (!mInUnderrun) {
            mInUnderrun = true;
            newUnderrun = true;
        }
    }

    // Get the server-side write position
    size_t position = updateAndGetPosition_l();

    // Check whether the marker position has been reached
    bool markerReached = false;
    size_t markerPosition = mMarkerPosition;
    // FIXME fails for wraparound, need 64 bits
    if (!mMarkerReached && (markerPosition > 0) && (position >= markerPosition)) {
        mMarkerReached = markerReached = true;
    }

    // Determine number of new position callback(s) that will be needed, while locked
    size_t newPosCount = 0;
    size_t newPosition = mNewPosition;
    size_t updatePeriod = mUpdatePeriod;
    // FIXME fails for wraparound, need 64 bits
    if (updatePeriod > 0 && position >= newPosition) {
        newPosCount = ((position - newPosition) / updatePeriod) + 1;
        mNewPosition += updatePeriod * newPosCount;
    }

    // Cache other fields that will be needed soon
    uint32_t sampleRate = mSampleRate;
    float speed = mPlaybackRate.mSpeed;
    const uint32_t notificationFrames = mNotificationFramesAct;
    if (mRefreshRemaining) {
        mRefreshRemaining = false;
        mRemainingFrames = notificationFrames;
        mRetryOnPartialBuffer = false;
    }
    size_t misalignment = mProxy->getMisalignment();
    uint32_t sequence = mSequence;
    sp<AudioTrackClientProxy> proxy = mProxy;

    // Determine the number of new loop callback(s) that will be needed, while locked.
    int loopCountNotifications = 0;
    uint32_t loopPeriod = 0; // time in frames for next EVENT_LOOP_END or EVENT_BUFFER_END

    // Get the loop count (static tracks)
    if (mLoopCount > 0) {
        int loopCount;
        size_t bufferPosition;
        mStaticProxy->getBufferPositionAndLoopCount(&bufferPosition, &loopCount);
        loopPeriod = ((loopCount > 0) ? mLoopEnd : mFrameCount) - bufferPosition;
        loopCountNotifications = min(mLoopCountNotified - loopCount, kMaxLoopCountNotifications);
        mLoopCountNotified = loopCount; // discard any excess notifications
    } else if (mLoopCount < 0) {
        // FIXME: We're not accurate with notification count and position with infinite looping
        // since loopCount from server side will always return -1 (we could decrement it).
        size_t bufferPosition = mStaticProxy->getBufferPosition();
        loopCountNotifications = int((flags & (CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL)) != 0);
        loopPeriod = mLoopEnd - bufferPosition;
    } else if (/* mLoopCount == 0 && */ mSharedBuffer != 0) {
        size_t bufferPosition = mStaticProxy->getBufferPosition();
        loopPeriod = mFrameCount - bufferPosition;
    }

    mLock.unlock();

    // get anchor time to account for callbacks.
    const nsecs_t timeBeforeCallbacks = systemTime();

    /**
     * Check whether the audio stream has finished playing; if so, notify the
     * Java layer of the playback state
     */
    if (waitStreamEnd) {
        // FIXME:  Instead of blocking in proxy->waitStreamEndDone(), Callback thread
        // should wait on proxy futex and handle CBLK_STREAM_END_DONE within this function
        // (and make sure we don't callback for more data while we're stopping).
        // This helps with position, marker notifications, and track invalidation.
        struct timespec timeout;
        timeout.tv_sec = WAIT_STREAM_END_TIMEOUT_SEC;
        timeout.tv_nsec = 0;

        status_t status = proxy->waitStreamEndDone(&timeout);
        switch (status) {
        case NO_ERROR:
        case DEAD_OBJECT:
        case TIMED_OUT:
            if (status != DEAD_OBJECT) {
                // report EVENT_STREAM_END up to the Java layer
                mCbf(EVENT_STREAM_END, mUserData, NULL);
            }
            {
                AutoMutex lock(mLock);
                // The previously assigned value of waitStreamEnd is no longer valid,
                // since the mutex has been unlocked and either the callback handler
                // or another thread could have re-started the AudioTrack during that time.
                waitStreamEnd = mState == STATE_STOPPING;
                if (waitStreamEnd) {
                    mState = STATE_STOPPED;
                    mReleased = 0;
                }
            }
            if (waitStreamEnd && status != DEAD_OBJECT) {
               return NS_INACTIVE;
            }
            break;
        }
        return 0;
    }

    // Deliver the callbacks for the states determined above...

    if (newUnderrun) {
        mCbf(EVENT_UNDERRUN, mUserData, NULL);
    }

    // loop-end callbacks
    while (loopCountNotifications > 0) {
        mCbf(EVENT_LOOP_END, mUserData, NULL);
        --loopCountNotifications;
    }
    if (flags & CBLK_BUFFER_END) {
        mCbf(EVENT_BUFFER_END, mUserData, NULL);
    }
    if (markerReached) {
        mCbf(EVENT_MARKER, mUserData, &markerPosition);
    }

    // one callback each time updatePeriod frames have been played
    while (newPosCount > 0) {
        size_t temp = newPosition;
        mCbf(EVENT_NEW_POS, mUserData, &temp);
        newPosition += updatePeriod;
        newPosCount--;
    }

    if (mObservedSequence != sequence) {
        mObservedSequence = sequence;
        mCbf(EVENT_NEW_IAUDIOTRACK, mUserData, NULL);
        // for offloaded tracks, just wait for the upper layers to recreate the track
        if (isOffloadedOrDirect()) {
            return NS_INACTIVE;
        }
    }

    // if inactive, then don't run me again until re-started
    if (!active) {
        return NS_INACTIVE;
    }

    //...

    // EVENT_MORE_DATA callback handling.
    // Timing for linear pcm audio data formats can be derived directly from the
    // buffer fill level.
    // Timing for compressed data is not directly available from the buffer fill level,
    // rather indirectly from waiting for blocking mode callbacks or waiting for obtain()
    // to return a certain fill level.

    struct timespec timeout;
    const struct timespec *requested = &ClientProxy::kForever;
    if (ns != NS_WHENEVER) {
        timeout.tv_sec = ns / 1000000000LL;
        timeout.tv_nsec = ns % 1000000000LL;
        ALOGV("timeout %ld.%03d", timeout.tv_sec, (int) timeout.tv_nsec / 1000000);
        requested = &timeout;
    }

    // while part of the audio stream has not been written yet
    while (mRemainingFrames > 0) {

        Buffer audioBuffer;
        audioBuffer.frameCount = mRemainingFrames;
        size_t nonContig;
        // request an available buffer from the shared-memory block
        status_t err = obtainBuffer(&audioBuffer, requested, NULL, &nonContig);
        LOG_ALWAYS_FATAL_IF((err != NO_ERROR) != (audioBuffer.frameCount == 0),
                "obtainBuffer() err=%d frameCount=%zu", err, audioBuffer.frameCount);
        requested = &ClientProxy::kNonBlocking;
        size_t avail = audioBuffer.frameCount + nonContig;
        ALOGV("obtainBuffer(%u) returned %zu = %zu + %zu err %d",
                mRemainingFrames, avail, audioBuffer.frameCount, nonContig, err);
        if (err != NO_ERROR) {
            if (err == TIMED_OUT || err == WOULD_BLOCK || err == -EINTR ||
                    (isOffloaded() && (err == DEAD_OBJECT))) {
                // FIXME bug 25195759
                return 1000000;
            }
            ALOGE("Error %d obtaining an audio buffer, giving up.", err);
            return NS_NEVER;
        }

        if (mRetryOnPartialBuffer && audio_is_linear_pcm(mFormat)) {
            mRetryOnPartialBuffer = false;
            if (avail < mRemainingFrames) {
                if (ns > 0) { // account for obtain time
                    const nsecs_t timeNow = systemTime();
                    ns = max((nsecs_t)0, ns - (timeNow - timeAfterCallbacks));
                }
                nsecs_t myns = framesToNanoseconds(mRemainingFrames - avail, sampleRate, speed);
                if (ns < 0 /* NS_WHENEVER */ || myns < ns) {
                    ns = myns;
                }
                return ns;
            }
        }

        // callback to the app: more data is needed to fill this buffer
        size_t reqSize = audioBuffer.size;
        mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
        size_t writtenSize = audioBuffer.size;

        // Sanity check on returned size
        if (ssize_t(writtenSize) < 0 || writtenSize > reqSize) {
            ALOGE("EVENT_MORE_DATA requested %zu bytes but callback returned %zd bytes",
                    reqSize, ssize_t(writtenSize));
            return NS_NEVER;
        }

        // ... a long stretch of code elided ... its main responsibilities:
        // The callback is done filling buffers
        // Keep this thread going to handle timed events and
        // still try to get more data in intervals of WAIT_PERIOD_MS
        // but don't just loop and block the CPU, so wait

        // mCbf(EVENT_MORE_DATA, ...) might either
        // (1) Block until it can fill the buffer, returning 0 size on EOS.
        // (2) Block until it can fill the buffer, returning 0 data (silence) on EOS.
        // (3) Return 0 size when no data is available, does not wait for more data.
        //
        // (1) and (2) occurs with AudioPlayer/AwesomePlayer; (3) occurs with NuPlayer.
        // We try to compute the wait time to avoid a tight sleep-wait cycle,
        // especially for case (3).
        //
        // The decision to support (1) and (2) affect the sizing of mRemainingFrames
        // and this loop; whereas for case (3) we could simply check once with the full
        // buffer size and skip the loop entirely.

        // release the buffer and advance the read pointer
        releaseBuffer(&audioBuffer);

        // FIXME here is where we would repeat EVENT_MORE_DATA again on same advanced buffer
        // if callback doesn't like to accept the full chunk
        if (writtenSize < reqSize) {
            continue;
        }

        // There could be enough non-contiguous frames available to satisfy the remaining request
        if (mRemainingFrames <= nonContig) {
            continue;
        }

        //......
    }

    return 0;
}

This is a very long stretch of code, but in essence it uses the positions of the read and write pointers in the shared memory block to determine the current state of the track. The state values delivered through mCbf are defined in the event_type enum in /frameworks/av/include/media/AudioTrack.h:

    /* Events used by AudioTrack callback function (callback_t).
     * Keep in sync with frameworks/base/media/java/android/media/AudioTrack.java NATIVE_EVENT_*.
     */
    enum event_type {
        EVENT_MORE_DATA = 0,        // More data is needed: request to write more data
                                    // into the buffer.

        EVENT_UNDERRUN = 1,         // Underrun: unable to supply more data for the moment.

        EVENT_LOOP_END = 2,         // A loop iteration has ended.

        EVENT_MARKER = 3,           // Playback reached the marker position.

        EVENT_NEW_POS = 4,          // Playback head is at a new position
                                    // (See setPositionUpdatePeriod()).
        EVENT_BUFFER_END = 5,       // Playback has completed for a static track.
        EVENT_NEW_IAUDIOTRACK = 6,  // IAudioTrack was re-created, either due to re-routing and
                                    // voluntary invalidation by mediaserver, or mediaserver crash.
        EVENT_STREAM_END = 7,       // Playback has ended: sent after all the buffers queued
                                    // in AF and HW are played
                                    // back (after stop is called) for an offloaded track.
#if 0   // FIXME not yet implemented
        EVENT_NEW_TIMESTAMP = 8,    // Delivered periodically and when there's a significant change
                                    // in the mapping from frame position to presentation time.
                                    // See AudioTimestamp for the information included with event.
#endif
    };
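
To make the event dispatch concrete, here is a hedged sketch of a callback_t consumer (hypothetical code: MyContext, myCallback and the file source are made up for illustration). Note also how EVENT_NEW_POS batching works in processAudioBuffer() above: with position = 2048, mNewPosition = 1024 and updatePeriod = 512, newPosCount = ((2048 - 1024) / 512) + 1 = 3, so the callback fires three times.

#define LOG_TAG "MyAudioClient"
#include <stdio.h>
#include <utils/Log.h>
#include <media/AudioTrack.h>

using namespace android;

// Hypothetical callback_t implementation, matching the event_type values above.
struct MyContext {
    FILE *pcmFile;   // assumed already open, supplying raw PCM
};

static void myCallback(int event, void *user, void *info) {
    MyContext *ctx = static_cast<MyContext *>(user);
    switch (event) {
    case AudioTrack::EVENT_MORE_DATA: {
        // info is an AudioTrack::Buffer; fill buffer->raw and set buffer->size
        // to the number of bytes actually written (0 means "no data for now").
        AudioTrack::Buffer *buffer = static_cast<AudioTrack::Buffer *>(info);
        buffer->size = fread(buffer->raw, 1, buffer->size, ctx->pcmFile);
        break;
    }
    case AudioTrack::EVENT_UNDERRUN:
        ALOGW("underrun: the server consumed data faster than we supplied it");
        break;
    case AudioTrack::EVENT_NEW_POS:
        // fires once every setPositionUpdatePeriod() frames; info is the position
        ALOGV("playback head at frame %zu", *static_cast<size_t *>(info));
        break;
    case AudioTrack::EVENT_STREAM_END:
        ALOGV("stream finished draining");
        break;
    default:
        break;
    }
}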

That finally wraps up the analysis. Now let's see how the data gets called back into Java space. mCbf is declared in AudioTrack.h with the type callback_t, so we only need to find the definition of callback_t:

typedef void (*callback_t)(int event, void* user, void *info);

So callback_t is simply a function pointer, and it is passed in when JNI calls set() on the AudioTrack object. Let's look back at the earlier JNI implementation:

    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
    // the key part: keep global references for use by the callback
    lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioTrack object can be garbage collected.
    lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
    lpJniStorage->mCallbackData.busy = false;

    // initialize the native AudioTrack object
    status_t status = NO_ERROR;
    switch (memoryMode) {
    case MODE_STREAM:

        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    // ... (MODE_STATIC case elided)
    }

The corresponding Java-side names are defined as constants in the JNI file:

// field names found in android/media/AudioTrack.java
#define JAVA_POSTEVENT_CALLBACK_NAME                    "postEventFromNative"
#define JAVA_NATIVETRACKINJAVAOBJ_FIELD_NAME            "mNativeTrackInJavaObj"
#define JAVA_JNIDATA_FIELD_NAME                         "mJniData"
#define JAVA_STREAMTYPE_FIELD_NAME                      "mStreamType"

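Using these names, the JNI layer resolves postEventFromNative once at initialization and invokes it from audioCallback, the callback_t it passed to set(). A condensed sketch of that forwarding path (paraphrased from android_media_AudioTrack.cpp, not the verbatim code; details vary by version):

// Condensed sketch of the JNI-side event forwarding.
static void audioCallback(int event, void* user, void *info) {
    audiotrack_callback_cookie *callbackInfo = (audiotrack_callback_cookie *)user;
    // The callback thread was attached to the VM because set() was given
    // threadCanCallJava == true, so a JNIEnv is available for it.
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (env == NULL) {
        return;
    }
    // Calls android.media.AudioTrack.postEventFromNative(Object, int, int, int, Object)
    env->CallStaticVoidMethod(
            callbackInfo->audioTrack_class,
            javaAudioTrackFields.postNativeEventInJava,   // method ID of "postEventFromNative"
            callbackInfo->audioTrack_ref, event, 0, 0, NULL);
}
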
The Java side handles the Native-layer callback here:

    //---------------------------------------------------------
    // Java methods called from the native side
    //--------------------
    @SuppressWarnings("unused")
    private static void postEventFromNative(Object audiotrack_ref,
            int what, int arg1, int arg2, Object obj) {
        //logd("Event posted from the native side: event="+ what + " args="+ arg1+" "+arg2);
        AudioTrack track = (AudioTrack)((WeakReference)audiotrack_ref).get();
        if (track == null) {
            return;
        }

        if (what == AudioSystem.NATIVE_EVENT_ROUTING_CHANGE) {
            track.broadcastRoutingChange();
            return;
        }
        NativePositionEventHandlerDelegate delegate = track.mEventHandlerDelegate;
        if (delegate != null) {
            Handler handler = delegate.getHandler();
            if (handler != null) {
                Message m = handler.obtainMessage(what, arg1, arg2, obj);
                handler.sendMessage(m);
            }
        }
    }

We won't dig further into what happens from there; otherwise this would truly never end ^v^ ~.

1.2. Creating the Client Proxy Object

Let's move on to the second important thing AudioTrack does during initialization. Recall the code:

status_t status = createTrack_l();

This method is crucial: it not only builds the communication bridge between client and server, it also effectively tells the server "the client side is nearly ready, time for the server to get ready too." Let's keep analyzing the code; the important parts are called out in comments.

// must be called with mLock held
status_t AudioTrack::createTrack_l()
{
    // Obtain the AudioFlinger proxy object through AudioSystem; business
    // requests to the server go through it
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    if (mDeviceCallback != 0 && mOutput != AUDIO_IO_HANDLE_NONE) {
        AudioSystem::removeAudioDeviceCallback(mDeviceCallback, mOutput);
    }

    // A very important variable: output is effectively an index of a
    // server-side playback thread; the client uses it to locate the
    // corresponding audio output thread
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;

    // 1. Determine the output handle from the parameters passed down from JNI;
    //    this actually calls into the server side
    status_t status;
    status = AudioSystem::getOutputForAttr(attr, &output,
                                           (audio_session_t)mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);

    // ... query the audio parameters used by the server output thread that
    //     handles this kind of stream ...

    // Client decides whether the track is TIMED (see below), but can only express a preference
    // for FAST.  Server will perform additional tests.
    if ((mFlags & AUDIO_OUTPUT_FLAG_FAST) && !((
            // either of these use cases:
            // use case 1: shared buffer
            (mSharedBuffer != 0) ||
            // use case 2: callback transfer mode
            (mTransfer == TRANSFER_CALLBACK) ||
            // use case 3: obtain/release mode
            (mTransfer == TRANSFER_OBTAIN)) &&
            // matching sample rate
            (mSampleRate == mAfSampleRate))) {
        ALOGW("AUDIO_OUTPUT_FLAG_FAST denied by client; transfer %d, track %u Hz, output %u Hz",
                mTransfer, mSampleRate, mAfSampleRate);
        // once denied, do not request again if IAudioTrack is re-created
        mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_FAST);
    }

    // The client's AudioTrack buffer is divided into n parts for purpose of wakeup by server, where
    //  n = 1   fast track with single buffering; nBuffering is ignored
    //  n = 2   fast track with double buffering
    //  n = 2   normal track, (including those with sample rate conversion)
    //  n >= 3  very high latency or very small notification interval (unused).
    const uint32_t nBuffering = 2;

    mNotificationFramesAct = mNotificationFramesReq;

    // ... computation of frameCount (elided) ...

    // ... setup of trackFlags (elided) ...

    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    int originalSessionId = mSessionId;
    // 2. Call createTrack() on the server side, which returns the IAudioTrack
    //    interface analyzed below
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &trackFlags,
                                                      mSharedBuffer,
                                                      output,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);
    ALOGE_IF(originalSessionId != AUDIO_SESSION_ALLOCATE && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (status != NO_ERROR) {
        ALOGE("AudioFlinger could not create track, status: %d", status);
        goto release;
    }
    ALOG_ASSERT(track != 0);

    // AudioFlinger now owns the reference to the I/O handle,
    // so we are no longer responsible for releasing it.

    // Fetch the audio stream control block; IMemory is a cross-process
    // interface for operating on shared memory
    sp<IMemory> iMem = track->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        IInterface::asBinder(mAudioTrack)->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    mAudioTrack = track;
    mCblkMemory = iMem;
    IPCThreadState::self()->flushCommands();

    // Cast the base address to audio_track_cblk_t. We will analyze this type in
    // a later article; for now, just know it is the struct that manages the
    // shared memory block
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    // note that temp is the (possibly revised) value of frameCount
    if (temp < frameCount || (frameCount == 0 && temp == 0)) {
        // In current design, AudioTrack client checks and ensures frame count validity before
        // passing it to AudioFlinger so AudioFlinger should not return a different value except
        // for fast track as it uses a special method of assigning frame count.
        ALOGW("Requested frameCount %zu but received frameCount %zu", frameCount, temp);
    }
    frameCount = temp;

    mAwaitBoost = false;
    if (mFlags & AUDIO_OUTPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST successful; frameCount %zu", frameCount);
            mAwaitBoost = true;
        } else {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount %zu", frameCount);
            // once denied, do not request again if IAudioTrack is re-created
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_FAST);
        }
    }
    if (mFlags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        if (trackFlags & IAudioFlinger::TRACK_OFFLOAD) {
            ALOGV("AUDIO_OUTPUT_FLAG_OFFLOAD successful");
        } else {
            ALOGW("AUDIO_OUTPUT_FLAG_OFFLOAD denied by server");
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD);
            // FIXME This is a warning, not an error, so don't return error status
            //return NO_INIT;
        }
    }
    if (mFlags & AUDIO_OUTPUT_FLAG_DIRECT) {
        if (trackFlags & IAudioFlinger::TRACK_DIRECT) {
            ALOGV("AUDIO_OUTPUT_FLAG_DIRECT successful");
        } else {
            ALOGW("AUDIO_OUTPUT_FLAG_DIRECT denied by server");
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_DIRECT);
        }
    }

    // ... (the rest of createTrack_l, and the release: label, are elided here)
}
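
The tail of createTrack_l() is elided above; what it mainly does is wrap the control block in a client-side proxy and register a Binder death notifier. A sketch abbreviated from AOSP 6.x-era sources, not the verbatim code (in stream mode, buffers points just past the control block, i.e. (char*)cblk + sizeof(audio_track_cblk_t)):

    // Abbreviated continuation sketch, not verbatim source.
    // update proxy
    if (mSharedBuffer == 0) {
        // stream mode: the data buffer lives right after the control block
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
    } else {
        // static mode: the app-supplied shared buffer is the data buffer
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
        mProxy = mStaticProxy;
    }

    // Get notified if the server-side IAudioTrack dies, so the track can be restored
    mDeathNotifier = new DeathNotifier(this);
    IInterface::asBinder(mAudioTrack)->linkToDeath(mDeathNotifier, this);

With the IAudioTrack handle, the shared-memory control block, and the client proxy in place, the client side of the initialization is essentially complete.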
