Android 9 Audio System Notes: AudioRecord

Posted by Mr.Biandan


AudioRecord

Preface

There is no getting around the recording side, and I had no idea where to start, so I gritted my teeth and went through AudioRecord first. Anything touching audioflinger is full of twists and turns. Are you ready?

AudioRecord

AudioRecord, the core of the recording side, boils down to a few parts:
1. AudioRecord creation: the corresponding threads get created
2. Establishing the AudioRecord audio routing
3. Reading the data

Part 1: AudioRecord creation

Starting from frameworks/base/media/java/android/media/AudioRecord.java:
new AudioRecord();

frameworks/base/media/java/android/media/AudioRecord.java
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int sessionId) throws IllegalArgumentException 
	1. Mark mRecordingState as the stopped state;
    2. Obtain a MainLooper;
    3. Check whether the recording source is REMOTE_SUBMIX (worth digging into if you are interested);
    4. Re-fetch the rate and format parameters. AUDIO_FORMAT_HAS_PROPERTY_X decides where each parameter is read from, and since the earlier constructor already set that flag when storing the parameters, both values are still the ones we passed in;
    5. Call audioParamCheck to validate the parameters once more;
    6. Obtain the channel count and channel mask; the mono mask is 0x10, the stereo mask is 0x0c;
    7. Call audioBuffSizeCheck to verify that the minimum buffer size is legal;
    8. Call the native function native_setup. Note the parameters passed down: a pointer to this object, the recording source, rate, channel mask, format, minBuffSize, and session[];
    9. Mark mRecordingState as the inited state;
        Note on SessionId:
             A session is a conversation, and every session is identified by a unique id. That id is ultimately managed in AudioFlinger.
             One session can be shared by several AudioTrack objects and MediaPlayers.
             AudioTracks and MediaPlayers sharing one session share the same AudioEffect (sound effect).
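Steps 5 to 7 above can be condensed into a small stand-alone sketch. The masks 0x10/0x0c and the multiple-of-frame-size rule come from the text; the function names and constants here are illustrative, not the framework's:

```cpp
#include <cstdint>

// Illustrative input channel masks mirroring the values quoted above.
constexpr uint32_t kChannelInMono   = 0x10;  // one bit set -> 1 channel
constexpr uint32_t kChannelInStereo = 0x0c;  // two bits set -> 2 channels

// Derive the channel count from a channel mask (popcount of the mask bits).
int channelCountFromInMask(uint32_t mask) {
    int n = 0;
    for (uint32_t m = mask; m != 0; m >>= 1) n += (m & 1);
    return n;
}

// Sketch of audioBuffSizeCheck: the buffer must be non-empty and a whole
// multiple of the frame size (channelCount * bytesPerSample).
bool buffSizeIsValid(int bufferSizeInBytes, int channelCount, int bytesPerSample) {
    const int frameSize = channelCount * bytesPerSample;
    return bufferSizeInBytes > 0 && (bufferSizeInBytes % frameSize) == 0;
}
```

For 16-bit stereo PCM the frame size is 4 bytes, so 4096 passes the check while 4095 does not.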

8.1 native_setup

//frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)

	8.1.1. Check that the channel mask is legal, then compute the channel count from the mask;
    8.1.2. The minimum buffer size equals frame count * frame size, where the frame size is the number of bytes occupied by all channels of one sample; from that, derive the frame count frameCount;
    8.1.3. Do a series of JNI handling for the recording source, and bind the AudioRecord.java pointer into the lpCallbackData callback data
    so data can be delivered to the upper layer via callbacks;
    8.1.4. Call AudioRecord's set function (lpRecorder->set). Note the flags argument here, of type audio_input_flags_t,
    defined in system/core/include/system/audio.h
    as the audio input flags; here it is set to AUDIO_INPUT_FLAG_NONE:
typedef enum {
    AUDIO_INPUT_FLAG_NONE       = 0x0,  // no attributes
    AUDIO_INPUT_FLAG_FAST       = 0x1,  // prefer an input that supports "fast tracks"
    AUDIO_INPUT_FLAG_HW_HOTWORD = 0x2,  // prefer an input that captures from hw hotword source
} audio_input_flags_t;
    8.1.5. Save the lpRecorder object and the lpCallbackData callback into the corresponding fields of javaAudioRecordFields.

8.1.4 set

//frameworks/av/media/libmedia/AudioRecord.cpp
//9.0: frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        audio_input_flags_t flags,
        uid_t uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        audio_port_handle_t selectedDeviceId,
        audio_microphone_direction_t selectedMicDirection,
        float microphoneFieldDimension)

	8.1.4.1. The parameters passed in from JNI: transferType is TRANSFER_DEFAULT, cbf != null, threadCanCallJava = true,
	so mTransfer is set to TRANSFER_SYNC; it decides how data is transferred from the AudioRecord and comes up again later;
    8.1.4.2. Save the related parameters: recording source mAttributes.source, sample rate mSampleRate, sample format mFormat,
    channel mask mChannelMask, channel count mChannelCount, frame size mFrameSize, frame count mReqFrameCount,
    notification frame count mNotificationFramesReq; mSessionId is updated here;
    the audio input flags mFlags are still the earlier AUDIO_INPUT_FLAG_NONE;
    8.1.4.3. When the cbf data callback is not null, start a recording thread AudioRecordThread:
    mAudioRecordThread = new AudioRecordThread(*this);
	8.1.4.4. Call openRecord_l(0) to create the IAudioRecord object; on 9.0 this is createRecord_l(0);
	8.1.4.5. If setup fails, destroy the AudioRecordThread recording thread; otherwise update the parameters.
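The TRANSFER_DEFAULT resolution in 8.1.4.1 can be pictured as a tiny stand-alone sketch; the enum and helper below are illustrative stand-ins, with only the decision rule taken from the text (a supplied callback with threadCanCallJava == true lands on TRANSFER_SYNC):

```cpp
// Illustrative subset of AudioRecord's transfer types.
enum transfer_type { TRANSFER_DEFAULT, TRANSFER_SYNC, TRANSFER_CALLBACK };

// Resolve TRANSFER_DEFAULT: with no callback, or a callback that may call
// back into Java, data is pulled synchronously; otherwise the callback
// thread pushes it.
transfer_type resolveTransfer(transfer_type requested, bool hasCallback,
                              bool threadCanCallJava) {
    if (requested != TRANSFER_DEFAULT) return requested;
    return (!hasCallback || threadCanCallJava) ? TRANSFER_SYNC : TRANSFER_CALLBACK;
}
```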

8.1.4.4 Creating the IAudioRecord object: createRecord_l

//frameworks/av/media/libmedia/AudioRecord.cpp
//frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::createRecord_l(const Modulo<uint32_t> &epoch, const String16& opPackageName)

	A.1. Obtain the IAudioFlinger object; it talks to AudioFlinger over binder, so this is effectively calling straight into the AudioFlinger service;
    A.2. Check the audio input flags to see whether the AUDIO_INPUT_FLAG_FAST bit needs clearing; not needed here, it stays AUDIO_INPUT_FLAG_NONE;
    A.3. (On older releases this is where AudioSystem::getInputForAttr obtained the input stream handle and audioFlinger->openRecord created the IAudioRecord; 9.0 no longer does this here);
    A.4. Call audioFlinger->createRecord(input, output, &status) to create the IAudioRecord object;
    A.5. Fetch the recording data through IMemory shared memory (obtained via the output object from step A.4);
    A.6. Update the recording data of the AudioRecordClientProxy client proxy.

A.4 Calling audioFlinger->createRecord(input, output, &status) to create the IAudioRecord object

//frameworks/av/services/audioflinger/AudioFlinger.cpp
sp<media::IAudioRecord> AudioFlinger::createRecord(const CreateRecordInput& input,
                                                   CreateRecordOutput& output,
                                                   status_t *status)

	B.1. Validate the parameters
	B.2. registerPid(clientPid)
	B.3. Call AudioSystem::getInputForAttr to obtain the input stream handle input
	B.4. RecordThread *thread = checkRecordThread_l(output.inputId): look up the corresponding RecordThread, which was already created when the device node was opened;
	B.5. Create the RecordThread::RecordTrack: thread->createRecordTrack_l
	B.6. Check whether there are audio effects; if so, add them: thread->addEffectChain_l(chain);
	B.7. Set up the shared memory: output.cblk = recordTrack->getCblk(), output.buffers = recordTrack->getBuffers()
	B.8. Return a RecordHandle for the client to use: recordHandle = new RecordHandle(recordTrack)

B.3 Calling AudioSystem::getInputForAttr to obtain the input stream handle input

This differs a lot from 5.1: not only is the matching device selected here, the device is also opened here. In one sentence: input audio routing and device management are all handled here.

//frameworks/av/media/libmedia/AudioSystem.cpp
status_t AudioSystem::getInputForAttr(const audio_attributes_t *attr,
                                audio_io_handle_t *input,
                                audio_session_t session,
                                uint32_t samplingRate,
                                audio_format_t format,
                                audio_channel_mask_t channelMask,
                                audio_input_flags_t flags)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getInputForAttr(attr, input, session, samplingRate, format, channelMask, flags);
}
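The NO_INIT guard above is a common pattern: fetch the service proxy, fail fast if it is not up yet, otherwise forward the call. A self-contained sketch of just that pattern, with an illustrative stand-in service and error codes rather than the binder machinery:

```cpp
#include <memory>

constexpr int OK = 0;
constexpr int NO_INIT = -19;   // illustrative error value, not the AOSP one

// Stand-in for the remote policy service proxy.
struct PolicyService {
    int getInputForAttr() { return OK; }
};

// Forwarding call with the fail-fast guard: never dereference a missing proxy.
int getInputForAttrSketch(const std::shared_ptr<PolicyService>& aps) {
    if (aps == nullptr) return NO_INIT;  // service not available yet
    return aps->getInputForAttr();
}
```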


B.3.1 AudioPolicyService::getInputForAttr

frameworks/av/services/audiopolicy/AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags)

	C.1. For the recording sources HOTWORD and FM_TUNER, check (by the app's process id) whether the caller holds the matching recording permission;
    C.2. Call down into AudioPolicyManager (AudioPolicyManager::getInputForAttr) to obtain input and inputType;
    C.3. Check whether the app holds the recording permission for that inputType;
    C.4. Check whether audio effects need adding (audioPolicyEffects); if so, add them with audioPolicyEffects->addInputEffects.

C.2 AudioPolicyManager::getInputForAttr

//frameworks/av/services/audiopolicy/AudioPolicyManager.cpp
status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags,
                                             input_type_t *inputType)

 	D.1. device = getDeviceAndMixForInputSource(inputSource, &policyMix): obtain the policyMix device and the corresponding audio_device_t device type (device);
	D.2. Obtain the input via *input = getInputForDevice(device, address, session, uid, inputSource, ...), which:
	D.2.1. determines the inputType;
	D.2.2. updates channelMask, adapting the channels to the input source;
    D.2.3. calls getInputProfile, comparing the incoming sample rate/format/mask parameters against the Input Profiles supported by the obtained device,
    and returns an IOProfile object matching the device's profile. An IOProfile describes the capabilities of an output or input stream;
    the policy manager uses it to decide whether an output or input suits a given use case, to open/close it accordingly, and to attach/detach audio tracks;
    D.2.4. if that fails, retries once with AUDIO_INPUT_FLAG_NONE; if it still fails, returns bad news;
    D.2.5. calls mpClientInterface->openInput to establish the input stream;
    D.2.6. constructs an AudioInputDescriptor from the IOProfile object, binds it to the input stream, and finally updates the AudioPortList.

D.1 Calling getDeviceAndMixForInputSource

First look at step 1 of AudioPolicyManager.cpp::getInputForAttr(): obtaining the policyMix device and the corresponding audio_device_t device type (device).

audio_devices_t AudioPolicyManager::getDeviceAndMixForInputSource(audio_source_t inputSource,
                                                            AudioMix **policyMix)

   Here the InputSource is used to look up the matching policyMix and audio_device_t device type;
   it also shows how many categories Android divides audio devices into.

Much like how AudioTrack's audio policy finds the matching device type, this is the recording-side policy.
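In that spirit, the lookup boils down to a source-to-device table. A simplified sketch follows; the enum values and the pairs in the switch are illustrative assumptions, not the real policy engine's table:

```cpp
// Illustrative subsets of audio_source_t and audio_devices_t.
enum SourceSketch { SRC_MIC, SRC_VOICE_CALL, SRC_FM_TUNER };
enum DeviceSketch { DEV_BUILTIN_MIC, DEV_TELEPHONY_RX, DEV_FM_TUNER, DEV_NONE };

// Route each input source to a default capture device, in the spirit of
// getDeviceAndMixForInputSource (policyMix handling omitted).
DeviceSketch deviceForSource(SourceSketch src) {
    switch (src) {
        case SRC_MIC:        return DEV_BUILTIN_MIC;   // ordinary recording
        case SRC_VOICE_CALL: return DEV_TELEPHONY_RX;  // call downlink capture
        case SRC_FM_TUNER:   return DEV_FM_TUNER;      // FM radio capture
    }
    return DEV_NONE;  // unknown source: no device
}
```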

D.2 Obtaining the input: *input = getInputForDevice


audio_io_handle_t AudioPolicyManager::getInputForDevice(audio_devices_t device,
                                                        String8 address,
                                                        audio_session_t session,
                                                        uid_t uid,
                                                        audio_source_t inputSource,
                                                        const audio_config_base_t *config,
                                                        audio_input_flags_t flags,
                                                        AudioMix *policyMix)

// A pile of parameter checks, plus juggling things back and forth through AudioSession
sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile, mpClientInterface);
// This is the real actor: it opens the HW device node
status_t status = inputDesc->open(&lConfig, device, address,
            halInputSource, profileFlags, &input);

B.5 Creating the RecordThread::RecordTrack: thread->createRecordTrack_l

1. This binds the thread looked up in step B.4. 2. It sets up the shared memory.

//frameworks/av/services/audioflinger/Threads.cpp
sp<AudioFlinger::RecordThread::RecordTrack> AudioFlinger::RecordThread::createRecordTrack_l(
        const sp<AudioFlinger::Client>& client,
        const audio_attributes_t& attr,
        uint32_t *pSampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        audio_session_t sessionId,
        size_t *pNotificationFrameCount,
        uid_t uid,
        audio_input_flags_t *flags,
        pid_t tid,
        status_t *status,
        audio_port_handle_t portId)

	E.1. General validation: whether things are initialized, and so on
	E.2. Create the RecordTrack: track = new RecordTrack, binding the thread looked up earlier (step B.4) at creation time
	E.3. Check that the RecordTrack is valid and add it via mTracks.add(track) (like a playback thread, one thread manages the RecordTracks);
	E.4. Notify that the device changed: sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp, true /*forApp*/);

E.2 Creating the RecordTrack: track = new RecordTrack

The RecordTrack holds a lot, including the shared memory.

//frameworks/av/services/audioflinger/Tracks.cpp
AudioFlinger::RecordThread::RecordTrack::RecordTrack(
            RecordThread *thread,
            const sp<Client>& client,
            const audio_attributes_t& attr,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            size_t bufferSize,
            audio_session_t sessionId,
            pid_t creatorPid,
            uid_t uid,
            audio_input_flags_t flags,
            track_type type,
            const String16& opPackageName,
            audio_port_handle_t portId)
    :   TrackBase(thread, client, attr, sampleRate, format,
                  channelMask, frameCount, buffer, bufferSize, sessionId,
                  creatorPid, uid, false /*isOut*/,
                  (type == TYPE_DEFAULT) ?
                          ((flags & AUDIO_INPUT_FLAG_FAST) ? ALLOC_PIPE : ALLOC_CBLK) :
                          ((buffer == NULL) ? ALLOC_LOCAL : ALLOC_NONE),
                  type, portId,
                  std::string(AMEDIAMETRICS_KEY_PREFIX_AUDIO_RECORD) + std::to_string(portId)),
        mOverflow(false),
        mFramesToDrop(0),
        mResamplerBufferProvider(NULL), // initialize in case of early constructor exit
        mRecordBufferConverter(NULL),
        mFlags(flags),
        mSilenced(false),
        mOpRecordAudioMonitor(OpRecordAudioMonitor::createIfNeeded(uid, attr, opPackageName))

	1. mServerProxy = new AudioRecordServerProxy(mCblk, mBuffer, frameCount,
            mFrameSize, !isExternalTrack());

	2. mResamplerBufferProvider = new ResamplerBufferProvider(this);
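The buffer-allocation choice buried in the TrackBase initializer above (the `(type == TYPE_DEFAULT) ? ... : ...` ternary) is easier to read isolated. In this sketch the enums are trimmed to what the ternary needs and `hasCallerBuffer` stands in for `buffer != NULL`:

```cpp
#include <cstdint>

enum TrackType { TYPE_DEFAULT, TYPE_PATCH };               // trimmed track_type
enum AllocType { ALLOC_PIPE, ALLOC_CBLK, ALLOC_LOCAL, ALLOC_NONE };
constexpr uint32_t FLAG_FAST = 0x1;                        // AUDIO_INPUT_FLAG_FAST

// Default tracks get a pipe (fast path) or a control-block buffer; other
// track types allocate locally unless the caller already supplied a buffer.
AllocType chooseAlloc(TrackType type, uint32_t flags, bool hasCallerBuffer) {
    if (type == TYPE_DEFAULT)
        return (flags & FLAG_FAST) ? ALLOC_PIPE : ALLOC_CBLK;
    return hasCallerBuffer ? ALLOC_NONE : ALLOC_LOCAL;
}
```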


E.4 Notifying of the device change: sendPrioConfigEvent_l

//frameworks/av/services/audioflinger/Threads.cpp
// sendPrioConfigEvent_l() must be called with ThreadBase::mLock held
void AudioFlinger::ThreadBase::sendPrioConfigEvent_l(
        pid_t pid, pid_t tid, int32_t prio, bool forApp)
{
    sp<ConfigEvent> configEvent = (ConfigEvent *)new PrioConfigEvent(pid, tid, prio, forApp);
    sendConfigEvent_l(configEvent);
}


status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
    status_t status = NO_ERROR;

    if (event->mRequiresSystemReady && !mSystemReady) {
        event->mWaitStatus = false;
        mPendingConfigEvents.add(event);
        return status;
    }
    mConfigEvents.add(event);
    ALOGV("sendConfigEvent_l() num events %zu event %d", mConfigEvents.size(), event->mType);
    mWaitWorkCV.signal();
    mLock.unlock();
    {
        Mutex::Autolock _l(event->mLock);
        while (event->mWaitStatus) {
            if (event->mCond.waitRelative(event->mLock, kConfigEventTimeoutNs) != NO_ERROR) {
                event->mStatus = TIMED_OUT;
                event->mWaitStatus = false;
            }
        }
        status = event->mStatus;
    }
    mLock.lock();
    return status;
}


Handling the PrioConfigEvent

frameworks/av/services/audioflinger/Threads.cpp
void AudioFlinger::ThreadBase::processConfigEvents_l()
{
    // ...
        case CFG_EVENT_PRIO: {
            PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
            // FIXME Need to understand why this has to be done asynchronously
            int err = requestPriority(data->mPid, data->mTid, data->mPrio, data->mForApp,
                    true /*asynchronous*/);
            if (err != 0) {
                ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
                      data->mPrio, data->mPid, data->mTid, err);
            }
        } break;
    // ...
}


Part 2: Establishing the AudioRecord audio routing

Starting from startRecording

frameworks/base/media/java/android/media/AudioRecord.java
public void startRecording()
throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("startRecording() called on an "
                + "uninitialized AudioRecord.");
    }
    // start recording
    synchronized(mRecordingStateLock) {
        if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
            handleFullVolumeRec(true);
            mRecordingState = RECORDSTATE_RECORDING;
        }
    }
}


frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }

    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}


AudioRecord::start

//frameworks/av/media/libaudioclient/AudioRecord.cpp
status_t AudioRecord::start(AudioSystem::sync_event_t event, audio_session_t triggerSession)

	1. Reset the write start position for the recording data in the current record buffer; the buffer layout was covered in the first article;
	2. Mark mRefreshRemaining as true; per the comments it forces a refresh of the remaining frames, and its role should show up later, so no rush;
	3. Fetch flags from mCblk->mFlags, here 0x04;
	4. On this first pass we surely take mAudioRecord->start();
	5. If start fails, call restoreRecord_l to rebuild the input stream channel; that function was analyzed in the previous article;
	6. Call the AudioRecordThread thread's resume function;


//frameworks/av/services/audioflinger/Threads.cpp
status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           audio_session_t triggerSession)

	F.1. Check the incoming event value; from AudioRecord.java it is always SYNC_EVENT_NONE, so the SyncStartEvent is cleared here;
    F.2. Check whether the recordTrack passed in is the first one in the mActiveTracks collection. On this first pass it certainly is; if it is not, recording already started earlier due to some state, so further checks follow
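The "is this the first active track?" test in F.2 is essentially the following, with recording I/O only kicked off when the first track goes active and later tracks simply joining. A bare-bones stand-in, using track ids in place of RecordTrack objects:

```cpp
#include <set>

// Toy stand-in for RecordThread's active-track bookkeeping.
struct RecordThreadSketch {
    std::set<int> activeTracks;   // stand-in for mActiveTracks

    // Returns true when this call activated the very first track,
    // i.e. when actual capture needs to be started.
    bool startTrack(int trackId) {
        const bool wasEmpty = activeTracks.empty();
        activeTracks.insert(trackId);
        return wasEmpty;
    }
};
```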
