Binder Workflow

Posted by 白嫩豆腐


Preface

A previous article covered a simple Binder usage example; this one only gives a rough overview of how Binder communication works end to end.

Main Content

A Binder communication actually involves four parts:

  1. The server
  2. The client
  3. The binder driver
  4. ServiceManager

The server registers itself and then waits for clients to connect; that is the business-level view. The underlying implementation relies on the binder driver and ServiceManager. For communication alone, ServiceManager is not strictly necessary, but it exists for service management and for layering (the kernel layer should not manage framework-layer data). ServiceManager is itself just a special service whose handle is 0; it is started and registered during boot by init.

  1. ServiceManager starts and registers itself with the binder driver (as handle 0).
  2. The server sends a request addressed to handle 0 carrying the addService command.
  3. ServiceManager records the service's handle number and process information.
  4. The client sends a request addressed to handle 0 carrying the getService command (with the service name) and gets back the target service's handle.
  5. Using that handle, the client can reach the target process, send protocol commands, invoke the service's functions, and receive the corresponding reply.

2.1 ServiceManager

ServiceManager is started by init according to its configuration file. The code is as follows:

int main(int argc, char** argv) {
    
    // open and initialize the binder driver
    sp<ProcessState> ps = ProcessState::initWithDriver(driver);

    sp<ServiceManager> manager = new ServiceManager(std::make_unique<Access>());
    if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
        LOG(ERROR) << "Could not self register servicemanager";
    }

    IPCThreadState::self()->setTheContextObject(manager);
    
    // A Looper is used to receive and dispatch messages.
    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

    // Watch the binder driver fd; wake the loop out of its blocked state
    // when the fd has pending commands.
    BinderCallback::setupTo(looper);
    // A timer fd drives periodic handling to guard against timeouts.
    ClientCallbackCallback::setupTo(looper, manager);

    while(true) {
        looper->pollAll(-1);
    }

    // should not be reached
    return EXIT_FAILURE;
}

Let's look at the Looper first, since it is the simpler part. Two file descriptors are monitored, and events on them wake the thread for processing. Each of the two callback classes implements its job in two functions: one sets up the fd to monitor, the other is the LooperCallback invoked on events. We will skip the timer part and focus only on BinderCallback, which handles normal binder communication. The code is as follows:

class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = new BinderCallback;

        int binder_fd = -1;
        IPCThreadState::self()->setupPolling(&binder_fd);
        LOG_ALWAYS_FATAL_IF(binder_fd < 0, "Failed to setupPolling: %d", binder_fd);

        // Flush after setupPolling(), to make sure the binder driver
        // knows about this thread handling commands.
        IPCThreadState::self()->flushCommands();

        int ret = looper->addFd(binder_fd,
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,
                                cb,
                                nullptr /*data*/);
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }

    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // Continue receiving callbacks.
    }
};

This part is fairly simple. IPCThreadState::self() implements a singleton; it obtains the fd of the opened binder file and hands it to the Looper to monitor.
Eventually the Looper invokes handleEvent, whose implementation calls handlePolledCommands to process the opcodes the binder driver has delivered.
Next, let's look at how the binder fd is opened and how messages are processed:

sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{

    gProcess = new ProcessState(driver);
    return gProcess;
}

ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)
{

    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
    }
}

static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
          ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}

This code is fairly simple: it opens the binder device file, calls ioctl twice to exchange a few parameters (protocol version, max thread count), and calls mmap to obtain a kernel mapping. (That mapping seems to do little here beyond letting the kernel record the process's binder parameters; the in-code comment may be misleading, or I may be misreading it.)
In short, it opens the binder file and sets up some basic binder parameters.
handlePolledCommands is the function actually triggered when the binder fd changes state. Its work is relatively simple: through getAndExecuteCommand it uses ioctl to read what the client process wrote to the binder, parses the opcode, and finally triggers ServiceManager's checkService or getService.
The code is as follows (abridged):

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        // ... (bookkeeping abridged)
        cmd = mIn.readInt32();
        result = executeCommand(cmd);
    }
    return result;
}

talkWithDriver writes the outgoing data through ioctl and/or reads back what was written to us; executeCommand then processes it. Abridged:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // ... (write_size bookkeeping abridged)
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    status_t err;
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;

    return err;
}

Essentially this packs mIn and mOut into a binder_write_read struct and hands it to the binder driver for processing. The next step is executeCommand:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {

    case BR_TRANSACTION_SEC_CTX:
    case BR_TRANSACTION:
        {
            binder_transaction_data_secctx tr_secctx;
            binder_transaction_data& tr = tr_secctx.transaction_data;

            if (cmd == (int) BR_TRANSACTION_SEC_CTX) {
                result = mIn.read(&tr_secctx, sizeof(tr_secctx));
            } else {
                result = mIn.read(&tr, sizeof(tr));
                tr_secctx.secctx = 0;
            }

            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const void* origServingStackPointer = mServingStackPointer;
            mServingStackPointer = &origServingStackPointer; // anything on the stack

            const pid_t origPid = mCallingPid;
            const char* origSid = mCallingSid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
            const int32_t origWorkSource = mWorkSource;
            const bool origPropagateWorkSet = mPropagateWorkSource;
            // Calling work source will be set by Parcel#enforceInterface. Parcel#enforceInterface
            // is only guaranteed to be called for AIDL-generated stubs so we reset the work source
            // here to never propagate it.
            clearCallingWorkSource();
            clearPropagateWorkSource();

            mCallingPid = tr.sender_pid;
            mCallingSid = reinterpret_cast<const char*>(tr_secctx.secctx);
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            // ALOGI(">>>> TRANSACT from pid %d sid %s uid %d\n", mCallingPid,
            //    (mCallingSid ? mCallingSid : "<N/A>"), mCallingUid);

            Parcel reply;
            status_t error;
           
            if (tr.target.ptr) {
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d sid %s uid %d\n",
            //     mCallingPid, origPid, (origSid ? origSid : "<N/A>"), origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                if (error != OK || reply.dataSize() != 0) {
                    alog << "oneway function results will be dropped but finished with status "
                         << statusToString(error)
                         << " and parcel size " << reply.dataSize() << endl;
                }
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mServingStackPointer = origServingStackPointer;
            mCallingPid = origPid;
            mCallingSid = origSid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;
            mWorkSource = origWorkSource;
            mPropagateWorkSource = origPropagateWorkSet;


        }
        break;

    
    // ... (other BR_* cases abridged)
    }

    return result;
}

This is again straightforward: it reads the data from mIn and then calls transact on the_context_object. Since that object is the ServiceManager, it was set directly from main():

    IPCThreadState::self()->setTheContextObject(manager);

In other words, ServiceManager's transact is called, which is implemented in BBinder as follows:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            err = pingBinder();
            break;
        case EXTENSION_TRANSACTION:
            err = reply->writeStrongBinder(getExtension());
            break;
        case DEBUG_PID_TRANSACTION:
            err = reply->writeInt32(getDebugPid());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    // In case this is being transacted on in the same process.
    if (reply != nullptr) {
        reply->setDataPosition(0);
    }

    return err;
}

Now it clicks: this is the onTransact method we normally implement ourselves. However, since the ServiceManager interface is declared in AIDL, the code is generated by the build (it lives under the out directory) and is not pleasant to read, so here is only a rough look:

::android::status_t BnServiceManager::onTransact(uint32_t _aidl_code, const ::android::Parcel& _aidl_data, ::android::Parcel* _aidl_reply, uint32_t _aidl_flags) {
  ::android::status_t _aidl_ret_status = ::android::OK;
  switch (_aidl_code) {
  case ::android::IBinder::FIRST_CALL_TRANSACTION + 0 /* getService */:
  {
    ::std::string in_name;
    ::android::sp<::android::IBinder> _aidl_return;
    if (!(_aidl_data.checkInterface(this))) {
      _aidl_ret_status = ::android::BAD_TYPE;
      break;
    }
    _aidl_ret_status = _aidl_data.readUtf8FromUtf16(&in_name);
    if (((_aidl_ret_status) != (::android::OK))) {
      break;
    }
    ::android::binder::Status _aidl_status(getService(in_name, &_aidl_return));
    _aidl_ret_status = _aidl_status.writeToParcel(_aidl_reply);
    if (((_aidl_ret_status) != (::android::OK))) {
      break;
    }
    if (!_aidl_status.isOk()) {
      break;
    }
    _aidl_ret_status = _aidl_reply->writeStrongBinder(_aidl_return);
    if (((_aidl_ret_status) != (::android::OK))) {
      break;
    }
  }
  break;
  case ::android::IBinder::FIRST_CALL_TRANSACTION + 1 /* checkService */:
  {
    // ... (analogous to getService, elided)
  }
  break;
  case ::android::IBinder::FIRST_CALL_TRANSACTION + 2 /* addService */:
  {
    // ... (elided)
  }
  break;
  }

  return _aidl_ret_status;
}

The core line is `::android::binder::Status _aidl_status(getService(in_name, &_aidl_return));`, which calls our ServiceManager's getService method. This is what actually answers the client's getService request:

Status ServiceManager::getService(const std::string& name, sp<IBinder>* outBinder) {
    *outBinder = tryGetService(name, true);
    // returns ok regardless of result for legacy reasons
    return Status::ok();
}

sp<IBinder> ServiceManager::tryGetService(const std::
