Android Deep Dive — Framework Core: The Service Manager Daemon, Binder's Object Manager, and Its Own Proxy Object

Posted by CrazyMo_


Outline

Introduction

I. Overview of the Service Manager Daemon

The Service Manager daemon runs in user space and is one of the core pieces of Binder IPC, acting as the context manager for inter-process communication. Besides managing the system's Binder Service components, it hands out to Client components the Binder reference object for a given Binder service, looked up by the name under which the service registered. Service Manager is therefore itself a standalone Server process, and other processes communicate with it through Binder IPC as well.

Service Manager's executable file: /system/bin/servicemanager

Service Manager's process name: servicemanager
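Conceptually, what Service Manager maintains is just a table mapping service names to Binder handles (the real implementation keeps a C `svcinfo` list in service_manager.c). A minimal sketch, with illustrative types rather than the real ones:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical stand-in for the svcinfo list kept by service_manager.c:
// each registered service is just a (name -> driver handle) entry.
struct ServiceRegistry {
    std::map<std::string, uint32_t> services;

    // What a Server does at registration time (addService).
    bool add(const std::string& name, uint32_t handle) {
        return services.emplace(name, handle).second;
    }

    // What a Client asks for (checkService); 0 is never a valid
    // service handle here because handle 0 is Service Manager itself.
    uint32_t check(const std::string& name) const {
        auto it = services.find(name);
        return it == services.end() ? 0u : it->second;
    }
};
```

This is only the shape of the bookkeeping; the real registry also stores SELinux contexts and death-notification state per entry.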


The basic flow of the Service Manager process within the Binder framework is shown below:

(figure unavailable: the original image link is broken)

II. Starting Service Manager (the 0 Handle)

/* the one magic handle */
#define BINDER_SERVICE_MANAGER 0U

#define SVC_MGR_NAME "android.os.IServiceManager"

Service Manager is a special Service component. Beyond its special role, its Binder local object is a virtual object whose address is fixed at 0, and the handle of its Binder reference object inside the Binder driver is also 0. In other words, in the Binder driver, in Server processes, and in Client processes, handle 0 always refers to Service Manager's Binder local object.

1. Startup via the servicemanager.rc File

As covered in my earlier articles on system startup, Service Manager is started automatically by the init process: while booting, init parses init.rc (and the .rc files it imports) and launches the services configured there, which ensures these core processes are available as early as possible.

aosp\\frameworks\\native\\cmds\\servicemanager\\servicemanager.rc

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system readproc
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    writepid /dev/cpuset/system-background/tasks

The script above means: run the program file /system/bin/servicemanager as a service.

(figure unavailable: the original image link is broken)

This creates a process named servicemanager:

(figure unavailable: the original image link is broken)

The service runs with system user privileges. critical marks Service Manager as a core service that must not exit: if it dies, the system restarts it, and if a critical service exits more than 4 times within 4 minutes, the device automatically reboots into recovery mode. The onrestart lines mean that whenever a Service Manager restart occurs, all of the listed services are restarted along with it.
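The critical-service policy described above is essentially a sliding-window crash counter. A hedged sketch of that policy follows; the class name and exact bookkeeping are illustrative (the real logic lives inside init's service handling), but the "more than 4 crashes within 4 minutes" rule matches the description:

```cpp
#include <cstdint>
#include <deque>

// Sketch of init's "critical" policy: if a critical service crashes more
// than maxCrashes times within windowSec seconds, give up restarting it
// and reboot into recovery.
class CrashWindow {
public:
    explicit CrashWindow(int maxCrashes = 4, int64_t windowSec = 240)
        : maxCrashes_(maxCrashes), windowSec_(windowSec) {}

    // Record a crash at time nowSec; returns true if this crash
    // pushes the service over the limit.
    bool onCrash(int64_t nowSec) {
        times_.push_back(nowSec);
        // Drop crashes that have aged out of the window.
        while (!times_.empty() && nowSec - times_.front() > windowSec_)
            times_.pop_front();
        return static_cast<int>(times_.size()) > maxCrashes_;
    }

private:
    int maxCrashes_;
    int64_t windowSec_;
    std::deque<int64_t> times_;
};
```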

2. service_manager.c#main: Initializing the servicemanager Process

\\frameworks\\native\\cmds\\servicemanager\\service_manager.c

Once the servicemanager.rc entry above is triggered, running the executable boils down to running its main function, which mainly does the following:

int main()
{
    struct binder_state *bs;

    // Open /dev/binder and map a 128 KB buffer
    bs = binder_open(128*1024);
    if (!bs) {
        return -1;
    }

    // Register this process as the Binder context manager
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    // SELinux setup for Binder access control
    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    ...
    // Enter the loop, dispatching requests to svcmgr_handler
    binder_loop(bs, svcmgr_handler);
    return 0;
}

2.1 binder.c#binder_open: Opening the Binder Device File and Mapping Memory

\\frameworks\\native\\cmds\\servicemanager\\binder.c

binder_open first uses the open system call to open /dev/binder. When that device file is opened, the driver's binder_open function runs and creates a binder_proc structure describing the process's Binder IPC state. It then calls mmap to map /dev/binder into the process's address space with a mapsize of 128*1024 bytes (128 KB), i.e. it asks the Binder driver to allocate a 128 KB kernel buffer for the process. The start address and size of the mapping are stored in the mapped and mapsize members of a binder_state structure, which is returned to main.

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

2.2 binder.c#binder_become_context_manager: Making Service Manager the Binder IPC Context Manager

The Service Manager daemon is, at bottom, just a process. To become the Binder IPC context manager, it must register itself with the Binder driver via the BINDER_SET_CONTEXT_MGR I/O control command.

\\frameworks\\native\\cmds\\servicemanager\\binder.c

int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

BINDER_SET_CONTEXT_MGR is a macro taking a single integer argument (__s32) that stands for the address of the Binder local object corresponding to Service Manager. Since 0 is passed here, the address of Service Manager's local object is 0, which is what makes it the 0 handle:

#define BINDER_SET_CONTEXT_MGR        _IOW('b', 7, __s32)

Ultimately this goes through the ioctl system call to talk to the Binder driver, which then handles the BINDER_SET_CONTEXT_MGR command; essentially all communication with the Binder driver goes through ioctl:
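For reference, the command value itself follows the standard Linux ioctl number layout (two direction bits, 14 size bits, an 8-bit type, an 8-bit command number). The snippet below re-derives _IOW('b', 7, __s32) by hand rather than using the kernel header; the resulting value is the well-known 0x40046207:

```cpp
#include <cstdint>

// Re-implementation of the kernel's _IOW encoding for illustration only.
constexpr uint32_t IOC_WRITE = 1u;  // _IOC_WRITE direction bit

constexpr uint32_t ioc(uint32_t dir, uint32_t type, uint32_t nr, uint32_t size) {
    // dir:2 bits | size:14 bits | type:8 bits | nr:8 bits
    return (dir << 30) | (size << 16) | (type << 8) | nr;
}

// _IOW('b', 7, __s32): sizeof(__s32) == 4
constexpr uint32_t BINDER_SET_CONTEXT_MGR = ioc(IOC_WRITE, 'b', 7, 4);
```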

static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    struct binder_context *context = proc->context;
    struct binder_node *new_node;

    mutex_lock(&context->context_mgr_node_lock);
    ...
    ret = security_binder_set_context_mgr(proc->tsk);
    new_node = binder_new_node(proc, NULL);
    binder_node_lock(new_node);
    new_node->local_weak_refs++;
    new_node->local_strong_refs++;
    new_node->has_strong_ref = 1;
    new_node->has_weak_ref = 1;
    context->binder_context_mgr_node = new_node;
    binder_node_unlock(new_node);
    binder_put_node(new_node);
out:
    mutex_unlock(&context->context_mgr_node_lock);
    return ret;
}

Inside binder_new_node, the binder_node structure for the Binder entity object is created, along with its work and todo queues.
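The reference bookkeeping performed on that freshly created node can be mirrored in a few lines. This sketch borrows binder_node's field names but drops all kernel locking and real work-item semantics; it is an illustration, not the driver code:

```cpp
#include <list>

struct BinderWork { int type; };   // placeholder for struct binder_work

// Minimal user-space mirror of the fields binder_ioctl_set_ctx_mgr touches.
struct BinderNode {
    int local_weak_refs = 0;
    int local_strong_refs = 0;
    bool has_strong_ref = false;
    bool has_weak_ref = false;
    std::list<BinderWork> todo;    // pending work items for this node
};

// Mirrors the ref bookkeeping shown above: pin the context-manager node
// with one local strong and one local weak reference.
BinderNode* make_context_manager_node() {
    auto* node = new BinderNode();
    node->local_weak_refs++;
    node->local_strong_refs++;
    node->has_strong_ref = true;
    node->has_weak_ref = true;
    return node;
}
```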

2.3 SELinux-Related Configuration

This part relates to the Binder IPC security mechanism.

2.4 binder.c#binder_loop: Looping as a Server to Listen for and Handle Client IPC Requests

struct binder_state {
    int fd;           // file descriptor of the opened /dev/binder
    void *mapped;     // start address of the device-file mapping in the process
    unsigned mapsize; // size of the mapped region
};

Service Manager is a core Android service and must keep running in the background, ready to serve Server and Client components at any time, so it needs an infinite loop to listen for and respond to IPC requests. When binder_loop runs, it first registers the current thread as a Binder thread via an internal protocol command (the Binder driver only dispatches IPC requests to registered Binder threads), then repeatedly issues the BINDER_WRITE_READ command to poll the driver for new IPC requests. When there is work, it is handed to binder_parse; otherwise the thread sleeps inside the Binder driver (in binder_thread_read) until a new IPC request arrives. In short: first write the BC_ENTER_LOOPER command to enter the looper state, then loop.

/**
 * bs  : points to the state created after binder_open
 * func: points to service_manager.c#svcmgr_handler, which handles IPC requests
 */
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    // A thread must register itself as a Binder thread via this internal
    // protocol before the driver will dispatch IPC requests to it
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t)); // set the thread's looper state

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // Keep reading from the driver; the thread sleeps when there is no work
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            break;
        }
        if (res < 0) {
            break;
        }
    }
}
    

Passing the BINDER_WRITE_READ command into ioctl leads into the driver's binder_thread_write function:

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    ...
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;

        switch (cmd) {
        case BC_INCREFS:
        case BC_ACQUIRE:
        case BC_RELEASE:
        case BC_DECREFS: {
            bool strong = cmd == BC_ACQUIRE || cmd == BC_RELEASE;
            bool increment = cmd == BC_INCREFS || cmd == BC_ACQUIRE;
            struct binder_ref_data rdata;

            if (increment && !target) {
                struct binder_node *ctx_mgr_node;
                mutex_lock(&context->context_mgr_node_lock);
                ctx_mgr_node = context->binder_context_mgr_node;
                if (ctx_mgr_node)
                    ret = binder_inc_ref_for_node(
                            proc, ctx_mgr_node,
                            strong, NULL, &rdata);
                mutex_unlock(&context->context_mgr_node_lock);
            }
            ...
            break;
        }
        case BC_INCREFS_DONE:
        case BC_ACQUIRE_DONE: {
            binder_uintptr_t node_ptr;
            binder_uintptr_t cookie;
            struct binder_node *node;
            bool free_node;

            ptr += sizeof(binder_uintptr_t);
            if (get_user(cookie, (binder_uintptr_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(binder_uintptr_t);
            node = binder_get_node(proc, node_ptr);
            binder_node_inner_lock(node);
            if (cmd == BC_ACQUIRE_DONE) {
                if (node->pending_strong_ref == 0) {
                    binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
                        proc->pid, thread->pid,
                        node->debug_id);
                    binder_node_inner_unlock(node);
                    binder_put_node(node);
                    break;
                }
                node->pending_strong_ref = 0;
            } else {
                if (node->pending_weak_ref == 0) {
                    ...
                    break;
                }
                node->pending_weak_ref = 0;
            }
            free_node = binder_dec_node_nilocked(node,
                    cmd == BC_ACQUIRE_DONE, 0);
            binder_node_inner_unlock(node);
            binder_put_node(node);
            break;
        }
        case BC_FREE_BUFFER: {
            binder_uintptr_t data_ptr;
            struct binder_buffer *buffer;
            ...
            buffer = binder_alloc_prepare_to_free(&proc->alloc,
                                  data_ptr);
            break;
        }
        case BC_TRANSACTION_SG:
        case BC_REPLY_SG: {
            ...
            break;
        }
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
        case BC_REGISTER_LOOPER:
            binder_inner_proc_lock(proc);
            if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called after BC_ENTER_LOOPER\n",
                    proc->pid, thread->pid);
            } else if (proc->requested_threads == 0) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called without request\n",
                    proc->pid, thread->pid);
            } else {
                proc->requested_threads--;
                proc->requested_threads_started++;
            }
            thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
            binder_inner_proc_unlock(proc);
            break;
        case BC_ENTER_LOOPER:
            if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
                    proc->pid, thread->pid);
            }
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            thread->looper |= BINDER_LOOPER_STATE_EXITED;
            break;
        case BC_REQUEST_DEATH_NOTIFICATION:
        case BC_CLEAR_DEATH_NOTIFICATION: {
            uint32_t target;
            binder_uintptr_t cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death = NULL;

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                binder_stats_created(BINDER_STAT_DEATH);
                ...
            }
            binder_node_unlock(ref->node);
            binder_proc_unlock(proc);
            break;
        }
        case BC_DEAD_BINDER_DONE: {
            struct binder_work *w;
            binder_uintptr_t cookie;
            struct binder_ref_death *death = NULL;
            ...
            break;
        }
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

The main thread then enters the driver's binder_thread_read function, waiting for and processing work items arriving on its own todo queue or its process's todo queue. At this point, Service Manager's startup is complete.
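Stripped of driver specifics, the whole binder_loop / binder_parse arrangement is a read-and-dispatch loop over a pending-work queue. A driver-free analogue (the std::queue stands in for the driver's todo list; in the real system the thread would block in binder_thread_read instead of returning when the queue is empty):

```cpp
#include <cstdint>
#include <functional>
#include <queue>

// One handler callback per command, playing the role of binder_parse's func.
using Handler = std::function<void(uint32_t cmd)>;

// One BINDER_WRITE_READ round-trip: drain everything the "driver" has
// and dispatch each command to the handler.
void loop_once(std::queue<uint32_t>& pending, const Handler& handler) {
    while (!pending.empty()) {
        handler(pending.front());
        pending.pop();
    }
    // A real Binder thread would now sleep in binder_thread_read
    // until new work arrives; here we simply return.
}
```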

III. Obtaining Service Manager's Own Proxy Object

As discussed earlier, a Service component registers itself with Service Manager at startup so that Clients can obtain its proxy object through Service Manager. But Service Manager is itself a special Service component, and obtaining its proxy object differs slightly from obtaining that of an ordinary Service component.

1. The Service Manager Family

Service Manager is itself a Binder service, as its family (class hierarchy) diagram shows:

(figure unavailable: the original image link is broken)

2. IServiceManager

\\frameworks\\native\\include\\binder\\IServiceManager.h

\\frameworks\\native\\libs\\binder\\IServiceManager.cpp

IServiceManager inherits from IInterface and mainly defines four member functions: getService and checkService obtain a Service component's proxy object, addService registers a Service component, and listServices enumerates the registered Service components.

class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager);

    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service,
                                            bool allowIsolated = false) = 0;
    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};

sp<IServiceManager> defaultServiceManager();

template<typename INTERFACE>
status_t getService(const String16& name, sp<INTERFACE>* outService)
{
    const sp<IServiceManager> sm = defaultServiceManager();
    if (sm != NULL) {
        *outService = interface_cast<INTERFACE>(sm->getService(name));
        if ((*outService) != NULL) return NO_ERROR;
    }
    return NAME_NOT_FOUND;
}

bool checkCallingPermission(const String16& permission);
bool checkCallingPermission(const String16& permission,
                           int32_t* outPid, int32_t* outUid);
bool checkPermission(const String16& permission, pid_t pid, uid_t uid);

For an ordinary Service component, a Client process that wants the Service's proxy object must first obtain a handle from the Binder driver, then create a Binder proxy object from that handle, and finally wrap that Binder proxy object in a proxy implementing the specific interface. Because Service Manager's handle is always 0 (the 0 handle), no prior round trip to the Binder driver is needed; calling defaultServiceManager is enough.
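Those three steps (handle → transport-level proxy → typed interface proxy) can be sketched with stand-in types; none of the class names below are the real libbinder ones, they only mark which role each layer plays:

```cpp
#include <cstdint>
#include <memory>

// Plays the part of BpBinder: owns nothing but the driver handle and
// would marshal transact() calls through it.
struct TransportProxy {
    explicit TransportProxy(int32_t h) : handle(h) {}
    int32_t handle;
};

// Plays the part of BpServiceManager (or any BpXxx): implements the
// interface's business functions on top of the transport proxy.
struct TypedProxy {
    explicit TypedProxy(std::shared_ptr<TransportProxy> r)
        : remote(std::move(r)) {}
    std::shared_ptr<TransportProxy> remote;
};

// For Service Manager no driver lookup is needed: the handle is always 0.
TypedProxy make_service_manager_proxy() {
    return TypedProxy(std::make_shared<TransportProxy>(0));
}
```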

3. BpServiceManager

BpServiceManager inherits from BpInterface<IServiceManager> and is the process's single proxy object for Service Manager. It implements IServiceManager's business functions, communicating through mRemote, the BpBinder assigned in its base class, as its transport.

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 100; n++) {
            if (n > 0) {
                ALOGI("Waiting for service %s...", String8(name).string());
                usleep(50000);
            }
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
        }
        return NULL;
    }

    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

    virtual Vector<String16> listServices()
    {
        Vector<String16> res;
        int n = 0;

        for (;;) {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeInt32(n++);
            status_t err = remote()->transact(LIST_SERVICES_TRANSACTION, data, &reply);
            if (err != NO_ERROR)
                break;
            res.add(reply.readString16());
        }
        return res;
    }
};

4. IServiceManager#defaultServiceManager

// For ProcessState.cpp
extern Mutex gProcessMutex;
extern sp<ProcessState> gProcess;

// For IServiceManager.cpp
extern Mutex gDefaultServiceManagerLock;
extern sp<IServiceManager> gDefaultServiceManager;
extern sp<IPermissionController> gPermissionController;

The global variable gDefaultServiceManager is a strong pointer of type IServiceManager pointing at the process's BpServiceManager object. Together with the gDefaultServiceManagerLock mutex, it guarantees the Service Manager proxy object is unique within the process, i.e. BpServiceManager is a singleton.

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    return gDefaultServiceManager;
}

It first checks whether gDefaultServiceManager is non-NULL; if so, the Binder library has already created a Service Manager proxy object for this process and it is returned directly. Otherwise, a Service Manager proxy object is created, stored in gDefaultServiceManager, and then returned. Creating that proxy object is a three-step sequence, which we analyze next:

4.1 ProcessState::self()

The global gProcessMutex is a mutex, and gProcess points to the process's ProcessState object. Thanks to that lock, the ProcessState object is unique within a process, another classic singleton design. self() checks whether gProcess is non-NULL; if so, the Binder library has already created a ProcessState object for this process and returns it directly; otherwise, a ProcessState object is created, stored in gProcess, and then returned.

#define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space
        // to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            close(mDriverFD);
            mDriverFD = -1;
        }
    }
}

When constructing ProcessState, open_driver first opens the Binder device file /dev/binder and saves the resulting file descriptor in the mDriverFD member:

#define DEFAULT_MAX_BINDER_THREADS 15

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        // Query the Binder driver's protocol version via ioctl
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        // Tell the driver it may ask this process to spawn at most
        // 15 Binder threads to handle IPC requests
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

The constructor then calls mmap to map the Binder device file into the process's address space, i.e. it asks the Binder driver to allocate a kernel buffer for the process with a default size of 1016 KB.
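The 1016 KB figure follows directly from the BINDER_VM_SIZE definition above: 1 MB minus two 4 KB pages. Re-derived:

```cpp
// Same arithmetic as the BINDER_VM_SIZE macro in ProcessState.cpp:
// 1 MB minus two 4 KB pages.
constexpr long kBinderVmSize   = (1 * 1024 * 1024) - (4096 * 2);
constexpr long kBinderVmSizeKb = kBinderVmSize / 1024;
```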

4.2 ProcessState::self()->getContextObject(NULL)

Once ProcessState::self() returns the process's ProcessState object, execution continues into getContextObject:

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

The handle value passed in is 0, meaning the Binder proxy object to create is Service Manager's proxy object:

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    // Look up the entry for this handle; if none exists yet, a fresh
    // entry is created and returned, to be filled in below
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one,
        // OR we are unable to acquire a weak reference on this current one.
        IBinder* b = e->binder; // b == NULL: no proxy exists yet for this handle
        // attemptIncWeak tries to bump the weak count; success means the old
        // proxy is still alive, failure means it may have been destroyed
        // and a new proxy must be created for the handle
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Handle 0 is Service Manager: ping it first to make sure
                // the context manager is alive before handing out a proxy
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }
            ...
        }
    }
    ...
}
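Ignoring the reference-counting details, getStrongProxyForHandle boils down to lazy per-handle caching: look up the handle's slot, reuse a live proxy if there is one, otherwise create and cache a new one. A minimal sketch with an illustrative Proxy type standing in for BpBinder:

```cpp
#include <cstdint>
#include <unordered_map>

struct Proxy { int32_t handle; };  // illustrative stand-in for BpBinder

// Lazy per-handle proxy cache, mirroring the lookup/create split of
// getStrongProxyForHandle (no ref-counting or liveness checks here).
class ProxyCache {
public:
    Proxy* get(int32_t handle) {
        auto it = cache_.find(handle);
        if (it != cache_.end()) return it->second;  // live proxy: reuse it
        Proxy* p = new Proxy{handle};               // else create and cache
        cache_[handle] = p;
        return p;
    }

private:
    std::unordered_map<int32_t, Proxy*> cache_;    // sketch: entries never freed
};
```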
