Redis source code study: Is Redis really single-threaded, as the legend goes?

Posted by 看,未来


Redis's threading model

It really was single-threaded once, and the reason was mainly performance. By Redis 6 it had become clear that, under heavy load, reading and writing network I/O data eats up most of the CPU time during execution and is one of Redis's main performance bottlenecks. So I/O threads were introduced: the reads and writes of different clients' I/O data are distributed across different I/O threads for processing.

The number of I/O threads can be set with the io-threads config option.
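For example, a minimal redis.conf fragment (note that the count includes the main thread, so io-threads 4 spawns three extra worker threads):

```conf
# redis.conf (Redis 6+): enable threaded I/O with 4 threads total
io-threads 4
# Optionally move reads and protocol parsing onto the I/O threads as well
io-threads-do-reads yes
```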

Why this design makes sense:
1. Redis's bottleneck is not data processing but network I/O.
2. A single thread keeps data operations simple.
3. Multiple threads can bring context switches, resource races, deadlocks, and so on.


Redis defines the following variables in networking.c:

pthread_t io_threads[IO_THREADS_MAX_NUM];		/* Thread IDs of all I/O threads */
pthread_mutex_t io_threads_mutex[IO_THREADS_MAX_NUM];		/* Used to start/stop the I/O threads */
_Atomic unsigned long io_threads_pending[IO_THREADS_MAX_NUM];		/* Clients pending per I/O thread */
int io_threads_active;  /* Are the threads currently spinning waiting I/O? */
int io_threads_op;      /* IO_THREADS_OP_WRITE or IO_THREADS_OP_READ. */

/* This is the list of clients each thread will serve when threaded I/O is
 * used. We spawn io_threads_num-1 threads, since one is the main thread
 * itself. */
list *io_threads_list[IO_THREADS_MAX_NUM];	/* Per-thread client queue */

Looks like I'll also have to fill in the code for how Redis handles clients later, or this won't be complete.

When Redis starts, it calls the initThreadedIO function to create the I/O threads, which are inactive by default.


Request parsing

Redis assumes that threading the read side has little performance impact, so by default I/O reads run on a single thread.
To enable threaded reads, change the config option: io-threads-do-reads yes

The function below assigns the clients with pending reads to the I/O threads, which then read and parse the request data:

/* When threaded I/O is also enabled for the reading + parsing side, the
 * readable handler will just put normal clients into a queue of clients to
 * process (instead of serving them synchronously). This function runs
 * the queue using the I/O threads, and process them in order to accumulate
 * the reads in the buffers, and also parse the first command available
 * rendering it in the client structures. */
int handleClientsWithPendingReadsUsingThreads(void) {
    if (!io_threads_active || !server.io_threads_do_reads) return 0;
    int processed = listLength(server.clients_pending_read);
    if (processed == 0) return 0;

    if (tio_debug) printf("%d TOTAL READ pending clients\n", processed);

    /* Distribute the clients across N different lists. */
    /* After this split, each thread only processes the clients on its own
     * queue; the data is partitioned and the threads run independently of
     * one another. */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_read,&li);
    int item_id = 0;
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id],c);
        item_id++;
    }

    /* Give the start condition to the waiting threads, by setting the
     * start condition atomic var. */
    io_threads_op = IO_THREADS_OP_READ;
    for (int j = 1; j < server.io_threads_num; j++) {
        int count = listLength(io_threads_list[j]);
        io_threads_pending[j] = count;
    }

    /* Also use the main thread to process a slice of clients. */
    listRewind(io_threads_list[0],&li);
    while((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        readQueryFromClient(c->conn);
    }
    listEmpty(io_threads_list[0]);

    /* Wait for all the other threads to end their work. */
    while(1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }
    if (tio_debug) printf("I/O READ All threads finshed\n");

    /* Run the list of clients again to process the new buffers. */
    /* Once every client's data has been read and parsed, the main thread
     * walks all pending clients and executes their commands. */
    while(listLength(server.clients_pending_read)) {
        ln = listFirst(server.clients_pending_read);
        client *c = listNodeValue(ln);
        c->flags &= ~CLIENT_PENDING_READ;
        listDelNode(server.clients_pending_read,ln);

        if (c->flags & CLIENT_PENDING_COMMAND) {
            c->flags &= ~CLIENT_PENDING_COMMAND;
            if (processCommandAndResetClient(c) == C_ERR) {
                /* If the client is no longer valid, we avoid
                 * processing the client later. So we just go
                 * to the next. */
                continue;
            }
        }
        processInputBuffer(c);
    }
    return processed;
}


When Redis starts and stops its I/O threads

Does configuring I/O threads mean they are always in use? And how should the CPU be provisioned to match? Both are worth asking. Two conditions must hold before the I/O threads actually run:

1. Threaded I/O must be enabled in the config file (io-threads set greater than 1).
2. The number of pending clients must be at least twice the configured number of I/O threads.

When there is no work, the I/O threads busy-wait. To keep them from competing with the main thread for CPU, Redis recommends running on a machine with at least 4 cores and keeping the I/O thread count below the core count (overclocking aside).


How Redis executes a command

The RESP protocol

RESP can serialize the following data types: integers, errors, simple strings, bulk strings, and arrays.

Good to know, though I still prefer Protocol Buffers.


Command dispatch

After the command request is parsed, the function below is called to process it:

/* This function is called every time, in the client structure 'c', there is
 * more query buffer to process, because we read more data from the socket
 * or because a client was blocked and later reactivated, so there could be
 * pending query buffer, already representing a full command, to process. */
void processInputBuffer(client *c) {
    /* Keep processing while there is something in the input buffer */
    while(c->qb_pos < sdslen(c->querybuf)) {
        /* Return if clients are paused. */
        if (!(c->flags & CLIENT_SLAVE) && clientsArePaused()) break;

        /* Immediately abort if the client is in the middle of something. */
        if (c->flags & CLIENT_BLOCKED) break;

        /* Don't process more buffers from clients that have already pending
         * commands to execute in c->argv. */
        if (c->flags & CLIENT_PENDING_COMMAND) break;

        /* Don't process input from the master while there is a busy script
         * condition on the slave. We want just to accumulate the replication
         * stream (instead of replying -BUSY like we do with other clients) and
         * later resume the processing. */
        if (server.lua_timedout && c->flags & CLIENT_MASTER) break;

        /* CLIENT_CLOSE_AFTER_REPLY closes the connection once the reply is
         * written to the client. Make sure to not let the reply grow after
         * this flag has been set (i.e. don't process more commands).
         *
         * The same applies for clients we want to terminate ASAP. */
        if (c->flags & (CLIENT_CLOSE_AFTER_REPLY|CLIENT_CLOSE_ASAP)) break;

        /* Determine request type when unknown. */
        if (!c->reqtype) {
            if (c->querybuf[c->qb_pos] == '*') {
                c->reqtype = PROTO_REQ_MULTIBULK;
            } else {
                c->reqtype = PROTO_REQ_INLINE;
            }
        }

        if (c->reqtype == PROTO_REQ_INLINE) {
            if (processInlineBuffer(c) != C_OK) break;
            /* If the Gopher mode and we got zero or one argument, process
             * the request in Gopher mode. */
            if (server.gopher_enabled &&
                ((c->argc == 1 && ((char*)(c->argv[0]->ptr))[0] == '/') ||
                  c->argc == 0))
            {
                processGopherRequest(c);
                resetClient(c);
                c->flags |= CLIENT_CLOSE_AFTER_REPLY;
                break;
            }
        } else if (c->reqtype == PROTO_REQ_MULTIBULK) {
            if (processMultibulkBuffer(c) != C_OK) break;
        } else {
            serverPanic("Unknown request type");
        }

        /* Multibulk processing could see a <= 0 length. */
        if (c->argc == 0) {
            resetClient(c);
        } else {
            /* If we are in the context of an I/O thread, we can't really
             * execute the command here. All we can do is to flag the client
             * as one that needs to process the command. */
            if (c->flags & CLIENT_PENDING_READ) {
                c->flags |= CLIENT_PENDING_COMMAND;
                break;
            }

            /* We are finally ready to execute the command. */
            if (processCommandAndResetClient(c) == C_ERR) {
                /* If the client is no longer valid, we avoid exiting this
                 * loop and trimming the client buffer later. So we return
                 * ASAP in that case. */
                return;
            }
        }
    }

    /* Trim to pos */
    if (c->qb_pos) {
        sdsrange(c->querybuf,c->qb_pos,-1);
        c->qb_pos = 0;
    }
}

qb_pos is the latest read position in the query buffer; the loop keeps running while this position is less than the length of the buffered data.

The processMultibulkBuffer function parses the request payload out of the query buffer, extracting the command and its arguments:


/* Process the query buffer for client 'c', setting up the client argument
 * vector for command execution. Returns C_OK if after running the function
 * the client has a well-formed ready to be processed command, otherwise
 * C_ERR if there is still to read more buffer to get the full command.
 * The function also returns C_ERR when there is a protocol error: in such a
 * case the client structure is setup to reply with the error and close
 * the connection.
 *
 * This function is called if processInputBuffer() detects that the next
 * command is in RESP format, so the first byte in the command is found
 * to be '*'. Otherwise for inline commands processInlineBuffer() is called. */
int processMultibulkBuffer(client *c) {
    char *newline = NULL;
    int ok;
    long long ll;

    if (c->multibulklen == 0) {
        /* The client should have been reset */
        serverAssertWithInfo(c,NULL,c->argc == 0);

        /* Multi bulk length cannot be read without a \r\n */
        newline = strchr(c->querybuf+c->qb_pos,'\r');
        if (newline == NULL) {
            if (sdslen(c->querybuf)-c->qb_pos > PROTO_INLINE_MAX_SIZE) {
                addReplyError(c,"Protocol error: too big mbulk count string");
                setProtocolError("too big mbulk count string",c);
            }
            return C_ERR;
        }

        /* Buffer should also contain \n */
        if (newline-(c->querybuf+c->qb_pos) > (ssize_t)(sdslen(c->querybuf)-c->qb_pos-2))
            return C_ERR;

        /* We know for sure there is a whole line since newline != NULL,
         * so go ahead and find out the multi bulk length. */
        serverAssertWithInfo(c,NULL,c->querybuf[c->qb_pos] == '*');
        ok = string2ll(c->querybuf+1+c->qb_pos,newline-(c->querybuf+1+c->qb_pos),&ll);
        if (!ok || ll > 1024*1024) {
            addReplyError(c,"Protocol error: invalid multibulk length");
            setProtocolError("invalid mbulk count",c);
            return C_ERR;
        }

        c->qb_pos = (newline-c->querybuf)+2;

        if (ll <= 0) return C_OK;

        c->multibulklen = ll;

        /* Setup argv array on client structure */
        if (c->argv) zfree(c->argv);
        c->argv = zmalloc(sizeof(robj*)*c->multibulklen);
    }

    serverAssertWithInfo(c,NULL,c->multibulklen > 0);
    while(c->multibulklen) {
        /* Read bulk length if unknown */
        if (c->bulklen == -1) {
            newline = strchr(c->querybuf+c->qb_pos,'\r');
            if (newline == NULL) {
                if (sdslen(c->querybuf)-c->qb_pos > PROTO_INLINE_MAX_SIZE) {
                    addReplyError(c,
                        "Protocol error: too big bulk count string");
                    setProtocolError("too big bulk count string",c);
                    return C_ERR;
                }
                break;
            }

            /* Buffer should also contain \n */
            if (newline-(c->querybuf+c->qb_pos) > (ssize_t)(sdslen(c->querybuf)-c->qb_pos-2))
                break;

            if (c->querybuf[c->qb_pos] != '$') {
                addReplyErrorFormat(c,
                    "Protocol error: expected '$', got '%c'",
                    c->querybuf[c->qb_pos]);
                setProtocolError("expected $ but got something else",c);
                return C_ERR;
            }

            ok = string2ll(c->querybuf+c->qb_pos+1,newline-(c->querybuf+c->qb_pos+1),&ll);
            if (!ok || ll < 0 || ll > server.proto_max_bulk_len) {
                addReplyError(c,"Protocol error: invalid bulk length");
                setProtocolError("invalid bulk length",c);
                return C_ERR;
            }

            c->qb_pos = newline-c->querybuf+2;
            if (ll >= PROTO_MBULK_BIG_ARG) {
                /* If we are going to read a large object from network
                 * try to make it likely that it will start at c->querybuf
                 * boundary so that we can optimize object creation
                 * avoiding a large copy of data.
                 *
                 * But only when the data we have not parsed is less than
                 * or equal to ll+2. If the data length is greater than
                 * ll+2, trimming querybuf is just a waste of time, because
                 * at this time the querybuf contains not only our bulk. */
                if (sdslen(c->querybuf)-c->qb_pos <= (size_t)ll+2) {
                    sdsrange(c->querybuf,c->qb_pos,-1);
                    c->qb_pos = 0;
                    /* Hint the sds library about the amount of bytes this string is
                     * going to contain. */
                    c->querybuf = sdsMakeRoomFor(c->querybuf,ll+2);
                }
            }
            c->bulklen = ll;
        }

        /* Read bulk argument */
        if (sdslen(c->querybuf)-c->qb_pos < (size_t)(c->bulklen+2)) {
            /* Not enough data (+2 == trailing \r\n) */
            break;
        } else {
            /* Optimization: if the buffer contains JUST our bulk element
             * instead of creating a new object by *copying* the sds we
             * just use the current sds string. */
            if (c->qb_pos == 0 &&
                c->bulklen >= PROTO_MBULK_BIG_ARG &&
                sdslen(c->querybuf) == (size_t)(c->bulklen+2))
            {
                c->argv[c->argc++] = createObject(OBJ_STRING,c->querybuf);
                sdsIncrLen(c->querybuf,-2); /* remove CRLF */
                /* Assume that if we saw a fat argument we'll see another one
                 * likely... */
                c->querybuf = sdsnewlen(SDS_NOINIT,c->bulklen+2);
                sdsclear(c->querybuf);
            } else {
                c->argv[c->argc++] =
                    createStringObject(c->querybuf+c->qb_pos,c->bulklen);
                c->qb_pos += c->bulklen+2;
            }
            c->bulklen = -1;
            c->multibulklen--;
        }
    }

    /* We're done when c->multibulk == 0 */
    if (c->multibulklen == 0) return C_OK;

    /* Still not ready to process the command */
    return C_ERR;
}
When a client request arrives, it triggers an AE_READABLE event, and the readQueryFromClient function is called to handle it.


Sending the response

The client struct defines two reply buffers:
a fixed 16KB character array, and a linked list of reply objects:

char buf[PROTO_REPLY_CHUNK_BYTES];
list *reply;  /* List of reply objects to send to the client. */

Redis first tries to write the reply into client.buf; if it does not fit there, the reply goes into client.reply.


Executing the command

After all of the above, the command and its arguments are stored in client.argv.

The processCommandAndResetClient function calls processCommand to execute the command, and after execution calls commandProcessed for the follow-up work.
