Table of Contents

  • 1. Redis Cluster
  • 2. Key Data Structures
  • 2.1 Cluster state: clusterState
  • 2.2 Cluster node: clusterNode
  • 2.3 Cluster link: clusterLink
  • 2.4 Cluster message: clusterMsg
  • 3. Cluster Establishment: Source Code Analysis
  • 3.1 Cluster node initialization
  • 3.2 CLUSTER MEET: inviting a node to join the cluster
  • 3.3 Processing on the invited node
  • 3.4 The inviting node sends a PING message once the connection is up
  • 3.5 The invited node processes the PING message once the connection is up


1. Redis Cluster

As covered in the earlier article RedisCluster 集群实现原理 (Redis Cluster implementation principles), Redis 3.0 and later provide Redis Cluster as the distributed solution, and the cluster network is built on the Gossip protocol. The diagram below sketches how a cluster is established; the rough flow is:

  1. Node A invites node B to join the cluster, and nodes A and B establish a connection
  2. Node A invites node C to join the cluster, and nodes A and C establish a connection. While they communicate, node A sends node B's IP address, port, and other information to node C; node C saves it and uses it to connect to node B, so nodes C and B establish a connection
  3. Node A invites node D to join the cluster, and nodes A and D establish a connection. While they communicate, node A sends the IP addresses, ports, and other information of nodes B and C to node D; node D saves them and uses them to connect to nodes B and C, so node D establishes connections with both

(Figure: schematic of the cluster establishment flow described above)

2. Key Data Structures

2.1 Cluster state: clusterState

The clusterState struct is defined in cluster.h. Every node in the cluster holds one such struct, which describes the state of the whole cluster from that node's point of view. The key fields are:

  1. *myself: a clusterNode pointer; the abstraction of the current node itself within the cluster
  2. currentEpoch: the cluster's current epoch; this value changes when a failover occurs
  3. *nodes: a dict holding the abstractions of the other cluster nodes, keyed by node name, with clusterNode values
  4. *slots[CLUSTER_SLOTS]: the node responsible for each slot; e.g. slots[i] = clusterNode_A means slot i is served by node A (a minimal routing sketch follows the struct; readers unfamiliar with slots can refer to RedisCluster 设计成 16384 个 Slot 的原因, on why Redis Cluster uses 16384 slots)
  5. failover_auth_time: the time at which a slave starts (or started) the failover election
  6. failover_auth_count: the number of votes the slave has received in the election; once it exceeds half of the cluster's master nodes, the slave is promoted to master
  7. failover_auth_rank: the slave's rank in a failover election, computed from its replication offset and ultimately used to determine when the slave starts requesting votes
typedef struct clusterState {
    clusterNode *myself;  /* This node */
    uint64_t currentEpoch;
    int state;            /* CLUSTER_OK, CLUSTER_FAIL, ... */
    int size;             /* Num of master nodes with at least one slot */
    dict *nodes;          /* Hash table of name -> clusterNode structures */
    dict *nodes_black_list; /* Nodes we don't re-add for a few seconds. */
    clusterNode *migrating_slots_to[CLUSTER_SLOTS];
    clusterNode *importing_slots_from[CLUSTER_SLOTS];
    clusterNode *slots[CLUSTER_SLOTS];
    uint64_t slots_keys_count[CLUSTER_SLOTS];
    rax *slots_to_keys;
    /* The following fields are used to take the slave state on elections. */
    mstime_t failover_auth_time; /* Time of previous or next election. */
    int failover_auth_count;    /* Number of votes received so far. */
    int failover_auth_sent;     /* True if we already asked for votes. */
    int failover_auth_rank;     /* This slave rank for current auth request. */
    uint64_t failover_auth_epoch; /* Epoch of the current election. */
    int cant_failover_reason;   /* Why a slave is currently not able to
                                   failover. See the CANT_FAILOVER_* macros. */
    /* Manual failover state in common. */
    mstime_t mf_end;            /* Manual failover time limit (ms unixtime).
                                   It is zero if there is no MF in progress. */
    /* Manual failover state of master. */
    clusterNode *mf_slave;      /* Slave performing the manual failover. */
    /* Manual failover state of slave. */
    long long mf_master_offset; /* Master offset the slave needs to start MF
                                   or zero if still not received. */
    int mf_can_start;           /* If non-zero signal that the manual failover
                                   can start requesting masters vote. */
    /* The following fields are used by masters to take state on elections. */
    uint64_t lastVoteEpoch;     /* Epoch of the last vote granted. */
    int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */
    /* Messages received and sent by type. */
    long long stats_bus_messages_sent[CLUSTERMSG_TYPE_COUNT];
    long long stats_bus_messages_received[CLUSTERMSG_TYPE_COUNT];
    long long stats_pfail_nodes;    /* Number of nodes in PFAIL status,
                                       excluding nodes without address. */
} clusterState;
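To make the slots array concrete, below is a minimal sketch of how a key is routed through server.cluster->slots. It omits the hash-tag handling that the real keyHashSlot() in cluster.c performs, and lookupSlotOwner is a hypothetical name used only for illustration; when the owner is another node, the client is redirected with a -MOVED reply.

clusterNode *lookupSlotOwner(const char *key, int keylen) {
    /* Redis computes the slot as CRC16(key) mod 16384 (see keyHashSlot()) */
    unsigned int slot = crc16(key, keylen) & 16383; /* 16384 slots, 0..16383 */
    return server.cluster->slots[slot];             /* NULL if unassigned */
}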

2.2 Cluster node: clusterNode

The clusterNode struct is defined in cluster.h and describes a single node in the cluster. The key fields are:

  1. name[CLUSTER_NAMELEN]: the node's name
  2. flags: node flags; Redis uses different flag values to record the node's role (e.g. master or slave) and its current state (e.g. online or failing)
  3. configEpoch: the node's current config epoch, used during failover
  4. slots[CLUSTER_SLOTS/8]: a bitmap of the slots this node is responsible for (see the sketch after the struct)
  5. numslots: the total number of slots this node serves
  6. ip[NET_IP_STR_LEN]: the node's IP address
  7. port: the port the node uses for normal command processing
  8. cport: the port the node uses for intra-cluster Gossip communication
  9. *link: a clusterLink struct holding the information needed for the connection to this cluster node
typedef struct clusterNode {
    mstime_t ctime; /* Node object creation time. */
    char name[CLUSTER_NAMELEN]; /* Node name, hex string, sha1-size */
    int flags;      /* CLUSTER_NODE_... */
    uint64_t configEpoch; /* Last configEpoch observed for this node */
    unsigned char slots[CLUSTER_SLOTS/8]; /* slots handled by this node */
    int numslots;   /* Number of slots handled by this node */
    int numslaves;  /* Number of slave nodes, if this is a master */
    struct clusterNode **slaves; /* pointers to slave nodes */
    struct clusterNode *slaveof; /* pointer to the master node. Note that it
                                    may be NULL even if the node is a slave
                                    if we don't have the master node in our
                                    tables. */
    mstime_t ping_sent;      /* Unix time we sent latest ping */
    mstime_t pong_received;  /* Unix time we received the pong */
    mstime_t data_received;  /* Unix time we received any data */
    mstime_t fail_time;      /* Unix time when FAIL flag was set */
    mstime_t voted_time;     /* Last time we voted for a slave of this master */
    mstime_t repl_offset_time;  /* Unix time we received offset for this node */
    mstime_t orphaned_time;     /* Starting time of orphaned master condition */
    long long repl_offset;      /* Last known repl offset for this node. */
    char ip[NET_IP_STR_LEN];  /* Latest known IP address of this node */
    int port;                   /* Latest known clients port of this node */
    int cport;                  /* Latest known cluster port of this node. */
    clusterLink *link;          /* TCP/IP link with this node */
    list *fail_reports;         /* List of nodes signaling this as failing */
} clusterNode;
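The slots field deserves a note: it is a 16384-bit bitmap in which bit i set to 1 means this node serves slot i. Below is a sketch of the bit test, equivalent to what bitmapTestBit()/clusterNodeGetSlotBit() do in cluster.c; nodeHasSlot is a hypothetical name used here for illustration.

static int nodeHasSlot(clusterNode *n, int slot) {
    /* locate the byte holding the bit, then mask the bit within it */
    return (n->slots[slot >> 3] & (1 << (slot & 7))) != 0;
}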

2.3 Cluster link: clusterLink

The clusterLink struct is defined in cluster.h and holds all the information needed to communicate with a remote node (a sketch of its constructor follows the struct).

typedef struct clusterLink {
    mstime_t ctime;             /* Link creation time */
    connection *conn;           /* Connection to remote node */
    sds sndbuf;                 /* Packet send buffer */
    sds rcvbuf;                 /* Packet reception buffer */
    struct clusterNode *node;   /* Node related to this link if any, or NULL */
} clusterLink;
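clusterLink objects are created by createClusterLink() and torn down by freeClusterLink(). A sketch of the constructor, reconstructed from cluster.c (details may differ slightly between versions):

clusterLink *createClusterLink(clusterNode *node) {
    clusterLink *link = zmalloc(sizeof(clusterLink));
    link->ctime = mstime();        /* record creation time for timeouts */
    link->sndbuf = sdsempty();     /* empty send buffer */
    link->rcvbuf = sdsempty();     /* empty receive buffer */
    link->node = node;             /* NULL for inbound links */
    link->conn = NULL;             /* set once the connection is created */
    return link;
}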

2.4 Cluster message: clusterMsg

The clusterMsg struct is defined in cluster.h and describes the messages that cluster nodes exchange with one another. Its source is listed below; the comments are clear enough that no further commentary is needed. (A sketch of the clusterMsgDataGossip entry carried in the ping payload follows the structs.)

union clusterMsgData {
    /* PING, MEET and PONG */
    struct {
        /* Array of N clusterMsgDataGossip structures */
        clusterMsgDataGossip gossip[1];
    } ping;

    /* FAIL */
    struct {
        clusterMsgDataFail about;
    } fail;

    /* PUBLISH */
    struct {
        clusterMsgDataPublish msg;
    } publish;

    /* UPDATE */
    struct {
        clusterMsgDataUpdate nodecfg;
    } update;

    /* MODULE */
    struct {
        clusterMsgModule msg;
    } module;
};
typedef struct {
    char sig[4];        /* Signature "RCmb" (Redis Cluster message bus). */
    uint32_t totlen;    /* Total length of this message */
    uint16_t ver;       /* Protocol version, currently set to 1. */
    uint16_t port;      /* TCP base port number. */
    uint16_t type;      /* Message type */
    uint16_t count;     /* Only used for some kind of messages. */
    uint64_t currentEpoch;  /* The epoch accordingly to the sending node. */
    uint64_t configEpoch;   /* The config epoch if it's a master, or the last
                               epoch advertised by its master if it is a
                               slave. */
    uint64_t offset;    /* Master replication offset if node is a master or
                           processed replication offset if node is a slave. */
    char sender[CLUSTER_NAMELEN]; /* Name of the sender node */
    unsigned char myslots[CLUSTER_SLOTS/8];
    char slaveof[CLUSTER_NAMELEN];
    char myip[NET_IP_STR_LEN];    /* Sender IP, if not all zeroed. */
    char notused1[34];  /* 34 bytes reserved for future usage. */
    uint16_t cport;      /* Sender TCP cluster bus port */
    uint16_t flags;      /* Sender node flags */
    unsigned char state; /* Cluster state from the POV of the sender */
    unsigned char mflags[3]; /* Message flags: CLUSTERMSG_FLAG[012]_... */
    union clusterMsgData data;
} clusterMsg;
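Each entry of the gossip array in the ping payload is a clusterMsgDataGossip describing one other node. Reconstructed (abbreviated) from cluster.h, with field names matching those used by clusterProcessGossipSection() in section 3.5; note that ping_sent/pong_received travel on the wire in seconds and in network byte order:

typedef struct {
    char nodename[CLUSTER_NAMELEN];
    uint32_t ping_sent;         /* last ping time, in seconds */
    uint32_t pong_received;     /* last pong time, in seconds */
    char ip[NET_IP_STR_LEN];    /* IP address last time it was seen */
    uint16_t port;              /* base port last time it was seen */
    uint16_t cport;             /* cluster port last time it was seen */
    uint16_t flags;             /* node->flags copy */
    uint32_t notused1;
} clusterMsgDataGossip;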

3. Cluster Establishment: Source Code Analysis

(Figure: overview of the cluster handshake call flow analyzed in this section)

3.1 Cluster node initialization

Cluster node initialization is triggered when the Redis server starts. The main function is cluster.c#clusterInit(); its key steps are:

  1. Initialize server.cluster, i.e. the clusterState struct, including the cluster node dictionary server.cluster->nodes
  2. Call cluster.c#clusterLoadConfig() to load the settings in the cluster config file
  3. Call cluster.c#createClusterNode() to create, based on the config, this server's own node abstraction in the cluster, and call cluster.c#clusterAddNode() to add it to the cluster node dictionary server.cluster->nodes
  4. Call listenToPort() to open the listening port for cluster messages (the client port plus CLUSTER_PORT_INCR; see the note after the function), and call aeCreateFileEvent() to register clusterAcceptHandler() as the handler for AE_READABLE events on that port
void clusterInit(void) {
    int saveconf = 0;

    server.cluster = zmalloc(sizeof(clusterState));
    server.cluster->myself = NULL;
    server.cluster->currentEpoch = 0;
    server.cluster->state = CLUSTER_FAIL;
    server.cluster->size = 1;
    server.cluster->todo_before_sleep = 0;
    server.cluster->nodes = dictCreate(&clusterNodesDictType,NULL);
    server.cluster->nodes_black_list =
        dictCreate(&clusterNodesBlackListDictType,NULL);
    server.cluster->failover_auth_time = 0;
    server.cluster->failover_auth_count = 0;
    server.cluster->failover_auth_rank = 0;
    server.cluster->failover_auth_epoch = 0;
    server.cluster->cant_failover_reason = CLUSTER_CANT_FAILOVER_NONE;
    server.cluster->lastVoteEpoch = 0;
    for (int i = 0; i < CLUSTERMSG_TYPE_COUNT; i++) {
        server.cluster->stats_bus_messages_sent[i] = 0;
        server.cluster->stats_bus_messages_received[i] = 0;
    }
    server.cluster->stats_pfail_nodes = 0;
    memset(server.cluster->slots,0, sizeof(server.cluster->slots));
    clusterCloseAllSlots();

    /* Lock the cluster config file to make sure every node uses
     * its own nodes.conf. */
    if (clusterLockConfig(server.cluster_configfile) == C_ERR)
        exit(1);

    /* Load or create a new nodes configuration. */
    if (clusterLoadConfig(server.cluster_configfile) == C_ERR) {
        /* No configuration found. We will just use the random name provided
         * by the createClusterNode() function. */
        myself = server.cluster->myself =
            createClusterNode(NULL,CLUSTER_NODE_MYSELF|CLUSTER_NODE_MASTER);
        serverLog(LL_NOTICE,"No cluster configuration found, I'm %.40s",
            myself->name);
        clusterAddNode(myself);
        saveconf = 1;
    }
    if (saveconf) clusterSaveConfigOrDie(1);

    /* We need a listening TCP port for our cluster messaging needs. */
    server.cfd_count = 0;

    /* Port sanity check II
     * The other handshake port check is triggered too late to stop
     * us from trying to use a too-high cluster port number. */
    int port = server.tls_cluster ? server.tls_port : server.port;
    if (port > (65535-CLUSTER_PORT_INCR)) {
        serverLog(LL_WARNING, "Redis port number too high. "
                   "Cluster communication port is 10,000 port "
                   "numbers higher than your Redis port. "
                   "Your Redis port number must be "
                   "lower than 55535.");
        exit(1);
    }
    if (listenToPort(port+CLUSTER_PORT_INCR,
        server.cfd,&server.cfd_count) == C_ERR)
    {
        exit(1);
    } else {
        int j;

        for (j = 0; j < server.cfd_count; j++) {
            if (aeCreateFileEvent(server.el, server.cfd[j], AE_READABLE,
                clusterAcceptHandler, NULL) == AE_ERR)
                    serverPanic("Unrecoverable error creating Redis Cluster "
                                "file event.");
        }
    }

    /* The slots -> keys map is a radix tree. Initialize it here. */
    server.cluster->slots_to_keys = raxNew();
    memset(server.cluster->slots_keys_count,0,
           sizeof(server.cluster->slots_keys_count));

    /* Set myself->port / cport to my listening ports, we'll just need to
     * discover the IP address via MEET messages. */
    myself->port = port;
    myself->cport = port+CLUSTER_PORT_INCR;
    if (server.cluster_announce_port)
        myself->port = server.cluster_announce_port;
    if (server.cluster_announce_bus_port)
        myself->cport = server.cluster_announce_bus_port;

    server.cluster->mf_end = 0;
    resetManualFailover();
    clusterUpdateMyselfFlags();
}
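For reference, CLUSTER_PORT_INCR is defined in cluster.h, which is why the warning above speaks of "10,000 port numbers higher": a node serving clients on 6379 exchanges cluster bus messages on 16379.

#define CLUSTER_PORT_INCR 10000 /* Cluster bus port = client port + 10000 */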

3.2 CLUSTER MEET: inviting a node to join the cluster

  1. Establishing a Redis cluster requires one node to invite the others to join, which relies on the CLUSTER MEET ip port command. The command is handled by cluster.c#clusterCommand(). This function is a common entry point that also handles the other cluster subcommands; the parts irrelevant to this article are omitted. Its processing is as follows:

Parse and validate the arguments of the MEET subcommand; if validation passes, call cluster.c#clusterStartHandshake() to invite the specified node into the cluster

void clusterCommand(client *c) {
 if (server.cluster_enabled == 0) {
     addReplyError(c,"This instance has cluster support disabled");
     return;
 }

 ......
  
  else if (!strcasecmp(c->argv[1]->ptr,"meet") && (c->argc == 4 || c->argc == 5)) {
     /* CLUSTER MEET <ip> <port> [cport] */
     long long port, cport;

     if (getLongLongFromObject(c->argv[3], &port) != C_OK) {
         addReplyErrorFormat(c,"Invalid TCP base port specified: %s",
                             (char*)c->argv[3]->ptr);
         return;
     }

     if (c->argc == 5) {
         if (getLongLongFromObject(c->argv[4], &cport) != C_OK) {
             addReplyErrorFormat(c,"Invalid TCP bus port specified: %s",
                                 (char*)c->argv[4]->ptr);
             return;
         }
     } else {
         cport = port + CLUSTER_PORT_INCR;
     }

     if (clusterStartHandshake(c->argv[2]->ptr,port,cport) == 0 &&
         errno == EINVAL)
     {
         addReplyErrorFormat(c,"Invalid node address specified: %s:%s",
                         (char*)c->argv[2]->ptr, (char*)c->argv[3]->ptr);
     } else {
         addReply(c,shared.ok);
     }
 } 
 ......
}
  2. The processing in cluster.c#clusterStartHandshake() is fairly concise; the important steps are:

Using the arguments passed in by the CLUSTER MEET command, call createClusterNode() to create the clusterNode representing the remote cluster node and add it to the server.cluster->nodes dictionary. Note that no connection is initiated at this point: the node's flags are set to CLUSTER_NODE_HANDSHAKE and CLUSTER_NODE_MEET and node->link is left NULL, and the connection is only opened when the periodic cron task fires (see the sketch after the function)

int clusterStartHandshake(char *ip, int port, int cport) {
 clusterNode *n;
 char norm_ip[NET_IP_STR_LEN];
 struct sockaddr_storage sa;

 /* IP sanity check */
 if (inet_pton(AF_INET,ip,
         &(((struct sockaddr_in *)&sa)->sin_addr)))
 {
     sa.ss_family = AF_INET;
 } else if (inet_pton(AF_INET6,ip,
         &(((struct sockaddr_in6 *)&sa)->sin6_addr)))
 {
     sa.ss_family = AF_INET6;
 } else {
     errno = EINVAL;
     return 0;
 }

 /* Port sanity check */
 if (port <= 0 || port > 65535 || cport <= 0 || cport > 65535) {
     errno = EINVAL;
     return 0;
 }

 /* Set norm_ip as the normalized string representation of the node
  * IP address. */
 memset(norm_ip,0,NET_IP_STR_LEN);
 if (sa.ss_family == AF_INET)
     inet_ntop(AF_INET,
         (void*)&(((struct sockaddr_in *)&sa)->sin_addr),
         norm_ip,NET_IP_STR_LEN);
 else
     inet_ntop(AF_INET6,
         (void*)&(((struct sockaddr_in6 *)&sa)->sin6_addr),
         norm_ip,NET_IP_STR_LEN);

 if (clusterHandshakeInProgress(norm_ip,port,cport)) {
     errno = EAGAIN;
     return 0;
 }

 /* Add the node with a random address (NULL as first argument to
  * createClusterNode()). Everything will be fixed during the
  * handshake. */
 n = createClusterNode(NULL,CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_MEET);
 memcpy(n->ip,norm_ip,sizeof(n->ip));
 n->port = port;
 n->cport = cport;
 clusterAddNode(n);
 return 1;
}
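For context, a sketch of the parts of createClusterNode() that matter here, abbreviated from cluster.c: with a NULL name a random placeholder name is generated, and link stays NULL so that the next clusterCron() run opens the connection.

clusterNode *createClusterNode(char *nodename, int flags) {
    clusterNode *node = zmalloc(sizeof(*node));

    if (nodename)
        memcpy(node->name, nodename, CLUSTER_NAMELEN);
    else
        getRandomHexChars(node->name, CLUSTER_NAMELEN); /* placeholder name,
                               replaced once the real identity is learned */
    node->ctime = mstime();
    node->flags = flags;   /* here: CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_MEET */
    node->link = NULL;     /* clusterCron() will initiate the connection */
    /* ... the remaining fields are initialized to zero/empty values ... */
    return node;
}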
  3. When the Redis time event fires, cluster.c#clusterCron() is invoked (by default every 100 ms, driven by serverCron()). The function is quite long; with the parts irrelevant to this section omitted, the main steps are:
  1. For each node in the server.cluster->nodes dictionary whose link field is NULL, call createClusterLink() to create a clusterLink struct for it
  2. Call connection.h#connConnect() to connect to the node, registering clusterLinkConnectHandler() as the handler for the AE_WRITABLE writable event on the connection
void clusterCron(void) {
 dictIterator *di;
 dictEntry *de;
 int update_state = 0;
 int orphaned_masters; /* How many masters there are without ok slaves. */
 int max_slaves; /* Max number of ok slaves for a single master. */
 int this_slaves; /* Number of ok slaves for our master (if we are slave). */
 mstime_t min_pong = 0, now = mstime();
 clusterNode *min_pong_node = NULL;
 static unsigned long long iteration = 0;
 mstime_t handshake_timeout;

 iteration++; /* Number of times this function was called so far. */

 ......

 /* Check if we have disconnected nodes and re-establish the connection.
  * Also update a few stats while we are here, that can be used to make
  * better decisions in other part of the code. */
 di = dictGetSafeIterator(server.cluster->nodes);
 server.cluster->stats_pfail_nodes = 0;
 while((de = dictNext(di)) != NULL) {
     clusterNode *node = dictGetVal(de);

     /* Not interested in reconnecting the link with myself or nodes
      * for which we have no address. */
     if (node->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_NOADDR)) continue;

     if (node->flags & CLUSTER_NODE_PFAIL)
         server.cluster->stats_pfail_nodes++;

     /* A Node in HANDSHAKE state has a limited lifespan equal to the
      * configured node timeout. */
     if (nodeInHandshake(node) && now - node->ctime > handshake_timeout) {
         clusterDelNode(node);
         continue;
     }

     if (node->link == NULL) {
         clusterLink *link = createClusterLink(node);
         link->conn = server.tls_cluster ? connCreateTLS() : connCreateSocket();
         connSetPrivateData(link->conn, link);
         if (connConnect(link->conn, node->ip, node->cport, NET_FIRST_BIND_ADDR,
                     clusterLinkConnectHandler) == -1) {
             /* We got a synchronous error from connect before
              * clusterSendPing() had a chance to be called.
              * If node->ping_sent is zero, failure detection can't work,
              * so we claim we actually sent a ping now (that will
              * be really sent as soon as the link is obtained). */
             if (node->ping_sent == 0) node->ping_sent = mstime();
             serverLog(LL_DEBUG, "Unable to connect to "
                 "Cluster Node [%s]:%d -> %s", node->ip,
                 node->cport, server.neterr);

             freeClusterLink(link);
             continue;
         }
         node->link = link;
     }
 }
 ......

3.3 Processing on the invited node

  1. When the invited node receives the connection request, the AE_READABLE event fires and invokes cluster.c#clusterAcceptHandler(), which was registered in section 3.1 Cluster node initialization. Its main job is to call connection.h#connAccept() to accept the connection, passing clusterConnAcceptHandler() as the handler to run once the connection is established
void clusterAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
 int cport, cfd;
 int max = MAX_CLUSTER_ACCEPTS_PER_CALL;
 char cip[NET_IP_STR_LEN];
 UNUSED(el);
 UNUSED(mask);
 UNUSED(privdata);

 /* If the server is starting up, don't accept cluster connections:
  * UPDATE messages may interact with the database content. */
 if (server.masterhost == NULL && server.loading) return;

 while(max--) {
     cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);
     if (cfd == ANET_ERR) {
         if (errno != EWOULDBLOCK)
             serverLog(LL_VERBOSE,
                 "Error accepting cluster node: %s", server.neterr);
         return;
     }

     connection *conn = server.tls_cluster ? connCreateAcceptedTLS(cfd,1) : connCreateAcceptedSocket(cfd);
     connNonBlock(conn);
     connEnableTcpNoDelay(conn);

     /* Use non-blocking I/O for cluster messages. */
     serverLog(LL_VERBOSE,"Accepting cluster node connection from %s:%d", cip, cport);

     /* Accept the connection now.  connAccept() may call our handler directly
      * or schedule it for later depending on connection implementation.
      */
     if (connAccept(conn, clusterConnAcceptHandler) == C_ERR) {
         if (connGetState(conn) == CONN_STATE_ERROR)
             serverLog(LL_VERBOSE,
                     "Error accepting cluster node connection: %s",
                     connGetLastError(conn));
         connClose(conn);
         return;
     }
 }
}
  2. The processing in cluster.c#clusterConnAcceptHandler() is very straightforward:
  1. Call createClusterLink() to create a clusterLink struct for the requesting peer
  2. Call connSetReadHandler() to register clusterReadHandler() as the read handler on the connection, listening for AE_READABLE events
static void clusterConnAcceptHandler(connection *conn) {
 clusterLink *link;

 if (connGetState(conn) != CONN_STATE_CONNECTED) {
     serverLog(LL_VERBOSE,
             "Error accepting cluster node connection: %s", connGetLastError(conn));
     connClose(conn);
     return;
 }

 /* Create a link object we use to handle the connection.
  * It gets passed to the readable handler when data is available.
  * Initially the link->node pointer is set to NULL as we don't know
  * which node it is, but the right node is referenced once we know the
  * node identity. */
 link = createClusterLink(NULL);
 link->conn = conn;
 connSetPrivateData(conn, link);

 /* Register read handler */
 connSetReadHandler(conn, clusterReadHandler);
}

3.4 The inviting node sends a PING message once the connection is up

  1. Once the connection is established, the AE_WRITABLE writable event fires on the inviting node, invoking the handler cluster.c#clusterLinkConnectHandler(), which does two things:
  1. Call connSetReadHandler() to register clusterReadHandler() as the read handler on the connection
  2. Call clusterSendPing() to send a MEET message to the node being invited into the cluster
void clusterLinkConnectHandler(connection *conn) {
 clusterLink *link = connGetPrivateData(conn);
 clusterNode *node = link->node;

 /* Check if connection succeeded */
 if (connGetState(conn) != CONN_STATE_CONNECTED) {
     serverLog(LL_VERBOSE, "Connection with Node %.40s at %s:%d failed: %s",
             node->name, node->ip, node->cport,
             connGetLastError(conn));
     freeClusterLink(link);
     return;
 }

 /* Register a read handler from now on */
 connSetReadHandler(conn, clusterReadHandler);

 /* Queue a PING in the new connection ASAP: this is crucial
  * to avoid false positives in failure detection.
  *
  * If the node is flagged as MEET, we send a MEET message instead
  * of a PING one, to force the receiver to add us in its node
  * table. */
 mstime_t old_ping_sent = node->ping_sent;
 clusterSendPing(link, node->flags & CLUSTER_NODE_MEET ?
         CLUSTERMSG_TYPE_MEET : CLUSTERMSG_TYPE_PING);
 if (old_ping_sent) {
     /* If there was an active ping before the link was
      * disconnected, we want to restore the ping time, otherwise
      * replaced by the clusterSendPing() call. */
     node->ping_sent = old_ping_sent;
 }
 /* We can clear the flag after the first packet is sent.
  * If we'll never receive a PONG, we'll never send new packets
  * to this node. Instead after the PONG is received and we
  * are no longer in meet/handshake status, we want to send
  * normal PING packets. */
 node->flags &= ~CLUSTER_NODE_MEET;

 serverLog(LL_DEBUG,"Connecting with Node %.40s at %s:%d",
         node->name, node->ip, node->cport);
}
  2. cluster.c#clusterSendPing() is fairly long; its main steps are:
  1. First determine how many gossip entries about other nodes the message should carry: at least 3 by default, or one tenth of the cluster size if that is larger, capped at freshnodes, the actual number of other eligible nodes. With 50 known nodes, for instance, wanted = floor(50/10) = 5
  2. A while loop randomly picks nodes from this node's cluster node dictionary and calls clusterSetGossipEntry() to wrap each as a clusterMsgDataGossip struct appended to the message (a reconstruction of this helper follows the function). Any nodes judged to be in PFAIL state are appended as well
  3. Finally, call clusterSendMessage() to send the data to the invited node
void clusterSendPing(clusterLink *link, int type) {
 unsigned char *buf;
 clusterMsg *hdr;
 int gossipcount = 0; /* Number of gossip sections added so far. */
 int wanted; /* Number of gossip sections we want to append if possible. */
 int totlen; /* Total packet length. */
 /* freshnodes is the max number of nodes we can hope to append at all:
  * nodes available minus two (ourself and the node we are sending the
  * message to). However practically there may be less valid nodes since
  * nodes in handshake state, disconnected, are not considered. */
 int freshnodes = dictSize(server.cluster->nodes)-2;

 /* How many gossip sections we want to add? 1/10 of the number of nodes
  * and anyway at least 3. Why 1/10?
  *
  * If we have N masters, with N/10 entries, and we consider that in
  * node_timeout we exchange with each other node at least 4 packets
  * (we ping in the worst case in node_timeout/2 time, and we also
  * receive two pings from the host), we have a total of 8 packets
  * in the node_timeout*2 failure reports validity time. So we have
  * that, for a single PFAIL node, we can expect to receive the following
  * number of failure reports (in the specified window of time):
  *
  * PROB * GOSSIP_ENTRIES_PER_PACKET * TOTAL_PACKETS:
  *
  * PROB = probability of being featured in a single gossip entry,
  *        which is 1 / NUM_OF_NODES.
  * ENTRIES = 10.
  * TOTAL_PACKETS = 2 * 4 * NUM_OF_MASTERS.
  *
  * If we assume we have just masters (so num of nodes and num of masters
  * is the same), with 1/10 we always get over the majority, and specifically
  * 80% of the number of nodes, to account for many masters failing at the
  * same time.
  *
  * Since we have non-voting slaves that lower the probability of an entry
  * to feature our node, we set the number of entries per packet as
  * 10% of the total nodes we have. */
 wanted = floor(dictSize(server.cluster->nodes)/10);
 if (wanted < 3) wanted = 3;
 if (wanted > freshnodes) wanted = freshnodes;

 /* Include all the nodes in PFAIL state, so that failure reports are
  * faster to propagate to go from PFAIL to FAIL state. */
 int pfail_wanted = server.cluster->stats_pfail_nodes;

 /* Compute the maximum totlen to allocate our buffer. We'll fix the totlen
  * later according to the number of gossip sections we really were able
  * to put inside the packet. */
 totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
 totlen += (sizeof(clusterMsgDataGossip)*(wanted+pfail_wanted));
 /* Note: clusterBuildMessageHdr() expects the buffer to be always at least
  * sizeof(clusterMsg) or more. */
 if (totlen < (int)sizeof(clusterMsg)) totlen = sizeof(clusterMsg);
 buf = zcalloc(totlen);
 hdr = (clusterMsg*) buf;

 /* Populate the header. */
 if (link->node && type == CLUSTERMSG_TYPE_PING)
     link->node->ping_sent = mstime();
 clusterBuildMessageHdr(hdr,type);

 /* Populate the gossip fields */
 int maxiterations = wanted*3;
 while(freshnodes > 0 && gossipcount < wanted && maxiterations--) {
     dictEntry *de = dictGetRandomKey(server.cluster->nodes);
     clusterNode *this = dictGetVal(de);

     /* Don't include this node: the whole packet header is about us
      * already, so we just gossip about other nodes. */
     if (this == myself) continue;

     /* PFAIL nodes will be added later. */
     if (this->flags & CLUSTER_NODE_PFAIL) continue;

     /* In the gossip section don't include:
      * 1) Nodes in HANDSHAKE state.
      * 2) Nodes with the NOADDR flag set.
      * 3) Disconnected nodes if they don't have configured slots.
      */
     if (this->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) ||
         (this->link == NULL && this->numslots == 0))
     {
         freshnodes--; /* Technically not correct, but saves CPU. */
         continue;
     }

     /* Do not add a node we already have. */
     if (clusterNodeIsInGossipSection(hdr,gossipcount,this)) continue;

     /* Add it */
     clusterSetGossipEntry(hdr,gossipcount,this);
     freshnodes--;
     gossipcount++;
 }

 /* If there are PFAIL nodes, add them at the end. */
 if (pfail_wanted) {
     dictIterator *di;
     dictEntry *de;

     di = dictGetSafeIterator(server.cluster->nodes);
     while((de = dictNext(di)) != NULL && pfail_wanted > 0) {
         clusterNode *node = dictGetVal(de);
         if (node->flags & CLUSTER_NODE_HANDSHAKE) continue;
         if (node->flags & CLUSTER_NODE_NOADDR) continue;
         if (!(node->flags & CLUSTER_NODE_PFAIL)) continue;
         clusterSetGossipEntry(hdr,gossipcount,node);
         freshnodes--;
         gossipcount++;
         /* We take the count of the slots we allocated, since the
          * PFAIL stats may not match perfectly with the current number
          * of PFAIL nodes. */
         pfail_wanted--;
     }
     dictReleaseIterator(di);
 }

 /* Ready to send... fix the totlen field and queue the message in the
  * output buffer. */
 totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
 totlen += (sizeof(clusterMsgDataGossip)*gossipcount);
 hdr->count = htons(gossipcount);
 hdr->totlen = htonl(totlen);
 clusterSendMessage(link,buf,totlen);
 zfree(buf);
}
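The clusterSetGossipEntry() helper used above fills one gossip slot in the packet. Reconstructed (abbreviated) from cluster.c; note the conversions: times are sent in seconds, and multi-byte integers in network byte order:

void clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) {
    clusterMsgDataGossip *gossip = &(hdr->data.ping.gossip[i]);
    memcpy(gossip->nodename, n->name, CLUSTER_NAMELEN);
    gossip->ping_sent = htonl(n->ping_sent / 1000);         /* ms -> seconds */
    gossip->pong_received = htonl(n->pong_received / 1000); /* ms -> seconds */
    memcpy(gossip->ip, n->ip, sizeof(n->ip));
    gossip->port = htons(n->port);
    gossip->cport = htons(n->cport);
    gossip->flags = htons(n->flags);
    gossip->notused1 = 0;
}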
  3. cluster.c#clusterSendMessage() registers clusterWriteHandler() as the write handler on the connection and appends the outgoing data to the send buffer, to be flushed to the peer when the writable event fires
void clusterSendMessage(clusterLink *link, unsigned char *msg, size_t msglen) {
 if (sdslen(link->sndbuf) == 0 && msglen != 0)
     connSetWriteHandlerWithBarrier(link->conn, clusterWriteHandler, 1);

 link->sndbuf = sdscatlen(link->sndbuf, msg, msglen);

 /* Populate sent messages stats. */
 clusterMsg *hdr = (clusterMsg*) msg;
 uint16_t type = ntohs(hdr->type);
 if (type < CLUSTERMSG_TYPE_COUNT)
     server.cluster->stats_bus_messages_sent[type]++;
}
  4. cluster.c#clusterWriteHandler() is shown below; it is plain write logic and needs no further commentary
void clusterWriteHandler(connection *conn) {
 clusterLink *link = connGetPrivateData(conn);
 ssize_t nwritten;

 nwritten = connWrite(conn, link->sndbuf, sdslen(link->sndbuf));
 if (nwritten <= 0) {
     serverLog(LL_DEBUG,"I/O error writing to node link: %s",
         (nwritten == -1) ? connGetLastError(conn) : "short write");
     handleLinkIOError(link);
     return;
 }
 sdsrange(link->sndbuf,nwritten,-1);
 if (sdslen(link->sndbuf) == 0)
     connSetWriteHandler(link->conn, NULL);
}

3.5 The invited node processes the PING message once the connection is up

  1. At this point in the flow, the AE_READABLE event fires on the invited node and clusterReadHandler() is called. Its main processing is:
  1. Keep reading the peer's data in a while loop until a complete message has been received; the first 8 bytes hold the 4-byte "RCmb" signature plus the 4-byte total length, which is why the loop reads those 8 bytes first
  2. Call clusterProcessPacket() to process the data the peer sent
void clusterReadHandler(connection *conn) {
 clusterMsg buf[1];
 ssize_t nread;
 clusterMsg *hdr;
 clusterLink *link = connGetPrivateData(conn);
 unsigned int readlen, rcvbuflen;

 while(1) { /* Read as long as there is data to read. */
     rcvbuflen = sdslen(link->rcvbuf);
     if (rcvbuflen < 8) {
         /* First, obtain the first 8 bytes to get the full message
          * length. */
         readlen = 8 - rcvbuflen;
     } else {
         /* Finally read the full message. */
         hdr = (clusterMsg*) link->rcvbuf;
         if (rcvbuflen == 8) {
             /* Perform some sanity check on the message signature
              * and length. */
             if (memcmp(hdr->sig,"RCmb",4) != 0 ||
                 ntohl(hdr->totlen) < CLUSTERMSG_MIN_LEN)
             {
                 serverLog(LL_WARNING,
                     "Bad message length or signature received "
                     "from Cluster bus.");
                 handleLinkIOError(link);
                 return;
             }
         }
         readlen = ntohl(hdr->totlen) - rcvbuflen;
         if (readlen > sizeof(buf)) readlen = sizeof(buf);
     }

     nread = connRead(conn,buf,readlen);
     if (nread == -1 && (connGetState(conn) == CONN_STATE_CONNECTED)) return; /* No more data ready. */

     if (nread <= 0) {
         /* I/O error... */
         serverLog(LL_DEBUG,"I/O error reading from node link: %s",
             (nread == 0) ? "connection closed" : connGetLastError(conn));
         handleLinkIOError(link);
         return;
     } else {
         /* Read data and recast the pointer to the new buffer. */
         link->rcvbuf = sdscatlen(link->rcvbuf,buf,nread);
         hdr = (clusterMsg*) link->rcvbuf;
         rcvbuflen += nread;
     }

     /* Total length obtained? Process this packet. */
     if (rcvbuflen >= 8 && rcvbuflen == ntohl(hdr->totlen)) {
         if (clusterProcessPacket(link)) {
             sdsfree(link->rcvbuf);
             link->rcvbuf = sdsempty();
         } else {
             return; /* Link no longer valid. */
         }
     }
 }
}
  2. cluster.c#clusterProcessPacket() handles all messages on the cluster bus and is rather involved; the parts irrelevant to this article are omitted. The processing is:
  1. For a MEET message, the inviting node is not yet in the invited node's cluster node dictionary, so a node is created for it with createClusterNode() and added to the dictionary via clusterAddNode()
  2. Call clusterProcessGossipSection() to process the information about other cluster nodes carried in the message
int clusterProcessPacket(clusterLink *link) {
 clusterMsg *hdr = (clusterMsg*) link->rcvbuf;
 uint32_t totlen = ntohl(hdr->totlen);
 uint16_t type = ntohs(hdr->type);
 mstime_t now = mstime();

  ......

 if (type == CLUSTERMSG_TYPE_PING || type == CLUSTERMSG_TYPE_PONG ||
     type == CLUSTERMSG_TYPE_MEET)
 {
     uint16_t count = ntohs(hdr->count);
     uint32_t explen; /* expected length of this packet */

     explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
     explen += (sizeof(clusterMsgDataGossip)*count);
     if (totlen != explen) return 1;
 } 

 .....
 /* Check if the sender is a known node. Note that for incoming connections
  * we don't store link->node information, but resolve the node by the
  * ID in the header each time in the current implementation. */
 sender = clusterLookupNode(hdr->sender);

 /* Update the last time we saw any data from this node. We
  * use this in order to avoid detecting a timeout from a node that
  * is just sending a lot of data in the cluster bus, for instance
  * because of Pub/Sub. */
 if (sender) sender->data_received = now;

 ......

 /* Initial processing of PING and MEET requests replying with a PONG. */
 if (type == CLUSTERMSG_TYPE_PING || type == CLUSTERMSG_TYPE_MEET) {
     serverLog(LL_DEBUG,"Ping packet received: %p", (void*)link->node);

     /* We use incoming MEET messages in order to set the address
      * for 'myself', since only other cluster nodes will send us
      * MEET messages on handshakes, when the cluster joins, or
      * later if we changed address, and those nodes will use our
      * official address to connect to us. So by obtaining this address
      * from the socket is a simple way to discover / update our own
      * address in the cluster without it being hardcoded in the config.
      *
      * However if we don't have an address at all, we update the address
      * even with a normal PING packet. If it's wrong it will be fixed
      * by MEET later. */
     if ((type == CLUSTERMSG_TYPE_MEET || myself->ip[0] == '\0') &&
         server.cluster_announce_ip == NULL)
     {
         char ip[NET_IP_STR_LEN];

         if (connSockName(link->conn,ip,sizeof(ip),NULL) != -1 &&
             strcmp(ip,myself->ip))
         {
             memcpy(myself->ip,ip,NET_IP_STR_LEN);
             serverLog(LL_WARNING,"IP address for this node updated to %s",
                 myself->ip);
             clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);
         }
     }

     /* Add this node if it is new for us and the msg type is MEET.
      * In this stage we don't try to add the node with the right
      * flags, slaveof pointer, and so forth, as this details will be
      * resolved when we'll receive PONGs from the node. */
     if (!sender && type == CLUSTERMSG_TYPE_MEET) {
         clusterNode *node;

         node = createClusterNode(NULL,CLUSTER_NODE_HANDSHAKE);
         nodeIp2String(node->ip,link,hdr->myip);
         node->port = ntohs(hdr->port);
         node->cport = ntohs(hdr->cport);
         clusterAddNode(node);
         clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);
     }

     /* If this is a MEET packet from an unknown node, we still process
      * the gossip section here since we have to trust the sender because
      * of the message type. */
     if (!sender && type == CLUSTERMSG_TYPE_MEET)
         clusterProcessGossipSection(hdr,link);

     /* Anyway reply with a PONG */
     clusterSendPing(link,CLUSTERMSG_TYPE_PONG);
 }

 ......
 
 return 1;
}
  3. The processing in cluster.c#clusterProcessGossipSection() is fairly concise; its steps are:
  1. First parse each clusterMsgDataGossip entry carried in the message and check whether that node is already in the local cluster node dictionary. If it is, update it according to the information carried
  2. If it is not, restore the node information in the clusterMsgDataGossip entry into a clusterNode via createClusterNode() and add it to the cluster node dictionary with clusterAddNode(). Note that a node added this way has node->link == NULL, so the next cron run will initiate a connection to it, looping back to the clusterCron() reconnect step described in section 3.2 CLUSTER MEET
void clusterProcessGossipSection(clusterMsg *hdr, clusterLink *link) {
 uint16_t count = ntohs(hdr->count);
 clusterMsgDataGossip *g = (clusterMsgDataGossip*) hdr->data.ping.gossip;
 clusterNode *sender = link->node ? link->node : clusterLookupNode(hdr->sender);

 while(count--) {
     uint16_t flags = ntohs(g->flags);
     clusterNode *node;
     sds ci;

     if (server.verbosity == LL_DEBUG) {
         ci = representClusterNodeFlags(sdsempty(), flags);
         serverLog(LL_DEBUG,"GOSSIP %.40s %s:%d@%d %s",
             g->nodename,
             g->ip,
             ntohs(g->port),
             ntohs(g->cport),
             ci);
         sdsfree(ci);
     }

     /* Update our state accordingly to the gossip sections */
     node = clusterLookupNode(g->nodename);
     if (node) {
         /* We already know this node.
            Handle failure reports, only when the sender is a master. */
         if (sender && nodeIsMaster(sender) && node != myself) {
             if (flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) {
                 if (clusterNodeAddFailureReport(node,sender)) {
                     serverLog(LL_VERBOSE,
                         "Node %.40s reported node %.40s as not reachable.",
                         sender->name, node->name);
                 }
                 markNodeAsFailingIfNeeded(node);
             } else {
                 if (clusterNodeDelFailureReport(node,sender)) {
                     serverLog(LL_VERBOSE,
                         "Node %.40s reported node %.40s is back online.",
                         sender->name, node->name);
                 }
             }
         }

         /* If from our POV the node is up (no failure flags are set),
          * we have no pending ping for the node, nor we have failure
          * reports for this node, update the last pong time with the
          * one we see from the other nodes. */
         if (!(flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) &&
             node->ping_sent == 0 &&
             clusterNodeFailureReportsCount(node) == 0)
         {
             mstime_t pongtime = ntohl(g->pong_received);
             pongtime *= 1000; /* Convert back to milliseconds. */

             /* Replace the pong time with the received one only if
              * it's greater than our view but is not in the future
              * (with 500 milliseconds tolerance) from the POV of our
              * clock. */
             if (pongtime <= (server.mstime+500) &&
                 pongtime > node->pong_received)
             {
                 node->pong_received = pongtime;
             }
         }

         /* If we already know this node, but it is not reachable, and
          * we see a different address in the gossip section of a node that
          * can talk with this other node, update the address, disconnect
          * the old link if any, so that we'll attempt to connect with the
          * new address. */
         if (node->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL) &&
             !(flags & CLUSTER_NODE_NOADDR) &&
             !(flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) &&
             (strcasecmp(node->ip,g->ip) ||
              node->port != ntohs(g->port) ||
              node->cport != ntohs(g->cport)))
         {
             if (node->link) freeClusterLink(node->link);
             memcpy(node->ip,g->ip,NET_IP_STR_LEN);
             node->port = ntohs(g->port);
             node->cport = ntohs(g->cport);
             node->flags &= ~CLUSTER_NODE_NOADDR;
         }
     } else {
         /* If it's not in NOADDR state and we don't have it, we
          * add it to our trusted dict with exact nodeid and flag.
          * Note that we cannot simply start a handshake against
          * this IP/PORT pairs, since IP/PORT can be reused already,
          * otherwise we risk joining another cluster.
          *
          * Note that we require that the sender of this gossip message
          * is a well known node in our cluster, otherwise we risk
          * joining another cluster. */
         if (sender &&
             !(flags & CLUSTER_NODE_NOADDR) &&
             !clusterBlacklistExists(g->nodename))
         {
             clusterNode *node;
             node = createClusterNode(g->nodename, flags);
             memcpy(node->ip,g->ip,NET_IP_STR_LEN);
             node->port = ntohs(g->port);
             node->cport = ntohs(g->cport);
             clusterAddNode(node);
         }
     }

     /* Next node */
     g++;
 }
}