Problem:

While setting up a Redis cluster, the cluster create step (redis-cli --cluster create ...) blocked, printing "Waiting for the cluster to join" and then waiting endlessly...


In most cases this happens because the cluster bus ports have not been opened!

Cluster bus
Every node in a Redis cluster needs two open TCP ports. The first is the normal client-serving port, e.g. 6379. The second is derived by adding 10000 to that port: for a node listening on 6379, the extra port to open is 6379 + 10000 = 16379. This second port is the cluster bus port, a node-to-node communication channel that uses a binary protocol. The cluster bus is used for failure detection, configuration updates, failover authorization, and so on.
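
A quick way to confirm the diagnosis is to probe both ports from another host (host and ports taken from the setup in this article; adjust to your own topology). The client port should connect, while a firewalled cluster bus port will time out:

timeout 3 bash -c '</dev/tcp/192.168.0.80/7001'  && echo "7001 open"
timeout 3 bash -c '</dev/tcp/192.168.0.80/17001' && echo "17001 open"   # times out while the bus port is blocked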

Solution:
Knowing the cause, the fix is straightforward: for each node, open the cluster bus port that corresponds to its client port (client port + 10000, e.g. 6379 + 10000 = 16379). Cluster creation only succeeds once both the client port and the cluster bus port are open on every node. For the six nodes used here (7001 through 7006), that means also opening 17001 through 17006; the firewall rules below do exactly that.

Firewall rules
CentOS 7.x (firewalld)

# Client ports
firewall-cmd --zone=public --add-port=7001/tcp --permanent
firewall-cmd --zone=public --add-port=7002/tcp --permanent
firewall-cmd --zone=public --add-port=7003/tcp --permanent
firewall-cmd --zone=public --add-port=7004/tcp --permanent
firewall-cmd --zone=public --add-port=7005/tcp --permanent
firewall-cmd --zone=public --add-port=7006/tcp --permanent
# Cluster bus ports (client port + 10000)
firewall-cmd --zone=public --add-port=17001/tcp --permanent
firewall-cmd --zone=public --add-port=17002/tcp --permanent
firewall-cmd --zone=public --add-port=17003/tcp --permanent
firewall-cmd --zone=public --add-port=17004/tcp --permanent
firewall-cmd --zone=public --add-port=17005/tcp --permanent
firewall-cmd --zone=public --add-port=17006/tcp --permanent
# Apply the new rules
firewall-cmd --reload
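
To confirm the rules are active after the reload, list the open ports in the zone; the output should include 7001-7006/tcp and 17001-17006/tcp:

firewall-cmd --zone=public --list-ports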

After opening the ports and re-running the create command, the cluster forms successfully:

[root@VM-24-10-centos redis-cluster]# redis-cli -a pwd@2022gblfy --cluster create --cluster-replicas 1 192.168.0.80:7001 192.168.0.80:7004 192.168.0.80:7003 192.168.0.80:7006 192.168.0.80:7005 192.168.0.80:7002
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.0.80:7005 to 192.168.0.80:7001
Adding replica 192.168.0.80:7002 to 192.168.0.80:7004
Adding replica 192.168.0.80:7006 to 192.168.0.80:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 64ab97133edd09d0de90d7646f158cc0c27c6dbc 192.168.0.80:7001
slots:[0-5460] (5461 slots) master
M: a3fbce0abdd2555a966d778aece4827a70e133e9 192.168.0.80:7004
slots:[5461-10922] (5462 slots) master
M: 13f76c0f3b16f251ffc4802c3af89bb3dccfa664 192.168.0.80:7003
slots:[10923-16383] (5461 slots) master
S: d3e0ed7ec783a368602130af5ef405581b20246f 192.168.0.80:7006
replicates 13f76c0f3b16f251ffc4802c3af89bb3dccfa664
S: 3ab99d5dc392b2011b34a52478c4a8ef0152e187 192.168.0.80:7005
replicates 64ab97133edd09d0de90d7646f158cc0c27c6dbc
S: 8b2f5b506b1bdbef9e9fed5844a0bfdc5f88dcc6 192.168.0.80:7002
replicates a3fbce0abdd2555a966d778aece4827a70e133e9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.0.80:7001)
M: 64ab97133edd09d0de90d7646f158cc0c27c6dbc 192.168.0.80:7001
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 8b2f5b506b1bdbef9e9fed5844a0bfdc5f88dcc6 192.168.0.80:7002
slots: (0 slots) slave
replicates a3fbce0abdd2555a966d778aece4827a70e133e9
M: a3fbce0abdd2555a966d778aece4827a70e133e9 192.168.0.80:7004
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 3ab99d5dc392b2011b34a52478c4a8ef0152e187 192.168.0.80:7005
slots: (0 slots) slave
replicates 64ab97133edd09d0de90d7646f158cc0c27c6dbc
M: 13f76c0f3b16f251ffc4802c3af89bb3dccfa664 192.168.0.80:7003
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: d3e0ed7ec783a368602130af5ef405581b20246f 192.168.0.80:7006
slots: (0 slots) slave
replicates 13f76c0f3b16f251ffc4802c3af89bb3dccfa664
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
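
With all 16384 slots covered, cluster health can be double-checked from any node with CLUSTER INFO (same password, host, and ports as above):

redis-cli -a pwd@2022gblfy -h 192.168.0.80 -p 7001 cluster info
# Key fields to look for:
# cluster_state:ok
# cluster_slots_assigned:16384
# cluster_known_nodes:6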