./redis-trib.rb create --replicas 1 192.168.112.33:8001 192.168.112.33:8002 192.168.112.33:8003 192.168.112.33:8004 192.168.112.33:8005 192.168.112.33:8006
Original dbsize:
redis@linux-eqnz:~/cluster6> redis-cli -c -h 192.168.112.33 -p 8001 -a Woread#2018 dbsize
(integer) 992

==== 1. Target cluster cleanup and preparation =========

First remove the cluster password, otherwise some redis-trib.rb functions will not work, because redis-trib.rb does not yet support password-protected operations.
The password can be set again later with the config set command.

112.33:
sed -i '/masterauth "Woread#2018"/d' 6000/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6000/redis.conf
sed -i '/masterauth "Woread#2018"/d' 6001/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6001/redis.conf
112.34:
sed -i '/masterauth "Woread#2018"/d' 6002/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6002/redis.conf
sed -i '/masterauth "Woread#2018"/d' 6003/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6003/redis.conf
112.36:
sed -i '/masterauth "Woread#2018"/d' 6004/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6004/redis.conf
sed -i '/masterauth "Woread#2018"/d' 6005/redis.conf
sed -i '/requirepass "Woread#2018"/d' 6005/redis.conf
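The per-file sed commands above can be wrapped in a loop. This is a sketch: for safety it builds throwaway copies of redis.conf under a temp directory; on a real host, drop the printf line and point the loop at that host's actual <port>/redis.conf files.

```shell
# Loop form of the per-instance auth cleanup above (sketch on throwaway copies).
tmp=$(mktemp -d)
for port in 6000 6001; do
  mkdir -p "$tmp/$port"
  # sample config, only so the sketch can run standalone
  printf 'port %s\nmasterauth "Woread#2018"\nrequirepass "Woread#2018"\n' "$port" > "$tmp/$port/redis.conf"
  # delete both auth lines in one pass
  sed -i '/^masterauth /d; /^requirepass /d' "$tmp/$port/redis.conf"
done
cat "$tmp/6000/redis.conf"
```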

Then start all the nodes, and migrate all slots onto one master node. (Be sure to assess the current cluster state and move every master's slots onto one chosen master. The safer approach is to delete or stop all replica nodes first, keeping only the masters, so that a replica cannot be elected master.)
The current master distribution is as follows:

M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 0 additional replica(s)
M: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots:5461-10922 (5462 slots) master
 0 additional replica(s)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:10923-16383 (5461 slots) master
 0 additional replica(s)

Here I chose to move all the slots to 192.168.112.36:6004, i.e. 1d204c88a14a76dc30abb05025135f7e850f2a5d:
./redis-trib.rb reshard --from bdd63e1f522d78eb1bb2574b2461a7302e14944a --to 1d204c88a14a76dc30abb05025135f7e850f2a5d --slots 5461 --yes 192.168.112.36:6004
./redis-trib.rb reshard --from be5b41880afac9c41b09e0d4e3be1ce1eb00959a --to 1d204c88a14a76dc30abb05025135f7e850f2a5d --slots 5462 --yes 192.168.112.36:6004

./redis-trib.rb check 192.168.112.33:6000
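A quick sanity check on the slot counts: the two reshard commands above move 5461 and 5462 slots onto the node that already holds 5461, so it ends up owning every hash slot.

```shell
# 5461 (from 6000) + 5462 (from 6003) + 5461 (already on 6004) = all 16384 slots
echo $((5461 + 5462 + 5461))   # prints 16384
```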
S: bdeb2bfafe92d8bda295a5162f750e4cf9bddc9b 192.168.112.36:6005
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots: (0 slots) master
 0 additional replica(s)
S: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
S: 4ca3ced3aa1af88a453fd56493e07d8c0b84659e 192.168.112.34:6002
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
M: 9429439339bd2c3262cf48469f6912532faa1e02 192.168.112.33:6001
 slots: (0 slots) master
 0 additional replica(s)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:0-16383 (16384 slots) master
 3 additional replica(s)

OK, all slots have been migrated to 192.168.112.36:6004; now the source cluster's dump.rdb file can be copied to the 192.168.112.36:6004 node.

First stop the cluster:

redis-cli -c -h 192.168.112.33 -p 6000 shutdown
redis-cli -c -h 192.168.112.33 -p 6001 shutdown
redis-cli -c -h 192.168.112.34 -p 6002 shutdown
redis-cli -c -h 192.168.112.34 -p 6003 shutdown
redis-cli -c -h 192.168.112.36 -p 6004 shutdown
redis-cli -c -h 192.168.112.36 -p 6005 shutdown

Copy the source cluster's dump.rdb to the 192.168.112.36:6004 node's directory: /home/redis/cluster6/dump/6004 (adjust to your actual layout)
Note: also change the appendonly parameter in redis.conf to no, because Redis picks its recovery file as follows:
- only RDB configured: on startup, only the dump file is loaded (appendonly no)
- only AOF configured: on restart, the AOF file is loaded
- both configured: on startup, only the AOF file is loaded (appendonly yes)

redis@PRD-RDS-112-36:~/cluster6> cat 6004/redis.conf |grep appendonly
appendonly yes

After the change:
sed -i 's/appendonly yes/appendonly no/g' 6004/redis.conf
redis@PRD-RDS-112-36:~/cluster6> cat 6004/redis.conf |grep appendonly
appendonly no

Then start all the nodes and check the data size:
redis@PRD-RDS-112-36:~/cluster6> ./dbsize.sh 
(integer) 0
(integer) 0
Could not connect to Redis at 192.168.112.34:6002: Connection refused
Could not connect to Redis at 192.168.112.34:6003: Connection refused
(integer) 843
Could not connect to Redis at 192.168.112.36:6005: Connection refused
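The dbsize.sh script itself is not shown in these notes; a plausible minimal version (an assumption, not the original script) just loops over every node and runs DBSIZE. This sketch only prints the commands; remove the echo to execute them against a live cluster.

```shell
# Hypothetical dbsize.sh: query DBSIZE on every node (node list from this document).
dbsize_all() {
  for node in "$@"; do
    host=${node%%:*}   # text before the first ':'
    port=${node##*:}   # text after the last ':'
    echo redis-cli -c -h "$host" -p "$port" dbsize
  done
}
dbsize_all 192.168.112.33:6000 192.168.112.33:6001 192.168.112.34:6002 \
           192.168.112.34:6003 192.168.112.36:6004 192.168.112.36:6005
```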

OK, the data has been imported successfully. We still need to change appendonly back to yes, but before that we must manually flush the dump.rdb data into appendonly.aof (with bgrewriteaof).

Before running bgrewriteaof, appendonly.aof is 0 bytes:

redis@PRD-RDS-112-36:~/cluster6> ls dump/6004/ -l
total 489864
-rw-r----- 1 redis redis 0 Jun 22 11:18 appendonly.aof
-rw-r----- 1 redis redis 501122129 Jun 22 16:07 dump.rdb

After bgrewriteaof, appendonly.aof is 583 MB, larger than the dump.rdb file:
redis-cli -c -h 192.168.112.36 -p 6004 bgrewriteaof
redis@PRD-RDS-112-36:~/cluster6> du -sh dump/6004/*
583M dump/6004/appendonly.aof
401M dump/6004/dump.rdb

Then shut this node down and change the appendonly parameter:

redis-cli -c -h 192.168.112.36 -p 6004 shutdown
sed -i 's/appendonly no/appendonly yes/g' 6004/redis.conf


Restart the node and check the data volume again.

Next, reverse the earlier steps to restore the original slot layout:

./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to bdd63e1f522d78eb1bb2574b2461a7302e14944a --slots 5461 --yes 192.168.112.36:6004
./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to 9429439339bd2c3262cf48469f6912532faa1e02 --slots 5462 --yes 192.168.112.36:6004
Because one key's value was too large, the migration hit an error:
[ERR] IOERR error or timeout reading to target instance
[WARNING] Node 192.168.112.36:6004 has slots in migrating state (6524).
[WARNING] Node 192.168.112.33:6001 has slots in importing state (6524).
[WARNING] The following slots are open: 6524

First repair the cluster:

redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb fix 192.168.112.36:6004
Then check the cluster again:
redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.36:6004
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.36:6004)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:6525-16383 (9859 slots) master
 0 additional replica(s)
M: 9429439339bd2c3262cf48469f6912532faa1e02 192.168.112.33:6001
 slots:5461-6524 (1064 slots) master
 0 additional replica(s)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Once the cluster is healthy again, reshard once more. 1064 slots have already been moved to 192.168.112.33:6001, so only 4398 more need to go. If a similar error occurs again, repeat the procedure (fix first, recompute how many slots remain, reshard again, and don't forget to check):

./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to be5b41880afac9c41b09e0d4e3be1ce1eb00959a --slots 4398 --yes 192.168.112.36:6004
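The remaining-slot count in the command above is simple arithmetic: the target share for this master is 5462 slots, and 1064 of them had already arrived before the error.

```shell
TARGET=5462    # slots this master should end up with
MOVED=1064     # slots already migrated when the error occurred
REMAINING=$((TARGET - MOVED))
echo "$REMAINING"   # prints 4398
```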
Finally, run dbsize and check whether the total key count is correct:
redis@PRD-RDS-112-36:~/cluster6> ./dbsize.sh 
(integer) 218
(integer) 208
Could not connect to Redis at 192.168.112.34:6002: Connection refused
Could not connect to Redis at 192.168.112.34:6003: Connection refused
(integer) 212

Total: 218 + 208 + 212 = 638, not one key missing. (After a while the count may drop by a few; don't worry, those are keys that had an expire set and have since expired.)

redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.33:6000
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.33:6000)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 0 additional replica(s)
S: bdeb2bfafe92d8bda295a5162f750e4cf9bddc9b 192.168.112.36:6005
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
S: 4ca3ced3aa1af88a453fd56493e07d8c0b84659e 192.168.112.34:6002
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
M: 9429439339bd2c3262cf48469f6912532faa1e02 192.168.112.33:6001
 slots:5461-10922 (5462 slots) master
 0 additional replica(s)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:10923-16383 (5461 slots) master
 3 additional replica(s)
S: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered

The check shows the cluster is healthy again, but all replicas now point at the 192.168.112.36:6004 master, so the replica assignments need to be adjusted:

Target pairing (master -> replica):
192.168.112.36:6004 -> 192.168.112.34:6002
192.168.112.33:6000 -> 192.168.112.36:6005
192.168.112.33:6001 -> 192.168.112.34:6003
redis-cli -c -h 192.168.112.36 -p 6005 CLUSTER REPLICATE bdd63e1f522d78eb1bb2574b2461a7302e14944a
redis-cli -c -h 192.168.112.34 -p 6003 CLUSTER REPLICATE 9429439339bd2c3262cf48469f6912532faa1e02

Check again:
redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.33:6000
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.33:6000)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
S: bdeb2bfafe92d8bda295a5162f750e4cf9bddc9b 192.168.112.36:6005
 slots: (0 slots) slave
 replicates bdd63e1f522d78eb1bb2574b2461a7302e14944a
S: 4ca3ced3aa1af88a453fd56493e07d8c0b84659e 192.168.112.34:6002
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
M: 9429439339bd2c3262cf48469f6912532faa1e02 192.168.112.33:6001
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
S: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots: (0 slots) slave
 replicates 9429439339bd2c3262cf48469f6912532faa1e02
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

OK, everything is normal now, but one problem remains: two masters have ended up on the 112.33 server. This does not affect the cluster's high availability, but from a load-balancing point of view server 34 carries nothing while 33 carries a double load (its own plus 34's).
The fix is simple: stop the 192.168.112.33:6001 node, wait for 192.168.112.34:6003 to be promoted to master, then start 192.168.112.33:6001 again.
The steps are as follows:

redis-cli -c -h 192.168.112.33 -p 6001 shutdown
redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.33:6000
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.33:6000)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
S: bdeb2bfafe92d8bda295a5162f750e4cf9bddc9b 192.168.112.36:6005
 slots: (0 slots) slave
 replicates bdd63e1f522d78eb1bb2574b2461a7302e14944a
S: 4ca3ced3aa1af88a453fd56493e07d8c0b84659e 192.168.112.34:6002
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
M: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots:5461-10922 (5462 slots) master
 0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

redis@linux-eqnz:~/cluster6> cd 6001
redis@linux-eqnz:~/cluster6/6001> pwd
/home/redis/cluster6/6001
redis@linux-eqnz:~/cluster6/6001> redis-server redis.conf

redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.33:6000
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.33:6000)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
S: bdeb2bfafe92d8bda295a5162f750e4cf9bddc9b 192.168.112.36:6005
 slots: (0 slots) slave
 replicates bdd63e1f522d78eb1bb2574b2461a7302e14944a
S: 4ca3ced3aa1af88a453fd56493e07d8c0b84659e 192.168.112.34:6002
 slots: (0 slots) slave
 replicates 1d204c88a14a76dc30abb05025135f7e850f2a5d
S: 9429439339bd2c3262cf48469f6912532faa1e02 192.168.112.33:6001
 slots: (0 slots) slave
 replicates be5b41880afac9c41b09e0d4e3be1ce1eb00959a
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
M: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The final step is setting the password. Run the following against every node, including the replicas:

redis-cli -c -h 192.168.112.33 -p 6000 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.33 -p 6000 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.33 -p 6000 config rewrite
redis-cli -c -h 192.168.112.33 -p 6001 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.33 -p 6001 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.33 -p 6001 config rewrite
redis-cli -c -h 192.168.112.34 -p 6002 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.34 -p 6002 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.34 -p 6002 config rewrite
redis-cli -c -h 192.168.112.34 -p 6003 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.34 -p 6003 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.34 -p 6003 config rewrite
redis-cli -c -h 192.168.112.36 -p 6004 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.36 -p 6004 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.36 -p 6004 config rewrite
redis-cli -c -h 192.168.112.36 -p 6005 config set masterauth Woread#2018
redis-cli -c -h 192.168.112.36 -p 6005 config set requirepass Woread#2018
redis-cli -c -h 192.168.112.36 -p 6005 config rewrite
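The commands above follow one pattern per node, so they can be generated with a loop (a sketch; this version only prints the commands, so the password is not sent anywhere until you pipe the output to sh):

```shell
PASS='Woread#2018'
NODES="192.168.112.33:6000 192.168.112.33:6001 192.168.112.34:6002 192.168.112.34:6003 192.168.112.36:6004 192.168.112.36:6005"
for node in $NODES; do
  host=${node%%:*}
  port=${node##*:}
  # three commands per node: masterauth, requirepass, then persist to disk
  for cmd in "config set masterauth $PASS" "config set requirepass $PASS" "config rewrite"; do
    echo "redis-cli -c -h $host -p $port $cmd"
  done
done
```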

Next: check the logs for errors! The cluster migration is complete (once practiced, the whole procedure takes under 10 minutes).

 

---------- Step summary (migrating via dump.rdb, with appendonly enabled):
Preparation:
1. Stop all nodes (replicas first, then masters), delete appendonly.aof and dump.rdb under every node, start all master nodes, and stop all replica nodes
2. Move all slots onto a single master
3. On the target cluster, set appendonly to no; if a password is configured, leave it off for now
Migration (with the application stopped):
4. Stop the application (meanwhile, update the Redis configuration of all applications: DingTalk, admin portal, search engine)
5. bgsave -- source (2 min)
6. Copy the dump.rdb backup across -- source, target (5 min)
7. Start the target cluster -- target (5 min)
8. bgrewriteaof -- target (1 min)
9. Stop the node -- target (1 min)
10. Change appendonly to yes -- target (5 min)
11. Start the cluster and check dbsize -- target (5 min)
12. Reshard the slots, start the replicas, set the password -- target (5 min)
Restore the applications:
13. Start all applications and verify -- application (10 min)

---------- Step summary (migrating via appendonly.aof):
Preparation:
1) Stop all nodes (replicas first, then masters), delete appendonly.aof and dump.rdb under every node, then start all master nodes and stop all replica nodes (if a password is set, best to leave it off for now)
2) Make sure appendonly is yes on every target node, and move all slots onto a single master
3) Stop the node that holds all the slots and wait for the appendonly.aof file
Migration (with the application stopped):
4) Stop the application (meanwhile, update the Redis configuration of all applications: DingTalk, admin portal, search engine)
5) Manually trigger bgrewriteaof on the source -- source (2 min)
6) Copy the appendonly.aof backup across -- source, target (5 min)
7) Start the target node -- target (5 min)
8) Reshard the slots, start the replicas, set the password -- target (5 min)
Restore the applications:
9) Start all applications and verify

-- On 192.168.112.33, as the redis user:
redis-cli -c -h 192.168.112.33 -p 8001 -a Woread#2018 bgrewriteaof
scp /home/redis/cluster3/8001/appendonly.aof redis@192.168.112.36:

-- On 192.168.112.36, as the redis user:
cd cluster6
cp ~/appendonly.aof dump/6004/
cd 6004
redis-server redis.conf

Rebalance the slots:
./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to bdd63e1f522d78eb1bb2574b2461a7302e14944a --slots 5461 --yes 192.168.112.36:6004
./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to be5b41880afac9c41b09e0d4e3be1ce1eb00959a --slots 5462 --yes 192.168.112.36:6004

./redis-trib.rb reshard --from bdd63e1f522d78eb1bb2574b2461a7302e14944a --to 1d204c88a14a76dc30abb05025135f7e850f2a5d --slots 5461 --yes 192.168.112.36:6004
./redis-trib.rb reshard --from be5b41880afac9c41b09e0d4e3be1ce1eb00959a --to 1d204c88a14a76dc30abb05025135f7e850f2a5d --slots 5462 --yes 192.168.112.36:6004

-- If a migration fails:
./redis-trib.rb fix 192.168.112.36:6004
./redis-trib.rb reshard --from 1d204c88a14a76dc30abb05025135f7e850f2a5d --to be5b41880afac9c41b09e0d4e3be1ce1eb00959a --slots 4398 --yes 192.168.112.36:6004
./redis-trib.rb check 192.168.112.36:6004

redis-cli -c -h 192.168.112.36 -p 6004 shutdown

redis-cli -c -h 192.168.112.33 -p 6000 bgrewriteaof
redis-cli -c -h 192.168.112.34 -p 6003 bgrewriteaof
redis-cli -c -h 192.168.112.36 -p 6004 bgrewriteaof

Slot 6524 failed to move:
redis@PRD-RDS-112-36:~/cluster6> ./redis-trib.rb check 192.168.112.36:6004
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb:443: warning: constant ::Fixnum is deprecated
>>> Performing Cluster Check (using node 192.168.112.36:6004)
M: 1d204c88a14a76dc30abb05025135f7e850f2a5d 192.168.112.36:6004
 slots:6524-16383 (9860 slots) master
 0 additional replica(s)
M: be5b41880afac9c41b09e0d4e3be1ce1eb00959a 192.168.112.34:6003
 slots:5461-6523 (1063 slots) master
 0 additional replica(s)
M: bdd63e1f522d78eb1bb2574b2461a7302e14944a 192.168.112.33:6000
 slots:0-5460 (5461 slots) master
 0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
[WARNING] Node 192.168.112.36:6004 has slots in migrating state (6524).
[WARNING] Node 192.168.112.34:6003 has slots in importing state (6524).
[WARNING] The following slots are open: 6524
>>> Check slots coverage...
[OK] All 16384 slots covered.

Let's see which keys live in this slot:

redis-cli -c -h 192.168.112.36 -p 6004 CLUSTER GETKEYSINSLOT 6524 100
There is only one key:
redis@PRD-RDS-112-36:~/cluster6> redis-cli -c -h 192.168.112.36 -p 6004 CLUSTER GETKEYSINSLOT 6524 100
1) "h:remote:cache"

Check the key's size:

redis-cli -c -h 192.168.112.36 -p 6004 debug object h:remote:cache
Value at:0x7fcb019881a0 refcount:1 encoding:hashtable serializedlength:46808489 lru:3191472 lru_seconds_idle:104
46808489 / 1024 / 1024 is about 44 MB. No wonder the migration failed; it must have timed out.
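serializedlength is reported in bytes, so the conversion is:

```shell
BYTES=46808489
echo "$((BYTES / 1024 / 1024)) MB"   # prints "44 MB" (integer division)
```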
Repair it:
./redis-trib.rb fix 192.168.112.36:6004
After the fix, slot 6524 moved to the 192.168.112.34:6003 node.
Check the key's size again:
redis@PRD-RDS-112-36:~/cluster6> redis-cli -c -h 192.168.112.34 -p 6003 debug object h:remote:cache
Value at:0x7f0b374c4d30 refcount:1 encoding:hashtable serializedlength:46808489 lru:3191731 lru_seconds_idle:95


Good, nothing was lost.

 

 

------- Tips ----------

1. When migrating a cluster, first move all the slots on both the source and the target onto a single node; the data migration then reduces to a migration between two standalone instances, which is much simpler.
2. When migrating, stop all replica nodes first. Their configuration can then be set directly to the final correct values, and it also prevents a failover (a replica being promoted to master) while a master's configuration file is being edited and the master restarted.


 

./redis-trib.rb reshard --from fc952a1b0942ee91fa878578bba8663e2662ee3a --to e7129d8973c1b81a0cf534bbf59fd47df2599d88 --slots 5461 --yes 192.168.112.33:8004
redis-cli -c -h 192.168.112.33 -p 8004 -a Woread#2018

Before creating the cluster, a file in the gem's redis library needs to be modified. Mine was a default installation, at the following path: /usr/lib/ruby/gems/1.8/gems/redis-3.2.1/lib/redis/client.rb. The modification is as follows:

linux-eqnz:~ # find / -name client.rb
/usr/local/redis-cluster/ruby-2.4.0/gems/xmlrpc-0.2.1/lib/xmlrpc/client.rb
/usr/local/lib/ruby/gems/2.4.0/gems/xmlrpc-0.2.1/lib/xmlrpc/client.rb
/usr/local/lib/ruby/gems/2.4.0/gems/redis-3.2.1/lib/redis/client.rb
/usr/lib64/ruby/1.8/xmlrpc/client.rb

As for adding a password to a cluster that is already running:

either set the password on every node with config set commands, or edit the password entries in every node's configuration file and restart it; this needs to be verified

 

Notes:
1. If you build the cluster with the redis-trib.rb tool, do not configure a password until the cluster is fully built; afterwards, set the password on each node with config set + config rewrite
2. If you set a password on the cluster, both requirepass and masterauth must be set; otherwise a master/replica failover will run into authorization problems (this can be simulated and observed in the logs)
3. Every node must use the same password, otherwise Redirected requests will fail

config set masterauth Woread#2018 
config set requirepass abc 
config rewrite

Surplus replicas will migrate to other masters.
So, in short, what should you know about replica migration?

During migration, the cluster tries to take a replica from the master that currently has the most replicas.
To use replica migration to improve availability, just add a few extra replicas to a single master (which master does not matter).
Replica migration is controlled by the cluster-migration-barrier configuration option.
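For reference, a minimal redis.conf fragment (the value shown is illustrative): a master will only give up one of its replicas to an orphaned master if it would still keep at least this many replicas afterwards.

```
# minimum replicas a master must keep before donating one to an orphaned master
cluster-migration-barrier 1
```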