Architecture
Master-Slave Replication
Among multiple Redis nodes there is exactly one master; all other nodes are replicas. The master serves both reads and writes, while replicas are read-only. As long as the network allows synchronization, the master keeps pushing its data changes to the replicas, so master and replicas stay in sync.
Problem: when the master goes down, a replica has to be promoted by hand.
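A minimal sketch of how a replica is attached to a master, assuming a master at 192.168.18.121:6379 (that port is a placeholder, not part of this tutorial's later setup):

# in the replica's redis.conf; the same effect is available at runtime
# via the REPLICAOF command
replicaof 192.168.18.121 6379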
Sentinel Mode
In plain master-slave mode, a master outage leaves the whole deployment without a writable node. Sentinel mode addresses this: when the master goes down, a standby replica is promoted to master.
Sentinel provides:
(1) Monitoring: continuously checks whether the master and replica servers are operating normally.
(2) Notification: when a monitored Redis server runs into trouble, Sentinel can notify an administrator or other applications through an API.
(3) Automatic failover: when the master stops working properly, Sentinel starts a failover and elects a new master (the election needs a majority vote, hence the usual odd number of sentinels).

Drawbacks:
1. If a replica goes offline, Sentinel does not perform failover for it, and clients connected to replicas get no updated view of which replicas are available.
2. No dynamic scaling: capacity cannot be grown online.
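A minimal sentinel.conf sketch for monitoring such a master; the master name mymaster, the address 192.168.18.121:6379, and the timeout values are placeholder assumptions. The trailing 2 is the quorum, which is why an odd number of sentinels (e.g. 3) is typically deployed:

# sentinel.conf (placeholder values)
sentinel monitor mymaster 192.168.18.121 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

Each sentinel is then started with: redis-sentinel sentinel.conf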
Cluster Mode

1. All nodes exchange state with each other over an optimized binary protocol (the cluster bus).
2. The cluster as a whole only goes down when more than half of its nodes are down.
3. Clients connect directly to Redis nodes with no intermediate proxy layer; a client does not need to connect to every node in the cluster, any single reachable node will do.
4. redis-cluster maps all physical nodes onto the [0-16383] slots (hash slots); the cluster maintains the node <-> slot <-> value mapping.
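The slot for a key is CRC16(key) mod 16384, and any node can report it; the key name below is just an illustration:

192.168.18.121:7001> CLUSTER KEYSLOT user:1000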
Installation
Redis official site: https://redis.io/
From the official site, download the redis-6.2.1.tar.gz package, upload it to the server, and extract it. Since the cluster requires three masters and three replicas, at least six nodes are needed (this walkthrough runs all six instances on one host).
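For example (download.redis.io is the official release host; the /opt/module layout matches the paths used later):

wget https://download.redis.io/releases/redis-6.2.1.tar.gz
tar -zxvf redis-6.2.1.tar.gz -C /opt/module
cd /opt/module/redis-6.2.1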
A Ruby environment is only needed by the legacy redis-trib.rb tool; since Redis 5.0 the cluster commands are built into redis-cli, so for 6.2.1 this step is optional:
yum install ruby
yum install rubygems
CentOS ships GCC 4.8.5 by default, while newer Redis releases need a more recent GCC to compile:
yum install centos-release-scl scl-utils-build -y
yum install -y devtoolset-8-toolchain
scl enable devtoolset-8 bash
Check the gcc version:
[root@c701 redis-6.2.1]# gcc --version
gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Enter the /opt/module/redis-6.2.1 directory and run:
make
This fails with:
[root@c701 redis-6.2.1]# make
cd src && make all
make[1]: Entering directory '/opt/module/redis-6.2.1/src'
    CC adlist.o
In file included from adlist.c:34:
zmalloc.h:50:10: fatal error: jemalloc/jemalloc.h: No such file or directory
 #include <jemalloc/jemalloc.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:364: adlist.o] Error 1
make[1]: Leaving directory '/opt/module/redis-6.2.1/src'
make: *** [Makefile:6: all] Error 2
Build against libc malloc instead:
make MALLOC=libc
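If the failed run left stale build artifacts behind, it may help to clean before retrying (a common remedy, not specific to this setup):

make distclean
make MALLOC=libc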
Configure redis.conf
# disable protected mode so other nodes and clients can connect
protected-mode no
# port
port 7000
# enable cluster mode
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
daemonize yes
pidfile /var/run/redis_7000.pid
logfile "7000.log"
dir /opt/data/redis_dir/7000/dat
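One way to stamp out the six per-port directories and configs is a small shell loop. It assumes the config above was saved as a template at /opt/data/redis_dir/redis-template.conf (a hypothetical path), and naively rewrites every occurrence of 7000:

# generate /opt/data/redis_dir/<port>/redis.conf for all six nodes
for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p /opt/data/redis_dir/${port}
  sed "s/7000/${port}/g" /opt/data/redis_dir/redis-template.conf \
      > /opt/data/redis_dir/${port}/redis.conf
done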
Start the Redis service for each node:
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7000/redis.conf
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7001/redis.conf
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7002/redis.conf
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7003/redis.conf
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7004/redis.conf
/opt/module/redis-6.2.1/src/redis-server /opt/data/redis_dir/7005/redis.conf
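To confirm that all six instances are up and listening (standard Linux tooling):

ps -ef | grep redis-server
ss -tlnp | grep 700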
Create the cluster
/opt/module/redis-6.2.1/src/redis-cli --cluster create 192.168.18.121:7001 192.168.18.121:7002 192.168.18.121:7003 192.168.18.121:7004 192.168.18.121:7005 192.168.18.121:7000 --cluster-replicas 1
Type yes when prompted:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.18.121:7005 to 192.168.18.121:7001
Adding replica 192.168.18.121:7000 to 192.168.18.121:7002
Adding replica 192.168.18.121:7004 to 192.168.18.121:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: f704dc96b7b7b61f53932dc146a3e502a06c809c 192.168.18.121:7001
   slots:[0-5460] (5461 slots) master
M: ca87505b289d4bc4e5c44f490a8f11d24e1c22bc 192.168.18.121:7002
   slots:[5461-10922] (5462 slots) master
M: 8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e 192.168.18.121:7003
   slots:[10923-16383] (5461 slots) master
S: bf6b5253f29dd31377a1a6f2c417a8da41f33396 192.168.18.121:7004
   replicates 8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e
S: 68e690f1e61753f6f192fc9bb1799e7d687d9ec0 192.168.18.121:7005
   replicates f704dc96b7b7b61f53932dc146a3e502a06c809c
S: c7b34accfae31c2133eb123e228ab8b55b7cbc7d 192.168.18.121:7000
   replicates ca87505b289d4bc4e5c44f490a8f11d24e1c22bc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.18.121:7001)
M: f704dc96b7b7b61f53932dc146a3e502a06c809c 192.168.18.121:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e 192.168.18.121:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: bf6b5253f29dd31377a1a6f2c417a8da41f33396 192.168.18.121:7004
   slots: (0 slots) slave
   replicates 8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e
S: c7b34accfae31c2133eb123e228ab8b55b7cbc7d 192.168.18.121:7000
   slots: (0 slots) slave
   replicates ca87505b289d4bc4e5c44f490a8f11d24e1c22bc
M: ca87505b289d4bc4e5c44f490a8f11d24e1c22bc 192.168.18.121:7002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 68e690f1e61753f6f192fc9bb1799e7d687d9ec0 192.168.18.121:7005
   slots: (0 slots) slave
   replicates f704dc96b7b7b61f53932dc146a3e502a06c809c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connect to the cluster (-c enables cluster mode, i.e. following redirections):
/opt/module/redis-6.2.1/src/redis-cli -c -h 192.168.18.121 -p 7001
View cluster node information:
192.168.18.121:7001> cluster nodes
8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e 192.168.18.121:7003@17003 master - 0 1616088682562 3 connected 10923-16383
bf6b5253f29dd31377a1a6f2c417a8da41f33396 192.168.18.121:7004@17004 slave 8fbf7e4d806dad189a5fbcc9157c134d6ca93e1e 0 1616088681458 3 connected
c7b34accfae31c2133eb123e228ab8b55b7cbc7d 192.168.18.121:7000@17000 slave ca87505b289d4bc4e5c44f490a8f11d24e1c22bc 0 1616088681558 2 connected
f704dc96b7b7b61f53932dc146a3e502a06c809c 192.168.18.121:7001@17001 myself,master - 0 1616088682000 1 connected 0-5460
ca87505b289d4bc4e5c44f490a8f11d24e1c22bc 192.168.18.121:7002@17002 master - 0 1616088682462 2 connected 5461-10922
68e690f1e61753f6f192fc9bb1799e7d687d9ec0 192.168.18.121:7005@17005 slave f704dc96b7b7b61f53932dc146a3e502a06c809c 0 1616088681000 1 connected
View cluster status:
192.168.18.121:7001> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:702
cluster_stats_messages_pong_sent:717
cluster_stats_messages_sent:1419
cluster_stats_messages_ping_received:712
cluster_stats_messages_pong_received:702
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1419
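As a final sanity check, write and read a key through the cluster; because redis-cli was started with -c, it transparently follows a MOVED redirection (printing a "-> Redirected to slot ..." line) whenever the key's slot lives on another master. The key name is arbitrary:

192.168.18.121:7001> set user:1000 hello
192.168.18.121:7001> get user:1000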