Role assignment:

10.40.42.103  twemproxy  keepalived-backup

10.40.42.127  twemproxy  keepalived-master

10.40.42.128  VIP (pick an address that is unused on the local network; verify with ping first)

Deploying twemproxy:

twemproxy is already deployed on 10.40.42.103 and 10.40.42.127.

[root@master twemproxy]# cat nutcracker.yml
redis_test:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  redis_auth: mldnjava
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - 10.40.42.105:6379:1
    - 10.40.42.127:6379:1
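Before starting the proxy it is worth letting nutcracker parse the file itself: its -t (--test-conf) flag checks the configuration for syntax errors and exits without serving traffic. A minimal sketch, guarded so it degrades gracefully on a host where the binary is not on PATH (here it lives under /usr/local/twemproxy/sbin):

```shell
# Check the YAML before (re)starting the proxy.
CONF=/usr/local/twemproxy/nutcracker.yml
if command -v nutcracker > /dev/null 2>&1; then
    # -t / --test-conf: parse the configuration and exit.
    nutcracker -t -c "$CONF"
    status=$?
else
    echo "nutcracker not found on PATH; skipping syntax check"
    status=0
fi
```

A bad indent in the servers: list (like a mis-aligned "- 10.40.42.127:6379:1") is exactly the kind of error this catches before keepalived ever tries to keep the proxy alive.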

The other node:

[root@node2 twemproxy]# cat nutcracker.yml
redis_test:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  #redis_auth: mldnjava
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - 10.40.42.105:6379:1
    - 10.40.42.127:6379:1
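Both proxies answer behind the same VIP, so after a failover clients hit whichever nutcracker.yml happens to be live; the two files should therefore match. Note that redis_auth is enabled on master but commented out on node2 above, which would change the client-side auth requirement after a failover. A quick diff catches such drift; the two here-docs below stand in for the real files so the check itself can be shown:

```shell
# Sketch: detect config drift between the two proxies.
# /tmp/proxy1.yml and /tmp/proxy2.yml stand in for the nodes' real files.
cat > /tmp/proxy1.yml <<'EOF'
redis: true
redis_auth: mldnjava
EOF
cat > /tmp/proxy2.yml <<'EOF'
redis: true
#redis_auth: mldnjava
EOF
if diff -q /tmp/proxy1.yml /tmp/proxy2.yml > /dev/null; then
    same=yes
else
    same=no     # the redis_auth lines differ, as in the configs above
fi
echo "$same"
```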

Deploying keepalived:

Install via yum on both nodes:

yum -y install keepalived

node2's keepalived.conf (node2 acts as the keepalived master):

[root@node2 twemproxy]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_port {
     script "/usr/local/twemproxy/chk_nutcracker.sh"
     interval 10
     weight -1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        10.40.42.128
    }

    track_script {
        chk_port
    }

}

master's keepalived.conf (master acts as the keepalived backup):

[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_port {
     script "/usr/local/twemproxy/chk_nutcracker.sh"
     interval 10
     weight -1
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        10.40.42.128
    }

    track_script {
        chk_port
    }

}
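Why weight -1 with equal priorities on both nodes produces a failover: when chk_port fails on the master, keepalived lowers that node's effective priority by the script weight, so the backup, still advertising 100, wins the next election and claims the VIP. The arithmetic (assumed keepalived track_script behaviour, not output from a real node):

```shell
# Effective VRRP priority after the master's chk_port script fails.
master_prio=100
backup_prio=100
weight=-1                      # from "weight -1" in vrrp_script chk_port
effective=$((master_prio + weight))
if [ "$effective" -lt "$backup_prio" ]; then
    holder=backup              # backup preempts: 100 beats 99
else
    holder=master
fi
echo "VIP holder after failure: $holder"
```

This is also why the two nodes deliberately share priority 100: with a weight of only -1, a backup configured at, say, 90 would never overtake a degraded master at 99.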

Contents of chk_nutcracker.sh:

The setup actually works without this script block too; for a quick test you can simply stop the keepalived service.

 

[root@master twemproxy]# cat chk_nutcracker.sh
#!/bin/bash
# Exit 0 while nutcracker is alive; otherwise try one restart before failing.
ps -C nutcracker > /dev/null 2>&1
if [[ $? -eq 0 ]];then
     exit 0
else
     /usr/local/twemproxy/sbin/nutcracker -c /usr/local/twemproxy/nutcracker.yml -o /usr/local/twemproxy/twemproxy.log -d
     sleep 3
     ps -C nutcracker > /dev/null 2>&1
     if [[ $? -eq 0 ]];then
          exit 0
     else
          exit 1
     fi
fi
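The flow of the script above, with the process probe and the restart factored into functions so the check-then-restart-once logic can be exercised on its own. is_running and start_proxy are stubs here; on a real node they would be `ps -C nutcracker` and the nutcracker start line:

```shell
# Sketch of the check-then-restart-once flow from chk_nutcracker.sh.
attempts=0
is_running() {
    # Stub: pretend the proxy is down until one restart has happened.
    # Real node: ps -C nutcracker > /dev/null 2>&1
    [ "$attempts" -ge 1 ]
}
start_proxy() {
    # Real node: /usr/local/twemproxy/sbin/nutcracker -c ... -o ... -d
    attempts=$((attempts + 1))
}

if is_running; then
    result=ok                  # healthy: keepalived keeps full priority
else
    start_proxy
    sleep 1
    if is_running; then
        result=recovered       # restart brought it back: report healthy
    else
        result=dead            # still down: keepalived subtracts the weight
    fi
fi
echo "$result"
```

Only the final exit status matters to keepalived: 0 keeps the node's priority intact, non-zero applies the -1 weight and lets the VIP move.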

Start the keepalived service:

systemctl start keepalived

Check the VIP:

[root@node2 twemproxy]# ip a | grep 128
    inet6 ::1/128 scope host
    inet 10.40.42.128/32 scope global ens192
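To pull just the address out of that ip a output (handy in other scripts), a small awk filter works; the sample text below stands in for the live command so the parsing can be shown on its own:

```shell
# Extract the IPv4 VIP from `ip a`-style output. On a live node the
# sample variable would be: sample=$(ip a show dev ens192)
sample='    inet6 ::1/128 scope host
    inet 10.40.42.128/32 scope global ens192'
vip=$(printf '%s\n' "$sample" | awk '/inet 10\.40\.42\.128/ {sub(/\/.*/, "", $2); print $2}')
echo "$vip"
```

Matching on "inet " plus the address also skips the unrelated inet6 ::1/128 loopback line that a bare `grep 128` drags in.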


Testing VIP failover:

Comment out the line in the script that restarts the service:


 

Stop the twemproxy service and the VIP fails over.

The VIP is no longer present on the current host:

# ip a | grep 128


The backup now holds the VIP:

# ip a | grep 128


Restart the twemproxy service on the keepalived master:

The twemproxy service comes back up, but the VIP does not move back.


Restart the keepalived service:

After restarting keepalived, the VIP floats back:

[root@node2 twemproxy]# systemctl restart keepalived
[root@node2 twemproxy]# ip a | grep 128
    inet6 ::1/128 scope host
    inet 10.40.42.128/32 scope global ens192
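The VIP comes back here because the restarted node re-advertises priority 100 and preempts the current holder. If fail-back is not wanted (to avoid the VIP bouncing while a node is flapping), keepalived supports nopreempt; a sketch of the relevant instance fragment, assuming both nodes are reconfigured accordingly:

```
vrrp_instance VI_1 {
    state BACKUP      # nopreempt is only valid when the initial state is BACKUP
    nopreempt         # do not take the VIP back from the current holder
    ...               # remaining settings as in the configs above
}
```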


Log in to the Redis cluster through the VIP:

[root@node2 bin]# ./redis-cli -h 10.40.42.128 -p 22121
10.40.42.128:22121> info
Error: Server closed the connection
10.40.42.128:22121> get mew
"happy"

The `info` error is expected: twemproxy proxies only a subset of the Redis command set (essentially key-oriented commands such as GET/SET/MGET) and closes the client connection when it receives a command it cannot route, such as INFO. Ordinary reads and writes through the VIP work, as the `get mew` above shows.
