RabbitMQ + HAProxy + Keepalived High-Availability Load Balancing + Mirrored Cluster Mode: Integrating the High-Performance HA Component Keepalived (03, Eliminating the Single Point of Failure)

| Server IP | hostname | Node role | Port | Management console | Account | Password |
| 192.168.0.115 | mq-01 | rabbitmq master | 5672 | http://192.168.0.115:15672 | guest | guest |
| 192.168.0.117 | mq-02 | rabbitmq slave | 5672 | http://192.168.0.117:15672 | guest | guest |
| 192.168.0.118 | mq-03 | rabbitmq slave | 5672 | http://192.168.0.118:15672 | guest | guest |
| 192.168.0.119 | hk-01 | haproxy+keepalived | 8100 | http://192.168.0.119:8100/rabbitmq-stats | admin | 123456 |
| 192.168.0.120 | hk-02 | haproxy+keepalived | 8100 | http://192.168.0.120:8100/rabbitmq-stats | admin | 123456 |

Managing keepalived with service/chkconfig:

| Command | Description |
| sudo service keepalived start | Start the keepalived service |
| sudo service keepalived stop | Stop the keepalived service |
| sudo service keepalived restart | Restart the keepalived service |
| sudo service keepalived status | Check the keepalived service status |
| sudo chkconfig keepalived on | Enable the keepalived service at boot |

Managing keepalived with systemctl:

| Command | Description |
| sudo systemctl start keepalived.service | Start the keepalived service |
| sudo systemctl stop keepalived.service | Stop the keepalived service |
| sudo systemctl restart keepalived.service | Restart the keepalived service |
| sudo systemctl status keepalived.service | Check the keepalived service status |
| sudo systemctl enable keepalived.service | Enable the keepalived service at boot |
| sudo systemctl disable keepalived.service | Disable the keepalived service at boot |

Continued from the previous post: RabbitMQ + HAProxy + Keepalived High-Availability Load Balancing + Mirrored Cluster Mode: Integrating the Load-Balancing Component HAProxy (02)


1. Introduction to Keepalived

Keepalived is a high-performance high-availability / hot-standby solution for servers. Its main purpose is to prevent single points of failure: combined with a reverse-proxy load balancer such as Nginx or HAProxy, it provides high availability for the web-facing tier. Keepalived is built on the VRRP protocol, which it uses to implement high availability (HA). VRRP (Virtual Router Redundancy Protocol) is a protocol for router redundancy: it groups two or more router devices into one virtual device and exposes one or more virtual router IPs to the outside.
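Once the cluster is up, an easy way to see VRRP at work is to watch its advertisements on the wire. A minimal hedged sketch (the interface name ens33 matches the configuration used later in this article):

# VRRP advertisements are IP protocol 112 sent to multicast 224.0.0.18;
# only the node currently acting as MASTER should be sending them.
tcpdump -i ens33 -n 'ip proto 112'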

2. Installing Keepalived

PS: download page:
http://www.keepalived.org/download.html

2.1. Install the required packages

yum install -y openssl openssl-devel

2.2. Download the keepalived package

wget https://www.keepalived.org/software/keepalived-2.0.20.tar.gz

2.3. Copy the keepalived package to the other node

To save time, copy the package to the 120 server as well:

scp keepalived-2.0.20.tar.gz root@192.168.0.120:/app/software

2.4. Extract, compile, and install keepalived

# Extract keepalived
tar -zxf keepalived-2.0.20.tar.gz -C /app/
# Compile and install keepalived
cd /app/keepalived-2.0.20/ && ./configure --prefix=/app/keepalived
make && make install
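To confirm the build landed under the --prefix chosen above, you can ask the freshly installed binary for its version (a minimal sanity check):

# Should print the keepalived 2.0.20 version banner
/app/keepalived/sbin/keepalived --version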

3. Installing keepalived as a Linux system service

Because we did not use keepalived's default installation prefix (the default is /usr/local), a few adjustments are needed after installation to register keepalived as a Linux system service.

3.1. Create the configuration directory and copy the keepalived configuration file

# Create the directory
mkdir /etc/keepalived
# Copy the keepalived configuration file
cp /app/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

3.2. Copy the keepalived service scripts

cp /app/keepalived-2.0.20/keepalived/etc/init.d/keepalived /etc/init.d/
cp /app/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# Create a symlink to the binary
ln -s /app/keepalived/sbin/keepalived /usr/sbin/
# The system may already have a default symlink, so remove it first
rm -f /sbin/keepalived
# Then link the binary from our custom installation path
ln -s /app/keepalived/sbin/keepalived /sbin/
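A quick hedged check that the service wiring points at the build under /app/keepalived:

# Both symlinks should resolve to /app/keepalived/sbin/keepalived
ls -l /sbin/keepalived /usr/sbin/keepalived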


3.3. Enable the keepalived service at boot

You can enable start-on-boot with systemctl enable keepalived.service. With that, the installation is complete!

systemctl enable keepalived.service
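To double-check that the unit really is registered for boot:

# Should print "enabled"
systemctl is-enabled keepalived.service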

4. Creating and modifying the Keepalived configuration file

PS: edit the keepalived.conf configuration file.

4.1. Create and edit keepalived.conf

vim /etc/keepalived/keepalived.conf

4.2. Configuration for the 119 server

! Configuration File for keepalived

global_defs {
    router_id hk-01                 ## string identifying this node, usually the hostname
}

## Monitor the haproxy process state, checked every 2 seconds
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"   ## path of the check script
    interval 2                      ## check interval in seconds
    weight -20                      ## reduce the priority by 20 when the check fails
}

vrrp_instance VI_1 {
    state MASTER                    ## MASTER on the primary node, BACKUP on the standby node
    interface ens33                 ## interface the virtual IP is bound to; must be the interface carrying this host's IP (ens33 here)
    virtual_router_id 119           ## virtual router ID (must be identical on master and backup)
    mcast_src_ip 192.168.0.119      ## this host's IP address
    priority 100                    ## priority (0-254)
    nopreempt
    advert_int 1                    ## advertisement interval; must match on both nodes, default 1s
    authentication {                ## authentication, must match on both nodes
        auth_type PASS
        auth_pass ncl@1234
    }

    track_script {
        chk_haproxy
    }

    virtual_ipaddress {
        192.168.0.112               ## virtual IP; more than one can be listed
    }
}
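Before starting the service it is worth syntax-checking the file. Recent keepalived releases (including the 2.0.20 built above) support a config-test mode; a minimal hedged sketch:

# Parse the configuration without starting the daemon; errors are reported and the exit code is non-zero
/app/keepalived/sbin/keepalived -t -f /etc/keepalived/keepalived.conf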

4.3. Copy and modify keepalived.conf for the second node

Copy this configuration file to the 120 server:

scp keepalived.conf root@192.168.0.120:/etc/keepalived/

Then make four changes on the 120 server:

Change 1: set router_id to the 120 server's hostname
Change 2: set mcast_src_ip to the 120 server's own IP address
Change 3: set priority to 90 (master node 100, backup node 90)
Change 4: change state from MASTER to BACKUP
(The same four edits can be applied in one go with sed, as in the sketch below.)
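A hedged convenience sketch for the 120 server, assuming the file was copied exactly as above (adjust the patterns if your values differ):

# Run on 192.168.0.120 after the scp above
sed -i \
    -e 's/router_id hk-01/router_id hk-02/' \
    -e 's/mcast_src_ip 192.168.0.119/mcast_src_ip 192.168.0.120/' \
    -e 's/priority 100/priority 90/' \
    -e 's/state MASTER/state BACKUP/' \
    /etc/keepalived/keepalived.conf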

Configuration on the 120 server:

! Configuration File for keepalived

global_defs {
    router_id hk-02                 ## string identifying this node, usually the hostname
}

## Monitor the haproxy process state, checked every 2 seconds
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"   ## path of the check script
    interval 2                      ## check interval in seconds
    weight -20                      ## reduce the priority by 20 when the check fails
}

vrrp_instance VI_1 {
    state BACKUP                    ## MASTER on the primary node, BACKUP on the standby node
    interface ens33                 ## interface the virtual IP is bound to; must be the interface carrying this host's IP (ens33 here)
    virtual_router_id 119           ## virtual router ID (must be identical on master and backup)
    mcast_src_ip 192.168.0.120      ## this host's IP address
    priority 90                     ## priority (0-254)
    nopreempt
    advert_int 1                    ## advertisement interval; must match on both nodes, default 1s
    authentication {                ## authentication, must match on both nodes
        auth_type PASS
        auth_pass ncl@1234
    }

    track_script {
        chk_haproxy
    }

    virtual_ipaddress {
        192.168.0.112               ## virtual IP; more than one can be listed
    }
}

4.4. Writing the check script

PS: create the file at /etc/keepalived/haproxy_check.sh (the content is identical on both the 119 and 120 nodes).

vim /etc/keepalived/haproxy_check.sh
#!/bin/bash
# Count running haproxy processes
COUNT=$(ps -C haproxy --no-header | wc -l)
if [ "$COUNT" -eq 0 ]; then
    # haproxy is down: try to restart it
    /app/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    # If haproxy still is not running, stop keepalived so the VIP fails over to the other node
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        killall keepalived
    fi
fi

4.5. What the script does:

  1. It checks the state of the haproxy process; keepalived runs it every 2 seconds.
  2. If haproxy is running, nothing else happens.
  3. If haproxy is not running, the script restarts it; if the restart fails, it kills keepalived so that the virtual IP fails over to the backup node.

4.6. Make the check script executable

PS: grant execute permission to haproxy_check.sh.

chmod +x /etc/keepalived/haproxy_check.sh
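A hedged way to exercise the script by hand before relying on it. Note that this briefly stops haproxy, so only do it while testing:

# Stop haproxy, run the check script, then confirm haproxy is back
killall haproxy
/etc/keepalived/haproxy_check.sh
sleep 2
ps -C haproxy --no-header    # should list haproxy processes again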

5. Starting the keepalived service

PS: before starting keepalived, first check that haproxy is running.

5.1. Check that haproxy is running

Checking from the command line:

ps -ef | grep haproxy



Checking from a browser:


The output above shows that haproxy is running normally.
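If you prefer a non-browser check, the HAProxy stats endpoint listed in the table at the top of this post can also be queried with curl (a hedged sketch; URL and credentials are the ones from that table):

# A 200 response carrying the stats page means haproxy is up and serving its stats URI
curl -s -u admin:123456 http://192.168.0.119:8100/rabbitmq-stats | head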

5.2. Start keepalived

PS: once both haproxy nodes are running, we can start the keepalived service.

# Start keepalived on both machines
service keepalived start

5.3. Check that keepalived is running

ps -ef | grep keepalived


6. Testing and verifying keepalived failover

6.1. Normal-operation test

Predicted result:

With keepalived running normally, the virtual IP sits on the master node (the 119 server).

Check the virtual IP on the 119 server:

ip a
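If the full ip a listing is noisy, a narrower hedged check for the VIP (interface name and VIP taken from the configuration above):

# Show only IPv4 addresses on ens33; the line appears only when the VIP is bound here
ip -4 addr show dev ens33 | grep 192.168.0.112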


The 120 server does not carry the virtual IP (unless keepalived on the master node 119 is stopped).
Check the virtual IP on the 120 server:

ip a


6.2. Master-node failure test

Predicted result:

1. With keepalived running normally, the virtual IP sits on the master node (the 119 server).
2. When the master node fails, the virtual IP floats to the BACKUP node.

Simulate the virtual IP floating over to the 120 server.

Stop the keepalived service on the master node 119:

service keepalived stop

Check again whether the virtual IP is still present on 119:

ip a



Then check whether the virtual IP has successfully floated over to 120:

ip a


The results of this simulated test match our prediction!
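For an end-to-end view from a separate client machine, you can probe the VIP continuously while stopping keepalived on 119. A hedged sketch (assumes a netcat that supports -z; port 8100 is the haproxy stats port from the table above, exposed by both hk nodes):

# Probe TCP 8100 on the VIP once a second; it should stay reachable across the failover
while true; do
    nc -z -w1 192.168.0.112 8100 && echo "$(date +%T) VIP reachable" || echo "$(date +%T) VIP down"
    sleep 1
done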

6.3. Master-node failure and recovery test

1. With keepalived running normally, the virtual IP sits on the master node (the 119 server).
2. When the master node fails, the virtual IP floats to the BACKUP node.
3. When the master node is repaired, the virtual IP returns to the master server, because we configured the priorities (weights) so that the master's is higher than the backup's.

Run the test a second time to verify that the priorities are set correctly:
119: priority 100
120: priority 90

Predicted result:
When keepalived on the master node 119 is started again, the virtual IP moves back to the master server.

Start the keepalived service on the master 119 server again and run the test:

[root@hk-01 keepalived]# service keepalived start
Starting keepalived (via systemctl):  [  OK  ]
[root@hk-01 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP>

Check the result on the 120 server:

[root@hk-02 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP>