In real-world environments, Keepalived is often combined with LVS, Nginx, HAProxy, MySQL, and other applications to build highly available clusters, for example for front-end web services. This post walks through a Keepalived + LVS deployment example.

 

Keepalived + LVS (LVS/DR mode) deployment example

The overall topology is shown below:


[Figure: overall topology of the Keepalived + LVS (DR mode) deployment]

I. Pre-deployment notes:

(1) OS version: CentOS 6.6 (64-bit)

(2) Roles and IP addresses:

Role            Network/IP information
Client (CIP)    192.168.0.242/24
Lvs_Master_DR   eth0: 172.51.96.105/24, eth1: 192.168.0.105/24
Lvs_Backup_DR   eth0: 172.51.96.119/24, eth1: 192.168.0.119/24
RS_RIP1         eth0: 172.51.96.235/24, eth1: 192.168.0.235/24
RS_RIP2         eth0: 172.51.96.236/24, eth1: 192.168.0.236/24
LVS_VIP         192.168.0.88/32

(3) Middleware versions

keepalived: keepalived-1.2.15

httpd: httpd-2.2 (provides the HTTP service)

ipvsadm: ipvsadm-1.2.1


II. Deployment:

Configuration on the load balancers

(1) On both Lvs_Master_DR and Lvs_Backup_DR, install the dependencies required by keepalived and ipvsadm:

# yum install openssl-devel popt-devel libnl-devel kernel-devel  -y

(2) On both Lvs_Master_DR and Lvs_Backup_DR, install keepalived and ipvsadm, as follows:

1. Install ipvsadm

# yum install  ipvsadm  -y

2. Build and install keepalived from source

1.1 Obtain the keepalived source

The keepalived source tarball can be downloaded from the official site, http://www.keepalived.org/, where the documentation (usage and configuration reference) is also available. The version used here is 1.2.15.

# cd ~
# wget http://www.keepalived.org/software/keepalived-1.2.15.tar.gz

1.2 Build and install keepalived

<--Build and install keepalived-->

# ln -s /usr/src/kernels/2.6.32-573.18.1.el6.x86_64/ /usr/src/linux
# tar zxvf keepalived-1.2.15.tar.gz -C /usr/local/src
# cd /usr/local/src/keepalived-1.2.15/
# ./configure \
  --prefix=/usr/local/keepalived \
  --with-kernel-dir=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
# make && make install

<--Adjust the keepalived installation paths-->

<---Copy the keepalived binary and sysconfig file--->
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

<---Register the keepalived init script as a system service--->

# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# chkconfig --add keepalived
# chkconfig --level 2345 keepalived on

<---Create the keepalived configuration directory--->
# mkdir -p /etc/keepalived
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived

Notes:
1. After installation, the relevant paths are: installation directory /usr/local/keepalived, configuration directory /etc/keepalived/.
2. After installation, the init script must be copied to /etc/init.d/.
3. Be sure to perform the steps above, otherwise the keepalived service may fail to start.

1.3 Start the keepalived service

# service keepalived start

(3) Configure the keepalived instance on Lvs_Master_DR and Lvs_Backup_DR respectively, as shown below:

1. Lvs_Master_DR configuration (master director)

vim /etc/keepalived/keepalived.conf

The contents are as follows:

! Configuration File for keepalived

global_defs {
   notification_email {
       admin@bluemobi.cn
   }
   notification_email_from  lvs_admin@bluemobi.cn
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id DR_MASTER
}

vrrp_script check_nginx {
   script "/etc/keepalived/scripts/check_nginx.sh"
   interval 3
   weight -5
}

############################################################################################
# vrrp_script check_nginx {                            # defines a check script named check_nginx
#    script "/etc/keepalived/scripts/check_nginx.sh"   # path of the script to execute
#    interval 3                                        # run the check every 3 seconds
#    weight -5                                         # on failure, lower the VRRP priority by 5
# }
############################################################################################

vrrp_instance http {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
############################################################################################
# lvs_sync_daemon_interface eth0: interface used by the LVS sync daemon, similar to an HA heartbeat link
############################################################################################
    dont_track_primary
    nopreempt
############################################################################################
# nopreempt: do not preempt the current master
############################################################################################
    track_interface {
    eth0
    eth1
    }
############################################################################################
# track_interface { }: interfaces to monitor
############################################################################################
    mcast_src_ip 172.51.96.105
############################################################################################
# mcast_src_ip: source IP address used for VRRP multicast advertisements
############################################################################################
    garp_master_delay 6
    virtual_router_id 60
    priority 110
    advert_int 1

    authentication {
    auth_type PASS
    auth_pass 1234
    }

    virtual_ipaddress {
    192.168.0.88/32 brd 192.168.0.88 dev eth0 label eth0:1
    }

    virtual_routes {
    192.168.0.88/32 dev eth1
    }

    track_script {
    check_nginx
    }

    notify_master /etc/keepalived/scripts/state_master.sh
    notify_backup /etc/keepalived/scripts/state_backup.sh
    notify_fault  /etc/keepalived/scripts/state_fault.sh
}
############################################################################################
# notify_master : script to run when this director becomes the master server
# notify_backup : script to run when this director becomes the backup server
# notify_fault  : script to run when this director enters the fault state (e.g. a NIC goes down)
# notify_stop   : script to run when the service stops (e.g. keepalived goes down)
############################################################################################

virtual_server 192.168.0.88 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    persistence_timeout 30
    nat_mask 255.255.255.0
    protocol TCP

real_server 192.168.0.235 80 {
    weight 1
    notify_down /etc/keepalived/scripts/rs_state.sh
###########################################################################################
# notify_down: script to run when the health check of this real server fails
# notify_up  : script to run when the health check of this real server succeeds
###########################################################################################
    HTTP_GET   {
         url {
             path /info.php
             status_code 200
             }
    connect_timeout   3
    nb_get_retry 3
    delay_before_retry 3
    }
  }

real_server 192.168.0.236 80 {
    weight 1
    notify_down /etc/keepalived/scripts/rs_state.sh

    HTTP_GET   {
         url {
             path /info.php
             status_code 200
             }
    connect_timeout   3
    nb_get_retry 3
    delay_before_retry 3
    }
  }

}

2. Lvs_Backup_DR configuration (backup director); it differs from the master configuration mainly in router_id, mcast_src_ip, and the lower priority

vim /etc/keepalived/keepalived.conf

The contents are as follows:

! Configuration File for keepalived

global_defs {
   notification_email {
       admin@bluemobi.cn
   }
   notification_email_from  lvs_admin@bluemobi.cn
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id DR_BACKUP
}

vrrp_script check_nginx {
   script "/etc/keepalived/scripts/check_nginx.sh"
   interval 3
   weight -5
}

vrrp_instance http {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    dont_track_primary
    nopreempt

    track_interface {
    eth0
    eth1
    }

    mcast_src_ip 172.51.96.119
    garp_master_delay 6
    virtual_router_id 60
    priority 109
    advert_int 1

    authentication {
    auth_type PASS
    auth_pass 1234
    }

    virtual_ipaddress {
    192.168.0.88/32 brd 192.168.0.88 dev eth0 label eth0:1
    }

    virtual_routes {
    192.168.0.88/32 dev eth1
    }

    track_script {
    check_nginx 
    }

    notify_master /etc/keepalived/scripts/state_master.sh
    notify_backup /etc/keepalived/scripts/state_backup.sh
    notify_fault  /etc/keepalived/scripts/state_fault.sh
}
                 
virtual_server 192.168.0.88 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    persistence_timeout 30
    nat_mask 255.255.255.0
    protocol TCP

real_server 192.168.0.235 80 {
    weight 1
    notify_down /etc/keepalived/scripts/rs_state.sh

    HTTP_GET   {
         url {
             path /info.php
             status_code 200
             }
    connect_timeout   3
    nb_get_retry 3
    delay_before_retry 3
    }
  }

real_server 192.168.0.236 80 {
    weight 1
    notify_down /etc/keepalived/scripts/rs_state.sh

    HTTP_GET   {
         url {
             path /info.php
             status_code 200
             }
    connect_timeout   3
    nb_get_retry 3
    delay_before_retry 3
    }
  }
}

3. Create the following scripts on both the master and backup directors (remember to make them executable, e.g. chmod +x /etc/keepalived/scripts/*.sh):

i. Log the transition time when the director becomes the master server

vim /etc/keepalived/scripts/state_master.sh

The code is as follows:

#!/bin/bash
host=CN-SH-DR01      # set to the local hostname
LOGFILE="/var/log/keepalived-state.log"
echo >> $LOGFILE
echo "[Master]" >> $LOGFILE
date >> $LOGFILE
echo "The ${host}  Starting to become master server...." >> $LOGFILE 2>&1

echo "Please run the “ipvsadm -Ln”  check the keepalived state ..." >> $LOGFILE
echo ".........................................................................!">> $LOGFILE
echo >>$LOGFILE

ii. Log the transition time when the director becomes the backup server

vim /etc/keepalived/scripts/state_backup.sh

The code is as follows:

#!/bin/bash
host=CN-SH-DR01     # set to the local hostname
LOGFILE="/var/log/keepalived-state.log"
echo >> $LOGFILE
echo "[Backup]" >> $LOGFILE
date >> $LOGFILE
echo "The ${host}  Starting to become Backup server...." >> $LOGFILE 2>&1

echo "Please run the “ipvsadm -Ln”  check the state ..." >> $LOGFILE
echo "........................................................................!">> $LOGFILE
echo  >> $LOGFILE

iii. Log the failure time when the director enters the fault state

vim /etc/keepalived/scripts/state_fault.sh

The code is as follows:

#!/bin/bash
host=CN-SH-DR01      # set to the local hostname
LOGFILE="/var/log/keepalived-state.log"
echo >> $LOGFILE
echo "[Fault]" >> $LOGFILE
date >> $LOGFILE
echo "The ${host}  has entered the FAULT state...." >> $LOGFILE 2>&1
echo "Please check the server state ..." >> $LOGFILE
echo "........................................................................!">> $LOGFILE
echo  >> $LOGFILE
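Both keepalived configurations above also reference /etc/keepalived/scripts/rs_state.sh in their notify_down hooks, but that script does not appear in the post. A minimal sketch in the same style as the state scripts above (the "[RS Down]" header and the message wording are assumptions):

```shell
#!/bin/bash
# Hypothetical /etc/keepalived/scripts/rs_state.sh: called by keepalived's
# notify_down hook when a real server fails its health check.
LOGFILE="/var/log/keepalived-state.log"
echo >> $LOGFILE
echo "[RS Down]" >> $LOGFILE
date >> $LOGFILE
echo "A real server failed its health check; run 'ipvsadm -Ln' to inspect the pool." >> $LOGFILE
```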

iv. Service health-check script

The script referenced by vrrp_script above, /etc/keepalived/scripts/check_nginx.sh, covers the case where nginx and keepalived run on the same server: when nginx becomes unavailable, the script tries to restart it, and if that fails it restarts keepalived so that the VIP can fail over.

#!/bin/bash
#nginx="/usr/local/nginx/sbin/nginx"

# Count running nginx processes; if none are found, try to start nginx.
PID=`ps -C nginx --no-heading | wc -l`
if [ "${PID}" = "0" ]; then
    /etc/init.d/nginx start
    sleep 3
    # If nginx still is not running, restart keepalived to trigger a failover.
    LOCK=`ps -C nginx --no-heading | wc -l`
    if [ "${LOCK}" = "0" ]; then
        /etc/init.d/keepalived restart
    fi
fi

4. Restart the keepalived service to load the new configuration

# service keepalived restart

Configuration on the back-end real servers

(1) On each RS (RIP1 and RIP2), create the following shell script:

vim /etc/init.d/lvs-dr

The script contents are as follows:

#!/bin/sh
#
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#   available server built on a cluster of real servers, with the load
#   balancer running on Linux.
# description: start LVS of DR-RIP
### END INIT INFO
LOCK=/var/lock/ipvsadm.lock
VIP=192.168.0.88
. /etc/rc.d/init.d/functions
start() {
     PID=`ifconfig | grep lo:0 | wc -l`
     if [ $PID -ne 0 ];
     then
         echo "The LVS-DR-RIP Server is already running !"
     else
         /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
         /sbin/route add -host $VIP dev lo:0
         echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
         echo "1" >/proc/sys/net/ipv4/conf/eth1/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/eth1/arp_announce
         echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
         /bin/touch $LOCK
         echo "starting LVS-DR-RIP server is ok !"
     fi
}

stop() {
         /sbin/route del -host $VIP dev lo:0
         /sbin/ifconfig lo:0 down  >/dev/null
         echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
         echo "0" >/proc/sys/net/ipv4/conf/eth1/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/eth1/arp_announce
         echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
         rm -rf $LOCK
         echo "stopping LVS-DR-RIP server is ok !"
}

status() {
     if [ -e $LOCK ];
     then
        echo "The LVS-DR-RIP Server is already running !"
     else
        echo "The LVS-DR-RIP Server is not running !"
     fi
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
        start
        ;;
  status)
        status
        ;;
  *)
        echo "Usage: $1 {start|stop|restart|status}"
        exit 1
esac
exit 0

Note: for ARP suppression, it is best to also apply the arp_ignore/arp_announce settings to the physical NIC on the RS that is directly connected to the director, as done for eth1 above.

Make the script executable and start it:

# chmod 777 /etc/init.d/lvs-dr

# service lvs-dr start
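Once the script has started, you can confirm on the RS that the VIP is bound to lo:0 and that ARP suppression is in effect. A quick check (the VIP value comes from the table above):

```shell
# On a configured RS, this should list an extra "inet 192.168.0.88/32"
# entry labelled lo:0 on the loopback device.
ip addr show dev lo

# ARP suppression flags set by the lvs-dr script
# (expected values on a configured RS: 1 and 2).
cat /proc/sys/net/ipv4/conf/all/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_announce
```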

(2) Install the httpd service on each RIP and create a test page. The test pages on each RIP are shown below:
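The HTTP_GET health checks in the keepalived configuration probe /info.php on each RS, so that page must exist under the web root. A minimal sketch for creating it (the DocumentRoot path and the page wording are assumptions, not from the original post):

```shell
# On each real server, httpd and PHP would be installed first, e.g.:
#   yum install httpd php -y && service httpd start
# Then create the page probed by keepalived's HTTP_GET block.
create_test_page() {
    local docroot="$1"    # e.g. /var/www/html, Apache's default DocumentRoot
    cat > "$docroot/info.php" <<'EOF'
<?php echo "RS test page on " . php_uname('n') . "\n"; ?>
EOF
}
```

Usage on an RS: create_test_page /var/www/html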

Test page on RIP1 (192.168.0.235):

[Figure: screenshot of the RIP1 test page]

Test page on RIP2 (192.168.0.236):

[Figure: screenshot of the RIP2 test page]


III. Testing and verification:

The VIP election can be observed in /var/log/messages, as shown below:

[Figure: /var/log/messages on the master director during the VRRP election]

[Figure: /var/log/messages on the backup director during the VRRP election]

As shown above, cn-sh-sq-web01, whose priority (110) is higher than cn-sh-sq-web02's (109), wins the election and becomes master. The VIP can then be observed on cn-sh-sq-web01, as shown below:

[Figure: VIP 192.168.0.88 bound on the master director]

The logging scripts also record the state changes:

[root@master-dr ~]# tail -f /var/log/keepalived-state.log 
[Backup]
Wed Mar  9 21:56:25 CST 2016
The CN-SH-DR01  Starting to become Backup server....
Please run the “ipvsadm -Ln”  check the state ...
...............................................................................!

[Master]
Wed Mar  9 21:56:28 CST 2016
The CN-SH-DR01  Starting to become master server....
Please run the “ipvsadm -Ln”  check the keepalived state ...
...............................................................................!

Now, if we access http://vip from the CIP, we can see the pages of the two back-end real servers served in turn (round robin), as shown below:

[Figure: client requests alternating between the RIP1 and RIP2 test pages]

Running "ipvsadm -Ln -c" on master-dr shows the connection entries:

[Figure: output of ipvsadm -Ln -c on master-dr]
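The round-robin behaviour can also be probed from the command line. A small sketch (the probe_vip helper is an illustration, not from the original post); note that persistence_timeout 30 in the configuration pins a given client to one RS for 30 seconds, so consecutive requests from a single machine may all hit the same real server:

```shell
# Fetch a URL repeatedly; with lb_algo rr and no persistence the responses
# would alternate between the two real servers.
probe_vip() {
    local url="$1" count="$2"
    for _ in $(seq 1 "$count"); do
        curl -s "$url"
    done
}
# Example against the VIP from this deployment:
#   probe_vip http://192.168.0.88/info.php 6
```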

Suppose master-dr now fails, for example its NIC goes down, as shown below:

[Figure: taking down the NIC on master-dr]

[Figure: /var/log/messages showing the backup director taking over the VIP]

The failure is recorded in the state log, which can be watched with "tail -f /var/log/keepalived-state.log":

[root@master-dr ~]# tail -f /var/log/keepalived-state.log 
The CN-SH-DR01  Starting to become master server....
Please run the “ipvsadm -Ln”  check the keepalived state ...
...............................................................................!

[Fault]
Wed Mar  9 22:11:48 CST 2016
The CN-SH-DR01  has entered the FAULT state....
Please check the server state ...
...............................................................................!

The VIP has moved to backup-dr, and the client can still reach the service, as shown below:

Connection entries on backup-dr:

[Figure: ipvsadm connection entries on backup-dr]

Client access:

[Figure: the client still reaching the test pages through the VIP]


This completes the keepalived + LVS deployment example.


Summary: a keepalived + LVS deployment can use one of three modes: NAT (the simplest), DR (the most widely used), and TUN (suited to cross-region or cross-datacenter setups). This post covered only DR mode; for the other modes, please refer to the LVS application article.