Building a High-Performance, Highly Available Load Balancer with LVS + Keepalived

 


Commercial load balancers such as F5 are expensive. If you are a startup Internet company, how do you cut costs and avoid unnecessary spending while still getting the high performance and high availability of commercial hardware? Is there a good, scalable load-balancing solution? There is: with LVS + Keepalived, an architecture built entirely on open-source software, you can run a load-balanced, highly available service.

 

I. Introduction to LVS and Keepalived

1. LVS

LVS stands for Linux Virtual Server, a virtual server cluster system. The project was started in May 1998 by Dr. Wensong Zhang (章文嵩) and is one of the earliest free-software projects from China. It currently offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR)

and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
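Each virtual service picks its scheduler and forwarding technique when it is defined with ipvsadm. As a quick illustration (a sketch only; the addresses are the ones used later in this article):

#/sbin/ipvsadm -A -t 202.168.128.202:80 -s wrr      (create the service; -s selects the scheduler: rr, wrr, wlc, ...)
#/sbin/ipvsadm -a -t 202.168.128.202:80 -r 172.24.100.4:80 -m -w 2      (add a real server; -m = VS/NAT, -g = VS/DR, -i = VS/TUN; -w sets the weight)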

 

2. Keepalived
Keepalived is used here mainly for health checking of the RealServers and for failover between the LoadBalancer master host and the BACKUP host.
 
II. Load-Balancing Topology
(The topology diagram is not reproduced here. In outline: two LVS directors, a master and a backup, each dual-homed, with eth0 on the internal 172.24.100.0/24 network facing the two web servers and eth1 on the external 202.168.128.0/24 network facing clients.)
 
 
III. IP Configuration
Add a second NIC to LVS (master):
eth0: 172.24.100.6
eth1: 202.168.128.101
Add a second NIC to LVS (backup):
eth0: 172.24.100.7
eth1: 202.168.128.111
External virtual IP: 202.168.128.202
Internal virtual IP: 172.24.100.70
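On RHEL/CentOS 5 these addresses go into the per-interface files; a sketch for the master's external NIC follows (the netmask is an assumption, adjust it to your network):

#vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=202.168.128.101
NETMASK=255.255.255.0
ONBOOT=yes
#service network restart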
 
IV. Installing the LVS and Keepalived Packages

#lsmod | grep ip_vs      (check whether the ip_vs module is present)
#uname -r
The output is 2.6.18-53.el5PAE, so link the matching kernel source tree, which ipvsadm builds against:
#ln -s /usr/src/kernels/2.6.18-53.el5PAE-i686/  /usr/src/linux
 
#tar zxvf ipvsadm-1.24.tar.gz
#cd ipvsadm-1.24
#make all && make install      (requires gcc and related build packages)
#find / -name ipvsadm      # locate the installed ipvsadm binary
 
#tar zxvf keepalived-1.1.15.tar.gz
#cd keepalived-1.1.15
#./configure      (if this step complains, install the openssl development packages)
#make && make install
#find / -name keepalived      # locate the installed keepalived binary
    
#cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
#cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
#mkdir /etc/keepalived
#cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
#cp /usr/local/sbin/keepalived /usr/sbin/
#chkconfig --add keepalived
#chkconfig keepalived on
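Before moving on, it is worth a quick sanity check that both tools were installed (output will vary with your build):

#/sbin/ipvsadm -L -n      (prints an empty rule table with the IPVS version banner)
#keepalived -v      (prints the keepalived version)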
 
V. Configuring LVS
On both LVS machines, create the following script:
vim /usr/local/sbin/lvsdr.sh
#!/bin/bash
VIP=202.168.128.202
RIP1=172.24.100.4
RIP2=172.24.100.5
. /etc/rc.d/init.d/functions
case "$1" in
start)
       echo "start LVS of DirectorServer"
       /sbin/ipvsadm -C                                    # clear any existing rules
       /sbin/ipvsadm -A -t $VIP:80 -s rr                   # virtual service, round-robin
       /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -m -w 1     # real server 1, NAT mode, weight 1
       /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -m -w 1     # real server 2, NAT mode, weight 1
       /sbin/ipvsadm
;;
stop)
       echo "Close LVS Directorserver"
       /sbin/ifconfig eth0:1 down      # drop the VIP alias if one was configured manually
       /sbin/ipvsadm -C
;;
*)
       echo "Usage: $0 {start|stop}"
       exit 1
esac
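Make the script executable before its first run:

#chmod +x /usr/local/sbin/lvsdr.sh
#/usr/local/sbin/lvsdr.sh start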
 
VI. Load Balancing and High Availability with Keepalived
1. On the master LVS (172.24.100.6), /etc/keepalived/keepalived.conf:
global_defs {
   router_id LVS_DEVEL
}
vrrp_sync_group lvs_1 {
    group {
        VI_1
        VI_GATEWAY
    }
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        202.168.128.202      # be sure to specify the subnet mask too, e.g. 202.168.128.202/24
    }
}
vrrp_instance VI_GATEWAY {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.24.100.70
    }
}
virtual_server 202.168.128.202 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.0.0
    persistence_timeout 50
    protocol TCP
    real_server 172.24.100.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.24.100.5 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
      
2. On the backup LVS (172.24.100.7):
global_defs {
   router_id LVS_DEVEL
}
vrrp_sync_group lvs_1 {
    group {
        VI_1
        VI_GATEWAY
    }
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        202.168.128.202
    }
}
vrrp_instance VI_GATEWAY {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.24.100.70
    }
}
virtual_server 202.168.128.202 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.0.0
    persistence_timeout 50
    protocol TCP
    real_server 172.24.100.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.24.100.5 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
 
3. Enable IP forwarding on both directors (NAT forwarding requires it):
#vim /etc/sysctl.conf
    net.ipv4.ip_forward = 1
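The change can be applied without a reboot (standard sysctl usage):

#sysctl -p      (reloads /etc/sysctl.conf; the output should include net.ipv4.ip_forward = 1)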
 
VII. Configuring the Web Servers (172.24.100.4 and 172.24.100.5)
1. Set the kernel network parameters:
#vim /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
2. Configure the default gateway to point at the directors' internal VIP:
#vim /etc/sysconfig/network-scripts/ifcfg-eth0      (append the line below)
GATEWAY=172.24.100.70
#service network restart
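To confirm the gateway took effect on each web server (route is part of net-tools on RHEL 5):

#route -n      (the 0.0.0.0 destination line should show 172.24.100.70 as the gateway)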
 
VIII. Starting the Services
# /usr/local/sbin/lvsdr.sh start      (add this line to /etc/rc.local so it also runs at boot)
#/etc/init.d/keepalived start
Once started, keepalived reads /etc/keepalived/keepalived.conf and provides both the load balancing and the failover.
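To confirm keepalived came up cleanly, check its processes (a parent plus the healthcheck and VRRP children) and watch the log for VRRP state transitions:

#ps aux | grep keepalived
#tail -f /var/log/messages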
 
IX. Testing
1. # ip addr show
On the master, both virtual IPs should be bound:
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:9d:db:59 brd ff:ff:ff:ff:ff:ff
    inet 172.24.100.6/24 scope global eth0
    inet 172.24.100.70/32 scope global eth0
    inet6 fe80::20c:29ff:fe9d:db59/64 scope link
       valid_lft forever preferred_lft forever
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:9d:db:63 brd ff:ff:ff:ff:ff:ff
    inet 202.168.128.101/24 scope global eth1
    inet 202.168.128.202/32 scope global eth1
    inet6 fe80::20c:29ff:fe9d:db63/64 scope link
       valid_lft forever preferred_lft forever
...
Finally, power the master off and run ip addr show on the backup; if the same virtual IPs appear there, failover works. Verify with a browser as well.
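A less drastic failover test than powering the master off is to stop keepalived there and watch the backup claim the VIPs (with advert_int 1, takeover happens within a few seconds):

On the master:   #service keepalived stop
On the backup:   #ip addr show      (202.168.128.202 and 172.24.100.70 should now be listed)
On the master:   #service keepalived start      (priority 100 beats 90, so the master reclaims the VIPs)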
 
2. Check that the LVS service itself is working:
  #watch ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  202.168.128.202:80 rr persistent 50
  -> 172.24.100.5:80             Masq    1      0          0
  -> 172.24.100.4:80             Masq    1      0          0

Also watch the logs to follow state changes.
 
3. Shut down one web server, then check with ipvsadm on the LVS box: the failed server should have been removed from the cluster. Start it again and check with ipvsadm once more: it should rejoin.
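A concrete run of this check (a sketch; it assumes the web servers run Apache httpd as a service, so substitute whatever actually listens on port 80):

On 172.24.100.4:   #service httpd stop
On the director:   #/sbin/ipvsadm -ln      (once TCP_CHECK fails, 172.24.100.4 disappears)
On 172.24.100.4:   #service httpd start
On the director:   #/sbin/ipvsadm -ln      (172.24.100.4 is listed again)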
 
X. Appendix: A Second Example, Keepalived with LVS/DR and nginx

The notes below come from a separate test setup: LVS_DR_MASTER (200.200.200.10) and LVS_DR_BACKUP (200.200.200.11) each run nginx and keepalived, with VIP 200.200.200.200 on VRRP instance VI_1 and VIP 200.200.200.199 on VI_2. Keepalived monitors nginx there through a vrrp_script; the configuration fragment that remains reads:

    interval 9      # how often to run the check, in seconds
    weight 1        # priority adjustment; the larger the value, the greater the weight
}

track_script {
    chk_http        # the check applied to this VRRP instance
}
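For orientation, a complete block of that shape would look roughly like the sketch below; the script path is a hypothetical placeholder, not the original author's value:

vrrp_script chk_http {
    script "/etc/keepalived/check_http.sh"      # hypothetical: any command that exits 0 while nginx is healthy
    interval 9
    weight 1
}
vrrp_instance VI_1 {
    ...
    track_script {
        chk_http
    }
}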

2. Start the keepalived service:
# service keepalived start
Starting keepalived:                                       [  OK  ]
It is better, however, to start it with an explicit configuration file:
# /usr/local/keepalived/sbin/keepalived -D -f /etc/keepalived/keepalived.conf
-D sends detailed messages to the system log; -f names the configuration file to use.

Confirm that keepalived is running:
# ps -aux | grep keepalived
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.3/FAQ
root      5227  0.0  0.2  4896  696 ?        Ss   18:15   0:00 keepalived -D
root      5228  0.0  0.4  4948 1276 ?        S    18:15   0:00 keepalived -D
root      5229  0.0  0.4  4948 1036 ?        S    18:15   0:00 keepalived -D
root      5654  0.0  0.2  3820  664 pts/1    S+   18:19   0:00 grep keepalived
Three keepalived processes are normal: the parent plus the healthcheck and VRRP children. (The warning only means that BSD-style ps aux takes no leading dash.)

Make keepalived start with the server:
# echo "/usr/local/keepalived/sbin/keepalived -D -f /etc/keepalived/keepalived.conf" >> /etc/rc.d/rc.local

Compare eth0 on LVS_DR_MASTER before and after keepalived starts.
Before keepalived starts, no virtual IP is bound to the master (note that ifconfig cannot show these addresses; use ip):
# ip a

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:22:3d:17 brd ff:ff:ff:ff:ff:ff
    inet 200.200.200.10/24 brd 200.200.200.255 scope global eth0
    inet6 fe80::20c:29ff:fe22:3d17/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

After keepalived has started:
# ip a

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:22:3d:17 brd ff:ff:ff:ff:ff:ff
    inet 200.200.200.10/24 brd 200.200.200.255 scope global eth0
    inet 200.200.200.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe22:3d17/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

The virtual IP 200.200.200.200 is now attached to eth0 on the master.

The keepalived startup messages can be checked in the system log:
# tail -100 /var/log/messages

Dec  8 20:54:42 localhost Keepalived: Starting Keepalived v1.1.20 (12/02,2010)
Dec  8 20:54:42 localhost Keepalived: Starting Healthcheck child process, pid=3894
Dec  8 20:54:42 localhost Keepalived: Starting VRRP child process, pid=3896
Dec  8 20:54:42 localhost Keepalived_vrrp: Netlink reflector reports IP 200.200.200.10 added
Dec  8 20:54:42 localhost Keepalived_vrrp: Registering Kernel netlink reflector
Dec  8 20:54:42 localhost Keepalived_vrrp: Registering Kernel netlink command channel
Dec  8 20:54:42 localhost Keepalived_vrrp: Registering gratutious ARP shared channel
Dec  8 20:54:43 localhost kernel: IPVS: Registered protocols (TCP, UDP, AH, ESP)
Dec  8 20:54:43 localhost kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Dec  8 20:54:44 localhost kernel: IPVS: ipvs loaded.
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Netlink reflector reports IP 200.200.200.10 added
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Registering Kernel netlink reflector
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Registering Kernel netlink command channel
Dec  8 20:54:44 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Dec  8 20:54:44 localhost Keepalived_vrrp: Configuration is using : 38035 Bytes
Dec  8 20:54:44 localhost Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
Dec  8 20:54:44 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Entering BACKUP STATE
Dec  8 20:54:44 localhost Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Configuration is using : 4811 Bytes
Dec  8 20:54:44 localhost Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
Dec  8 20:54:45 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Dec  8 20:54:45 localhost udevd[1485]: udev done!
Dec  8 20:54:46 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Dec  8 20:54:46 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Dec  8 20:54:46 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 200.200.200.200
Dec  8 20:54:46 localhost Keepalived_vrrp: Netlink reflector reports IP 200.200.200.200 added
Dec  8 20:54:46 localhost Keepalived_healthcheckers: Netlink reflector reports IP 200.200.200.200 added
Dec  8 20:54:48 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Transition to MASTER STATE
Dec  8 20:54:49 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Entering MASTER STATE
Dec  8 20:54:49 localhost Keepalived_vrrp: VRRP_Instance(VI_2) setting protocol VIPs.
Dec  8 20:54:49 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 200.200.200.199
Dec  8 20:54:49 localhost Keepalived_vrrp: Netlink reflector reports IP 200.200.200.199 added
Dec  8 20:54:49 localhost Keepalived_healthcheckers: Netlink reflector reports IP 200.200.200.199 added
Dec  8 20:54:51 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 200.200.200.200
Dec  8 20:54:53 localhost udevd[1485]: udev done!
Dec  8 20:54:54 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 200.200.200.199

The log shows that VI_1, which owns 200.200.200.200, is running normally on this server.

Check the virtual IP bindings on LVS_DR_BACKUP while the master is up:
# ip a

1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:f0:95:68 brd ff:ff:ff:ff:ff:ff
    inet 200.200.200.11/24 brd 200.200.200.255 scope global eth0
    inet 200.200.200.199/32 scope global eth0
    inet6 fe80::20c:29ff:fef0:9568/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

While LVS_DR_MASTER runs normally, the 200.200.200.200 virtual IP is never attached to eth0 on the backup; the backup holds only its own VIP, 200.200.200.199.

Verification tests
1. With nginx working on both LVS_DR_MASTER and LVS_DR_BACKUP, CLIENT browses to:
   http://200.200.200.10    served by LVS_DR_MASTER (200.200.200.10)
   http://200.200.200.11    served by LVS_DR_BACKUP (200.200.200.11)
   http://200.200.200.200   served by LVS_DR_MASTER (200.200.200.10)
   http://200.200.200.199   served by LVS_DR_BACKUP (200.200.200.11)
2. With nginx failed on LVS_DR_MASTER and LVS_DR_BACKUP working, CLIENT browses to:
   http://200.200.200.10    unreachable
   http://200.200.200.11    served by LVS_DR_BACKUP (200.200.200.11)
   http://200.200.200.200   served by LVS_DR_BACKUP (200.200.200.11)
   http://200.200.200.199   served by LVS_DR_BACKUP (200.200.200.11)
3. With LVS_DR_MASTER working and nginx failed on LVS_DR_BACKUP, CLIENT browses to:
   http://200.200.200.10    served by LVS_DR_MASTER (200.200.200.10)
   http://200.200.200.11    unreachable
   http://200.200.200.200   served by LVS_DR_MASTER (200.200.200.10)
   http://200.200.200.199   served by LVS_DR_MASTER (200.200.200.10)
4. With nginx failed on both LVS_DR_MASTER and LVS_DR_BACKUP, none of the four addresses responds to CLIENT.
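These four checks are easy to repeat from CLIENT with a small loop (a sketch; it assumes curl is installed and that each nginx instance serves a page identifying its host):

for url in http://200.200.200.10 http://200.200.200.11 \
           http://200.200.200.200 http://200.200.200.199; do
    echo "== $url"
    curl -s --connect-timeout 3 "$url" || echo "$url unreachable"
done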