Host environment: Red Hat Enterprise Linux 6.5, 64-bit

Lab environment:

        Real server 1: IP 172.25.25.113   hostname: server3.example.com

        Real server 2: IP 172.25.25.114   hostname: server4.example.com

        Director 1:    IP 172.25.25.111   hostname: server1.example.com

        Director 2:    IP 172.25.25.112   hostname: server2.example.com

Firewall: disabled

Virtual IP (VIP): 172.25.25.200/24

 

1. Configuring and testing LVS in DR mode

1. Add the VIP and write the scheduling policy into the kernel (on the director)

  1. Add the VIP

[root@server2 ~]# ip addr add 172.25.25.200/24 dev eth0    # add the VIP

[root@server2 ~]# ip addr show          # verify

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:85:1a:3b brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.112/24 brd 172.25.25.255 scope global eth0
    inet 172.25.25.200/24 scope global secondary eth0       # the VIP has been added
    inet6 fe80::5054:ff:fe85:1a3b/64 scope link
       valid_lft forever preferred_lft forever

 2. Write the policy into the kernel with ipvsadm

[root@server2 ~]# yum install ipvsadm -y            # install ipvsadm
[root@server2 ~]# ipvsadm -A -t 172.25.25.200:80 -s rr      # add the virtual HTTP service with round-robin scheduling
[root@server2 ~]# ipvsadm -a -t 172.25.25.200:80 -r 172.25.25.113:80 -g # add real server 1 (direct routing)
[root@server2 ~]# ipvsadm -a -t 172.25.25.200:80 -r 172.25.25.114:80 -g # add real server 2
[root@server2 ~]# ipvsadm -ln       # verify that the policy is in place

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.25.200:80 rr
  -> 172.25.25.113:80             Route   1      0          0
  -> 172.25.25.114:80             Route   1      0          0
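The director-side commands above can be collected into one small script. This is a sketch using the addresses from this lab; the DRY_RUN guard is my addition (not part of LVS) so the script can be previewed on any machine before applying it for real.

```shell
#!/bin/sh
# Director-side LVS-DR setup, collected from the steps above.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on the real director to apply them.
VIP=172.25.25.200
DEV=eth0
RS1=172.25.25.113
RS2=172.25.25.114
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run ip addr add "$VIP/24" dev "$DEV"          # bring up the VIP
run ipvsadm -A -t "$VIP:80" -s rr             # virtual service, round-robin
run ipvsadm -a -t "$VIP:80" -r "$RS1:80" -g   # real server 1, direct routing (-g)
run ipvsadm -a -t "$VIP:80" -r "$RS2:80" -g   # real server 2, direct routing (-g)
```

Running it with the default DRY_RUN=1 prints the four commands exactly as typed above.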

2. Install httpd, add the VIP, and add arptables rules (on the real servers)

  1. Write a test page and start httpd

[root@server3 ~]# yum install -y httpd      # install httpd
[root@server3 ~]# vim /var/www/html/index.html     # write the test page
server3.example.com
[root@server3 ~]# /etc/init.d/httpd start       # start httpd
Starting httpd:                                            [  OK  ]

 2. Add firewall rules with arptables

[root@server3 ~]# yum install arptables_jf -y   # install the arptables_jf tool
[root@server3 ~]# ip addr add 172.25.25.200/24 dev eth0 # add the VIP
# Add the arptables rules: drop incoming ARP requests for 172.25.25.200, and
# rewrite the source of outgoing ARP traffic from 172.25.25.200 to this
# server's own address, 172.25.25.113
[root@server3 ~]# arptables -A IN -d 172.25.25.200 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.25.200 -j mangle --mangle-ip-s 172.25.25.113
[root@server3 ~]# /etc/init.d/arptables_jf save     # save the rules
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]

[root@server3 ~]# arptables -L      # verify
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen  op         hrd        pro
DROP       anywhere             172.25.25.200        anywhere           anywhere           any   any        any        any

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen  op         hrd        pro
mangle     172.25.25.200        anywhere             anywhere           anywhere           any   any        any        any        --mangle-ip-s server3.example.com

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen  op         hrd        pro
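The real-server steps can likewise be scripted; the same commands must be repeated on server4 with MYIP set to 172.25.25.114. A sketch with the same DRY_RUN preview guard (my addition, not part of the original transcript):

```shell
#!/bin/sh
# Real-server-side DR setup: hold the VIP locally, but never answer ARP for it.
# Run once on each real server, with MYIP set to that server's own address.
# DRY_RUN=1 (default) prints the commands; set DRY_RUN=0 to apply them.
VIP=172.25.25.200
DEV=eth0
MYIP=${MYIP:-172.25.25.113}   # use 172.25.25.114 on server4
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run ip addr add "$VIP/24" dev "$DEV"                            # hold the VIP locally
run arptables -A IN -d "$VIP" -j DROP                           # drop ARP requests for the VIP
run arptables -A OUT -s "$VIP" -j mangle --mangle-ip-s "$MYIP"  # rewrite outgoing ARP source
run /etc/init.d/arptables_jf save                               # persist the rules
```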

3. Testing

# At first the request is served by real server 1
[screenshot: test page from server3.example.com]
# After a refresh, by real server 2
[screenshot: test page from server4.example.com]
[root@server3 ~]# /etc/init.d/httpd stop    # stop httpd on real server 1
Stopping httpd:                                           [  OK  ]
# All requests now go to real server 2; refreshing no longer alternates
[screenshot: test page from server4.example.com]
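The alternation seen above is exactly what `-s rr` means: the director hands each new connection to the next real server in the list and wraps around. A toy sketch of that rotation (illustrative only, not LVS internals):

```shell
# Toy round-robin picker over the two real servers from this lab.
SERVERS="172.25.25.113 172.25.25.114"
i=0
pick() {
    set -- $SERVERS        # word-split the list into positional args
    n=$#
    shift $((i % n))       # rotate to the current position
    echo "$1"
    i=$((i + 1))
}
pick   # prints 172.25.25.113
pick   # prints 172.25.25.114
pick   # prints 172.25.25.113 again
```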

 

To avoid a single point of failure, the director should itself be part of a high-availability (HA) cluster; here we add it to heartbeat. Since heartbeat does not health-check the back-end real servers, ldirectord is used for that: the DR policy is managed by ldirectord, and the ldirectord service is in turn managed by heartbeat. The installation and testing of heartbeat were covered in an earlier post, which you can refer back to.

2. Put the DR policy under ldirectord, add the ldirectord service to the heartbeat HA cluster, and test

1. Configure ldirectord and test (on the directors)

# This assumes heartbeat is already installed and configured on both directors

 1. Configure ldirectord and check that the policy is written into the kernel

[root@server2 yum.repos.d]# cd /usr/share/doc/ldirectord-3.9.5/
[root@server2 ldirectord-3.9.5]# ls
COPYING  ldirectord.cf
[root@server2 ldirectord-3.9.5]# cp ldirectord.cf /etc/ha.d/  # copy the sample config
[root@server2 ldirectord-3.9.5]# cd /etc/ha.d/
[root@server2 ha.d]# vim ldirectord.cf  # edit the configuration file

 25 virtual=172.25.25.200:80    # the virtual IP (VIP)
 26         real=172.25.25.113:80 gate  # real server 1
 27         real=172.25.25.114:80 gate  # real server 2
 28         fallback=127.0.0.1:80 gate  # fall back to the local loopback when all real servers are down
 29         service=http       # service to check
 30         scheduler=rr       # round-robin scheduling
 31         #persistent=600
 32         #netmask=255.255.255.255
 33         protocol=tcp       # protocol
 34         checktype=negotiate
 35         checkport=80       # port to check
 36         request="index.html"    # test page requested by the health check
 37 #       receive="Test Page"
 38 #       virtualhost=www.x.y.z
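With checktype=negotiate, ldirectord periodically fetches the request page (here index.html) from each real server over HTTP and, if receive were set, would also compare the body; a server whose check fails is removed from the IPVS table, and the fallback takes over. The check amounts to roughly the following sketch (my curl-based approximation, not ldirectord's actual code):

```shell
# Approximation of ldirectord's HTTP negotiate check for one real server.
# Returns 0 (alive) if the request page can be fetched, 1 (dead) otherwise.
check_rs() {
    rs=$1                                  # host:port of the real server
    body=$(curl -sf --max-time 2 "http://$rs/index.html") || return 1
    # with receive="Test Page" set, ldirectord would additionally require:
    # [ "$body" = "Test Page" ] || return 1
    return 0
}
# usage (on the director): check_rs 172.25.25.113:80 && echo alive || echo dead
```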

 

[root@server2 ha.d]# /etc/init.d/ipvsadm stop       # stop ipvsadm, clearing the manually added policy
ipvsadm: Clearing the current IPVS table:                  [  OK  ]
ipvsadm: Unloading modules:                                [  OK  ]

[root@server2 ha.d]# ipvsadm -L   # verify: no policy left
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server2 ha.d]# /etc/init.d/ldirectord restart  # start ldirectord
Restarting ldirectord... success

[root@server2 ha.d]# ipvsadm -L # verify: ldirectord has written the policy back
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.25.200:http rr
  -> server3.example.com:http     Route   1      0          0
  -> 172.25.25.114:http           Route   1      0          0

 2. Testing
# At first the request is served by real server 1
[screenshot: test page from server3.example.com]
# After a refresh, by real server 2
[screenshot: test page from server4.example.com]
# Now stop httpd on both real servers and test again
[root@server3 ~]# /etc/init.d/httpd stop
Stopping httpd:                                           [  OK  ]
[root@server4 yum.repos.d]# /etc/init.d/httpd stop
Stopping httpd:                                           [  OK  ]
# The fallback page from the local loopback is served
[screenshot: fallback page from the director's loopback]

2. Add the ldirectord service to heartbeat

[root@server2 ha.d]# vim haresources
150 server1.example.com IPaddr::172.25.25.200/24/eth0 ldirectord httpd  # the VIP, the ldirectord resource, and httpd

[root@server2 ha.d]# /etc/init.d/ldirectord stop        # stop ldirectord (heartbeat will manage it; do not start it by hand)
Stopping ldirectord... Success
[root@server2 ha.d]# ip addr del 172.25.25.200/24 dev eth0   # delete the manually added VIP
[root@server2 ha.d]# /etc/init.d/heartbeat start    # start heartbeat
Starting High-Availability services: INFO:  Resource is stopped
Done.

 

[root@server2 ha.d]# scp haresources ldirectord.cf 172.25.25.111:/etc/ha.d/     # copy the modified files to director 1
root@172.25.25.111's password:
haresources                                   100% 5972     5.8KB/s   00:00
ldirectord.cf                                 100% 8281     8.1KB/s   00:00
[root@server1 ha.d]# /etc/init.d/heartbeat start    # start heartbeat on director 1
Starting High-Availability services: INFO:  Resource is stopped
Done.
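For reference, the haresources line added above reads left to right as an ordered resource group (annotations mine):

```
server1.example.com              # preferred (primary) node for this group
IPaddr::172.25.25.200/24/eth0    # first bring up the VIP on eth0
ldirectord                       # then start ldirectord, which installs the LVS policy
httpd                            # then start httpd, which serves the fallback page
```

Heartbeat starts these resources left to right on the active node and stops them right to left on failover.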

3. Overall test

# With the services started, the VIP and the service run on director 1 (primary)
[screenshot: test page served via the VIP]

[root@server1 ha.d]# ip addr show       # check the addresses
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ec:8b:36 brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.111/24 brd 172.25.25.255 scope global eth0
    inet 172.25.25.200/24 brd 172.25.25.255 scope global secondary eth0 # the VIP is up
    inet6 fe80::5054:ff:feec:8b36/64 scope link
       valid_lft forever preferred_lft forever

 

[root@server1 ha.d]# /etc/init.d/heartbeat stop     # now stop heartbeat on director 1
Stopping High-Availability services: Done.
# Test again: the service has failed over to director 2 (standby)
[screenshot: test page still served via the VIP]

[root@server2 ha.d]# ip addr show   # check the addresses: the VIP has moved to director 2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:85:1a:3b brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.112/24 brd 172.25.25.255 scope global eth0
    inet 172.25.25.200/24 brd 172.25.25.255 scope global secondary eth0 # the VIP
    inet6 fe80::5054:ff:fe85:1a3b/64 scope link
       valid_lft forever preferred_lft forever

 

[root@server1 ha.d]# /etc/init.d/heartbeat start    # start heartbeat on director 1 again
Starting High-Availability services: INFO:  Resource is stopped
Done.
# Test: the service automatically fails back to director 1 (primary)
[screenshot: test page served via the VIP]
[root@server1 ha.d]# ip addr show    # the VIP has automatically moved back to director 1

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ec:8b:36 brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.111/24 brd 172.25.25.255 scope global eth0
    inet 172.25.25.200/24 brd 172.25.25.255 scope global secondary eth0
    inet6 fe80::5054:ff:feec:8b36/64 scope link
       valid_lft forever preferred_lft forever
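The repeated `ip addr show` checks above can be reduced to a one-liner that reports whether the current node holds the VIP. A sketch (assumes the iproute2 `ip` tool; on any machine that does not hold 172.25.25.200 it reports that the VIP is elsewhere):

```shell
# Report whether this node currently holds the VIP.
VIP=172.25.25.200
if ip -4 addr show 2>/dev/null | grep -q "inet $VIP/"; then
    echo "this node holds the VIP"
else
    echo "VIP is elsewhere"
fi
```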