I. Prerequisites for configuring a high-availability cluster (using a two-node heartbeat setup as the example)

   (1) Time must be kept synchronized across the nodes

   (2) Nodes must communicate with each other by hostname

      Use /etc/hosts rather than DNS, so that name resolution does not depend on an external service

      The hostname used within the cluster is the one reported by `uname -n`;

   (3) A ping node (only needed when the cluster has an even number of nodes)

   (4) SSH key authentication, for password-less communication between the nodes
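A quick way to verify all four prerequisites from one node (a minimal sketch, assuming the node4/node5 hostnames used in the cases below):

      date; ssh node5 date        # clocks should agree to within a second or two
      uname -n                    # must match the name listed in /etc/hosts and in ha.cf
      grep node5 /etc/hosts       # name resolution should come from the hosts file, not DNS
      ssh node5 'uname -n'        # should print node5 without prompting for a password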


II. Configuring heartbeat

    Main program configuration file: ha.cf

    Authentication key file: authkeys; its permissions must deny all access to group and others (i.e. mode 600);

    Resource configuration file for heartbeat v1: haresources

      A resource definition looks like:

       node4 192.168.30.100/24/eth0/192.168.30.255 Filesystem::192.168.30.13:/mydata::/mydata::nfs mysqld

        Note that resources must be listed in the order in which they are to be started, as broken down below
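Reading that example field by field (an annotated sketch; '::' separates the arguments passed to a resource agent):

       node4                                   # preferred node; must match `uname -n`
       192.168.30.100/24/eth0/192.168.30.255   # VIP/prefix/interface/broadcast, applied by the IPaddr agent
       Filesystem::192.168.30.13:/mydata::/mydata::nfs
                                               # agent::device::mountpoint::fstype
       mysqld                                  # an ordinary LSB init script from /etc/init.d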

    Templates for these three files live in the /usr/share/doc/heartbeat-VERSION directory and can be copied into /etc/ha.d/


Note that the resource manager in heartbeat v1 and v2 has a shortcoming: it cannot monitor the running state of the resources themselves. For example, on two nodes providing httpd high availability, if the httpd service stops but the heartbeat program keeps running normally (that is, the peer node still receives heartbeat messages), no resource failover will occur.


III. Selected ha.cf parameters explained

     logfile /var/log/ha-log  # where heartbeat writes its log

     keepalive 2  # interval between heartbeat messages: 2 seconds

     deadtime 30  # if the standby node receives no heartbeat from the primary for 30 seconds, it immediately takes over the primary's service resources

     warntime 10  # heartbeat-delay warning time: if the standby receives no heartbeat from the primary within 10 seconds, a warning is written to the log, but no failover happens yet

     initdead 120  # on some systems the network needs some time after boot or reboot before it works properly; this option covers that gap. It must be at least twice deadtime.

     udpport 694   # 694 is the default port number.

     baud 19200  # baud rate for serial links

     #bcast eth0 # Linux  # send heartbeats as broadcast over eth0

     #mcast eth0 225.0.0.1 694 1 0   # send heartbeats as multicast over eth0, generally used when there is more than one standby node. bcast, ucast and mcast (broadcast, unicast and multicast) are the three ways of carrying heartbeats; pick any one of them.

     #ucast eth0 192.168.1.2  # send heartbeats as unicast over eth0; the IP address that follows is that of the peer node

     auto_failback on  # defines whether services automatically fail back once the primary node recovers. Of the two heartbeat hosts, one is the primary and one the standby: normally the primary holds the resources and runs all the services, and on failure hands them to the standby, which then runs the services. With this option on, a recovered primary automatically reclaims the resources from the standby; with it off, the recovered primary becomes the standby, and the former standby becomes the new primary

     #stonith baytech /etc/ha.d/conf/stonith.baytech

     #watchdog /dev/watchdog  # optional: lets heartbeat monitor the system's health through a watchdog device. Using this feature requires the "softdog" kernel module, which backs the actual device file; if your kernel lacks the module, specify it and recompile the kernel. After compiling, run "insmod softdog" to load the module, then check "grep misc /proc/devices" (should show 10) and "cat /proc/misc | grep watchdog" (should show 130). Finally create the device file with "mknod /dev/watchdog c 10 130", and the feature is ready to use, as sketched below
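     Collected into one runnable sketch (assuming the softdog module ships with your kernel; modprobe is used here in place of the insmod call quoted above):

     modprobe softdog               # load the software watchdog module
     grep misc /proc/devices        # expect: 10 misc
     grep watchdog /proc/misc       # expect: 130 watchdog
     mknod /dev/watchdog c 10 130   # create the device node, if udev has not already done so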

     node node1.hello.com node2.hello.com  # names of the nodes to make highly available, as shown by the command `uname -n`.

     ping 192.168.12.237  # ping-node address. The better the ping node, the more robust the HA cluster: a fixed router is a good choice, but cluster members themselves are best avoided. The ping node is used only to test network connectivity

     ping_group group1 192.168.12.120 192.168.12.237  # a ping group

     apiauth pingd  gid=haclient uid=hacluster

     respawn hacluster /usr/local/ha/lib/heartbeat/pingd -m 100 -d 5s

       # optional: lists processes that start and stop together with heartbeat, generally plugins integrated with it; if such a process fails it is restarted automatically. The most common is pingd, which monitors NIC status and, together with the ping node(s) declared above, tests network connectivity. Here hacluster is the user identity under which pingd runs.

     # the setting below is the key one: it activates crm management and switches to the v2-style configuration format

       # crm respawn  # "crm on" or "crm yes" also works, but written that way a broken cib.xml later on will make heartbeat reboot the server outright, so respawn is recommended while testing

     # the following optional settings compress the data transmitted between nodes

       compression  bz2

       compression_threshold  2
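Putting the essentials together, a minimal two-node ha.cf might look like this (a sketch only; the node names and addresses are the lab values used in the cases below):

     logfile /var/log/ha-log
     keepalive 2
     deadtime 30
     warntime 10
     initdead 120
     udpport 694
     mcast eth0 225.1.1.1 694 1 0
     auto_failback on
     node node4 node5
     ping 192.168.30.2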

      

Case 1: a dual-master (active/active) HA setup for mysql and httpd based on heartbeat v1, with both services sharing data over NFS

  1. Lab environment:

      node4: 192.168.30.14, mysql primary, httpd standby

      node5: 192.168.30.15, mysql standby, httpd primary

      node3: 192.168.30.13, NFS server

      node1: 192.168.30.10, client used for testing

      Resources needed for mysql high availability:

        ip: 192.168.30.100

        mysqld

        nfs:/mydata

      Resources needed for httpd high availability:

        ip: 192.168.30.101

        httpd

        nfs:/web

    Note: for convenience, the iptables rules have been flushed on all hosts in this example, as shown below
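    On RHEL/CentOS 6, one way to do that on every host (lab use only, since it leaves the machines unfiltered):

      service iptables stop    # flushes the rules and unloads the modules
      chkconfig iptables off   # keeps them from returning at boot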

  2. Preparation

    Synchronize the nodes' clocks and set up password-less, name-based communication between them

        ntpdate 0.centos.pool.ntp.org

        vim /etc/hosts

        ssh-keygen -t rsa

        ssh-copy-id -i .ssh/id_rsa.pub root@node5

[root@node4 ~]# ntpdate 0.centos.pool.ntp.org   # time synchronization
13 Apr 23:08:47 ntpdate[2613]: the NTP socket is in use, exiting
[root@node4 ~]# date
Wed Apr 13 23:09:25 CST 2016
[root@node4 ~]# crontab -e 
*/10 * * * * /usr/sbin/ntpdate 0.centos.pool.ntp.org &> /dev/null
[root@node4 ~]# vim /etc/hosts   # edit the local hosts file

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.10 node1 
192.168.30.20 node2
192.168.30.13 node3
192.168.30.14 node4
192.168.30.15 node5

[root@node4 ~]# scp /etc/hosts root@node5:/etc/
The authenticity of host 'node5 (192.168.30.15)' can't be established.
RSA key fingerprint is a3:d3:a0:9d:f0:3b:3e:53:4e:ee:61:87:b9:3a:1c:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node5,192.168.30.15' (RSA) to the list of known hosts.
root@node5's password: 
hosts                                                                                                                                    100%  262     0.3KB/s   00:00    

[root@node4 ~]# ping node5
PING node5 (192.168.30.15) 56(84) bytes of data.
64 bytes from node5 (192.168.30.15): icmp_seq=1 ttl=64 time=0.419 ms
64 bytes from node5 (192.168.30.15): icmp_seq=2 ttl=64 time=0.706 ms
^C
--- node5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1888ms
rtt min/avg/max/mdev = 0.419/0.562/0.706/0.145 ms

[root@node4 ~]# ssh-keygen -t rsa    # generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
bc:b8:e6:78:6d:51:91:30:4d:d4:dd:50:c0:18:f1:28 root@node4
The key's randomart image is:
+--[ RSA 2048]----+
|        o=ooo*o=.|
|         .+ ooo .|
|          E.. .  |
|       .  ..     |
|        S.       |
|       ...       |
|      ....       |
|     .o.o        |
|    .+o.         |
+-----------------+
[root@node4 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node5   # append the public key to the peer node's authorized_keys file
The authenticity of host 'node5 (192.168.30.15)' can't be established.
RSA key fingerprint is a3:d3:a0:9d:f0:3b:3e:53:4e:ee:61:87:b9:3a:1c:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node5,192.168.30.15' (RSA) to the list of known hosts.
root@node5's password: 
Now try logging into the machine, with "ssh 'root@node5'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@node4 ~]# ssh root@node5 hostname    # connecting to the peer no longer prompts for a password
node5
# perform the same steps on the other node
[root@node5 ~]# ntpdate 0.centos.pool.ntp.org
...
[root@node5 ~]# crontab -e 
*/10 * * * * /usr/sbin/ntpdate 202.120.2.101
[root@node5 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.10 node1 
192.168.30.20 node2 
192.168.30.13 node3 
192.168.30.14 node4 
192.168.30.15 node5
[root@node5 ~]# ssh-keygen -t rsa
...
[root@node5 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node4
...
[root@node5 ~]# ssh root@node4 hostname
node4

    Set up an NFS server exporting two directories, one for mysql and one for httpd

       vim /etc/exports
         /mydata 192.168.30.0/24(rw,no_root_squash)   # mysql has to run its initialization as root, hence the no_root_squash option; remove it once initialization is done
         /web 192.168.30.0/24(rw)

[root@node3 ~]# mkdir -p /mydata/{data,binlogs} /web
[root@node3 ~]# vim /web/index.html

hello
[root@node3 ~]# ls /mydata
binlogs  data
[root@node3 ~]# useradd -r mysql
[root@node3 ~]# id mysql
uid=27(mysql) gid=27(mysql) groups=27(mysql)
[root@node3 ~]# useradd -r apache
[root@node3 ~]# id apache
uid=48(apache) gid=48(apache) groups=48(apache)
[root@node3 ~]# chown -R mysql.mysql /mydata
[root@node3 ~]# setfacl -R -m u:apache:rwx /web   # remember to grant the apache user access
[root@node3 ~]# vim /etc/exports

/mydata 192.168.30.0/24(rw,no_root_squash)
/web 192.168.30.0/24(rw)

[root@node3 ~]# service rpcbind status
rpcbind (pid  1337) is running...
[root@node3 ~]# service nfs start   # start the nfs service
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

    Install the services to be made highly available on both nodes

       chkconfig mysqld off   # important: a service managed by the cluster must not start automatically at boot

[root@node4 ~]# useradd -u 27 -r mysql
[root@node4 ~]# useradd -u 48 -r apache

[root@node4 ~]# yum -y install mysql-server httpd
...
[root@node4 ~]# chkconfig mysqld off   # important: a service managed by the cluster must not start automatically at boot
[root@node4 ~]# chkconfig httpd off
[root@node4 ~]# vim /etc/my.cnf 

[mysqld]
datadir=/mydata/data
socket=/var/lib/mysql/mysql.sock
user=mysql
log-bin=/mydata/binlogs/mysql-bin
innodb_file_per_table=ON
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
skip-name-resolve   # keep mysql from doing reverse name resolution

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[root@node4 ~]# scp /etc/my.cnf root@node5:/etc/   # the service's configuration files must be kept identical across nodes
my.cnf                                    100%  308     0.3KB/s
[root@node4 ~]# mkdir /mydata
[root@node4 ~]# showmount -e 192.168.30.13
Export list for 192.168.30.13:
/web    192.168.30.0/24
/mydata 192.168.30.0/24
[root@node4 ~]# mount -t nfs 192.168.30.13:/mydata /mydata

[root@node4 ~]# service mysqld start   # first start; runs the initialization
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

[root@node4 ha.d]# mysql
...
mysql> grant all on *.* to root@'192.168.30.%' identified by 'hello';
Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

mysql> \q
Bye
[root@node4 ~]# cd /mydata
[root@node4 mydata]# ls
binlogs  data
[root@node4 mydata]# ls data
ibdata1  ib_logfile0  ib_logfile1  mysql  test
[root@node4 mydata]# ls binlogs
mysql-bin.000001  mysql-bin.000002  mysql-bin.000003  mysql-bin.index
[root@node4 mydata]# cd
[root@node4 ~]# service mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@node4 ~]# umount /mydata

# the other node runs through similar steps, except that the mysql initialization need not be run again

     Once mysql initialization is complete, the no_root_squash option can be removed on the NFS server (re-export afterwards, as sketched after the export list)

[root@node3 ~]# vim /etc/exports

/mydata 192.168.30.0/24(rw)
/web 192.168.30.0/24(rw)
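After editing /etc/exports, the export list can be refreshed without restarting the NFS service (run on node3; -v merely makes exportfs verbose):

    exportfs -rv   # re-export everything defined in /etc/exports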

    Install heartbeat on each node and configure the resources

        This example installs heartbeat v2; since v2 is backward compatible with v1, haresources can still be used as the configuration interface.

      Notes:

        ① Do not install heartbeat-pils with yum, or it will automatically be upgraded to cluster-glue, and cluster-glue is incompatible with heartbeat v2

        ② Templates for ha.cf, haresources and authkeys live in the /usr/share/doc/heartbeat-VERSION directory and can be copied into /etc/ha.d/:

          cp /usr/share/doc/heartbeat-2.1.4/{authkeys,ha.cf,haresources} /etc/ha.d/

          ...

          scp -p authkeys ha.cf haresources root@node5:/etc/ha.d/
authkeys  # the main configuration file, message authentication file and resource file must be kept identical across nodes

        ③ /etc/ha.d/resource.d holds the resource agents

            IPaddr: configures the IP with the ifconfig command

            IPaddr2: configures the IP with the ip addr command (view the result with ip addr show)

        ④ /usr/lib64/heartbeat holds various utility scripts

            hb_standby: turn the current node into a standby

            hb_takeover: take over the resources

            ha_propagate: copy ha.cf and authkeys to the other nodes, preserving their permissions automatically

            send_arp: whenever an address is taken over, the upstream router must be notified to refresh its ARP cache

            haresources2cib.py: convert haresources into CIB format, writing the output to /var/lib/heartbeat/crm/
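For example, converting an existing v1 resource file (a sketch; the exact invocation may vary between builds, so check the script's own usage text):

            /usr/lib64/heartbeat/haresources2cib.py /etc/ha.d/haresources
            ls /var/lib/heartbeat/crm/   # the generated cib.xml should appear here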

[root@node4 ~]# rpm -ivh heartbeat-pils-2.1.4-12.el6.x86_64.rpm
...
[root@node4 ~]# yum -y install PyXML libnet perl-TimeDate
...
[root@node4 ~]# rpm -ivh heartbeat-stonith-2.1.4-12.el6.x86_64.rpm heartbeat-2.1.4-12.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:heartbeat-stonith      ########################################### [ 50%]
   2:heartbeat              ########################################### [100%]

[root@node4 ~]# ls /usr/share/doc/heartbeat-2.1.4/
apphbd.cf  ChangeLog     DirectoryMap.txt  GettingStarted.html  HardwareGuide.html  hb_report.html      heartbeat_api.txt  Requirements.html  rsync.txt
authkeys   COPYING       faqntips.html     GettingStarted.txt   HardwareGuide.txt   hb_report.txt       logd.cf            Requirements.txt   startstop
AUTHORS    COPYING.LGPL  faqntips.txt      ha.cf                haresources         heartbeat_api.html  README             rsync.html
[root@node4 ~]# cp /usr/share/doc/heartbeat-2.1.4/{authkeys,ha.cf,haresources} /etc/ha.d/

[root@node4 ~]# cd /etc/ha.d
[root@node4 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
[root@node4 ha.d]# ls resource.d/
apache        db2    Filesystem    ICP  IPaddr   IPsrcaddr  LinuxSCSI  LVSSyncDaemonSwap  OCF        Raid1    ServeRAID  WinPopup
AudibleAlarm  Delay  hto-mapfuncs  ids  IPaddr2  IPv6addr   LVM        MailTo             portblock  SendArp  WAS        Xinetd

[root@node4 ha.d]# vim ha.cf   # edit the configuration file and set the following items; keep the defaults for everything else
...
logfile /var/log/ha-log   # also disable the "logfacility local0" line
...
auto_failback on
mcast eth0 225.1.1.1 694 1 0
node node4 node5
ping 192.168.30.2
...
[root@node4 ha.d]# openssl rand -hex 10
392fa6f47a05ed67a0f7
[root@node4 ha.d]# vim authkeys
...
auth 1
1 sha1 392fa6f47a05ed67a0f7   # pick the hash algorithm and append the generated random string after it

[root@node4 ha.d]# chmod 600 authkeys 
[root@node4 ha.d]# vim haresources   # configure the resources
...
node4 192.168.30.100/24/eth0/192.168.30.255 Filesystem::192.168.30.13:/mydata::/mydata::nfs mysqld
node5 192.168.30.101/24/eth0/192.168.30.255 Filesystem::192.168.30.13:/web::/var/www/html::nfs httpd

[root@node4 ha.d]# scp -p authkeys ha.cf haresources root@node5:/etc/ha.d/   # the main configuration, message authentication and resource files must be kept identical across nodes
authkeys                                 100%  680     0.7KB/s   00:00    
ha.cf                                    100%   10KB  10.3KB/s   00:00    
haresources                                  100% 6105     6.0KB/s   00:00

[root@node4 ha.d]# service heartbeat start;ssh root@node5 'service heartbeat start'   # start heartbeat on both nodes
Starting High-Availability services: 
2016/04/14_02:38:06 INFO:  Resource is stopped
2016/04/14_02:38:06 INFO:  Resource is stopped
Done.

Starting High-Availability services: 
2016/04/14_02:38:05 INFO:  Resource is stopped
2016/04/14_02:38:06 INFO:  Resource is stopped
Done.

[root@node4 ha.d]# ifconfig
...
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:32:52:1C  
          inet addr:192.168.30.100  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
[root@node4 ha.d]# service mysqld status   # mysqld has started normally on node4
mysqld (pid  4094) is running...
[root@node4 ha.d]# service httpd status
httpd is stopped
[root@node5 ~]# ifconfig
...
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:96:45:92  
          inet addr:192.168.30.101  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...

[root@node5 ~]# service httpd status   # httpd has started normally on node5
httpd (pid  3820) is running...
[root@node5 ~]# service mysqld status
mysqld is stopped

    Client-side tests:

[root@node1 ~]# curl 192.168.30.101
hello
[root@node1 ~]# mysql -u root -h 192.168.30.100 -p
...
mysql> create database hellodb;
Query OK, 1 row affected (0.03 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

    Simulating resource failover

[root@node4 ~]# /usr/lib64/heartbeat/hb_standby   # force node4 to become a standby
2016/04/14_07:07:52 Going standby [all].
[root@node4 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:32:52:1C  
          inet addr:192.168.30.14  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe32:521c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:91672 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87427 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:56564639 (53.9 MiB)  TX bytes:38563809 (36.7 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:48 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3120 (3.0 KiB)  TX bytes:3120 (3.0 KiB)
[root@node5 ~]# ifconfig   # the mysql service has moved to node5
...
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:96:45:92  
          inet addr:192.168.30.101  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:96:45:92  
          inet addr:192.168.30.100  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
mysql> select version();
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    2
Current database: *** NONE ***

+------------+
| version()  |
+------------+
| 5.1.73-log |
+------------+
1 row in set (0.13 sec)


Case 2: high availability for an LVS director based on heartbeat v2

   Compared with heartbeat v1's haresources, heartbeat v2's resource manager crm is more powerful and can define the placement preferences of resources far more flexibly.

   This example uses hb_gui, crm's GUI configuration interface, to configure the resources

   We carry on with the lab environment from Case 1: node4 and node5 become the HA nodes for the LVS director, node1 and node2 act as the back-end real servers (RS), and node3 provides shared storage for RS1 and RS2. LVS installation and configuration are omitted here (see http://9124573.blog.51cto.com/9114573/1759997).

  1. Lab environment:


      node4: 192.168.30.14, LVS director HA node

      node5: 192.168.30.15, LVS director HA node

      node1: 192.168.30.10, RS1

      node2: 192.168.30.20, RS2

      node3: 192.168.30.13, NFS, shared storage for RS1 and RS2

    Resources needed for LVS director high availability:

      vip: 192.168.30.102

      ipvsadm

  2. Prepare the back-end real servers

[root@node1 ~]# mount -t nfs 192.168.30.13:/web /var/www/html
[root@node1 ~]# ls /var/www/html
index.html
[root@node1 ~]# cat /var/www/html/index.html
hello
[root@node1 ~]# service httpd start
Starting httpd:                                      [ OK ]
# perform the same steps on the other RS

  3. Enable the crm resource manager and configure the resources through the hb_gui GUI

      service ipvsadm save   # the lvs rules have to be saved this way so that the resource agent can start and stop them via service ipvsadm start/stop

      vim /etc/ha.d/ha.cf

        crm on   # enable the crm resource manager; once crm is enabled, haresources no longer has any effect

      rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm

      passwd hacluster   # installing heartbeat-gui creates a new user, hacluster, the identity the GUI client uses when connecting to crm; set a password for it on whichever node you intend to connect to for resource configuration

      service heartbeat start   # crm runs as the daemon mgmtd, listening on 5560/tcp

      hb_gui &   # launch the GUI client and connect to the crm daemon to configure resources

        The generated resource configuration file is cib.xml, kept in /var/lib/heartbeat/crm by default; the nodes synchronize it among themselves automatically
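        The live CIB can also be inspected from the command line instead of the GUI (a sketch using the CLI tools that ship with heartbeat v2; flags may vary by build):

          cibadmin -Q   # dump the current cib.xml to stdout
          crm_mon -1    # one-shot view of node and resource status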

[root@node4 ~]# service heartbeat stop;ssh root@node5 'service heartbeat stop'
[root@node4 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.102:80 rr
  -> 192.168.30.10:80            Route   1      0          0         
  -> 192.168.30.20:80            Route   1      0          0         
[root@node4 ~]# service ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm:      [  OK  ]
[root@node4 ~]# scp /etc/sysconfig/ipvsadm root@node5:/etc/sysconfig/   # sync the lvs rules file between the nodes
ipvsadm
[root@node4 ~]# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:heartbeat-gui          ########################################### [100%]

[root@node4 ~]# tail -1 /etc/passwd   # as shown, installing the heartbeat-gui package creates a new user, hacluster
hacluster:x:496:493:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
[root@node4 ~]# passwd hacluster   # set a password for that user
Changing password for user hacluster.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@node5 ~]# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm   # the other node needs heartbeat-gui as well
Preparing...                ########################################### [100%]
   1:heartbeat-gui          ########################################### [100%]
[root@node4 ~]# vim /etc/ha.d/ha.cf
crm on   # enable the crm resource manager

[root@node4 ~]# scp /etc/ha.d/ha.cf root@node5:/etc/ha.d/   # sync the main configuration file
ha.cf                               100%   10KB  10.3KB/s   00:00    


[root@node4 ~]# service heartbeat start;ssh root@node5 'service heartbeat start'   # start heartbeat
Starting High-Availability services: 
Done.

Starting High-Availability services: 
Done.

[root@node4 ~]# netstat -tuanp   # a new listening port has appeared: crm runs as the daemon mgmtd, listening on 5560/tcp
...  
tcp        0      0 0.0.0.0:5560           0.0.0.0:*         LISTEN      23134/mgmtd         
udp        0      0 225.1.1.1:694           0.0.0.0:*                   23122/heartbeat: wr 
...
[root@node4 ~]# hb_gui &   # launch the GUI client and start configuring resources

(screenshots of the hb_gui resource-configuration steps omitted)

[root@node4 ~]# ls /var/lib/heartbeat/crm
cib.xml  cib.xml.last  cib.xml.sig  cib.xml.sig.last
[root@node4 ~]# ifconfig   # the IP has been configured successfully
...
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:32:52:1C  
          inet addr:192.168.30.102  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
[root@node4 ~]# ipvsadm   # the lvs rules have been loaded as well
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.102:http rr
  -> node1:http                   Route   1      0          0         
  -> node2:http                   Route   1      0          0

[root@node4 ~]# crm_mon   # this command monitors the state of the HA nodes
Last updated: Mon Apr 18 16:17:55 2016
Current DC: node5 (71a77c25-9cc0-4169-9d88-cb224979fa27)
2 Nodes configured.
1 Resources configured.
============

Node: node5 (71a77c25-9cc0-4169-9d88-cb224979fa27): online
Node: node4 (5eb11525-5dcd-4546-8ed1-ad0bc562a8be): online

Resource Group: lvs
    vip (ocf::heartbeat:IPaddr):        Started node4
    ipvsadm     (lsb:ipvsadm):  Started node4
[root@node3 ~]# curl 192.168.30.102    # test succeeded
hello


  4. With ipvsadm as the resource agent, as above, the director has no way of knowing whether the back-end real servers are healthy. For LVS, heartbeat provides a dedicated package, heartbeat-ldirectord, which both makes LVS highly available and health-checks the back-end RS

      yum -y install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm

      cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/

      vim /etc/ha.d/ldirectord.cf   # the lvs rules and the RS health-check method are defined directly in ldirectord's configuration file

      scp /etc/ha.d/ldirectord.cf root@node5:/etc/ha.d/

      When configuring the ldirectord resource, add a parameter giving the path to its configuration file

      With haresources, the ldirectord resource could be written as:

         node4 192.168.30.102/24/eth0/192.168.30.255 ldirectord::/etc/ha.d/ldirectord.cf

# install the heartbeat-ldirectord package on both nodes
[root@node5 ~]# yum -y install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
...
Installed:
  heartbeat-ldirectord.x86_64 0:2.1.4-12.el6
[root@node4 ~]# yum -y install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
...
Installed:
  heartbeat-ldirectord.x86_64 0:2.1.4-12.el6
[root@node4 ~]# rpm -ql heartbeat-ldirectord
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/heartbeat-ldirectord-2.1.4
/usr/share/doc/heartbeat-ldirectord-2.1.4/COPYING
/usr/share/doc/heartbeat-ldirectord-2.1.4/README
/usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf   # ldirectord's configuration-file template
/usr/share/man/man8/ldirectord.8.gz
[root@node4 ~]# ls /etc/ha.d/resource.d/   # a new resource agent, ldirectord, has appeared
apache        db2    Filesystem    ICP  IPaddr   IPsrcaddr  ldirectord  LVM                MailTo  portblock  SendArp    WAS       Xinetd
AudibleAlarm  Delay  hto-mapfuncs  ids  IPaddr2  IPv6addr   LinuxSCSI   LVSSyncDaemonSwap  OCF     Raid1
[root@node4 ~]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/
[root@node4 ~]# vim /etc/ha.d/ldirectord.cf
...
# Global Directives
checktimeout=3
checkinterval=1   # interval between health checks
#fallback=127.0.0.1:80
autoreload=yes
#logfile="/var/log/ldirectord.log"
#logfile="local0"
#emailalert="admin@x.y.z"
#emailalertfreq=3600
#emailalertstatus=all
quiescent=yes

# Sample for an http virtual service
virtual=192.168.30.102:80
        real=192.168.30.10:80 gate
        real=192.168.30.20:80 gate
        fallback=127.0.0.1:80 gate
        service=http   # protocol used to probe the back-end RS
        request="test.html"
        receive="OK"
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
...
[root@node4 ~]# scp /etc/ha.d/ldirectord.cf root@node5:/etc/ha.d/   # sync ldirectord.cf between the nodes
ldirectord.cf
[root@node4 ~]# service httpd start;ssh root@node5 'service httpd start'   # start httpd on the HA nodes themselves, to act as the fallback
[root@node3 ~]# vim /web/test.html   # create a health-check page on the NFS server; its file name and content must match the definitions in ldirectord.cf

OK
[root@node4 ha.d]# hb_gui &   # launch the GUI client to configure the resources
[1] 30759

(screenshots of the hb_gui ldirectord resource configuration omitted)

[root@node3 ~]# curl 192.168.30.102
hello

    Simulating a back-end RS failure

[root@node1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node2 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node4 ~]# ipvsadm -L -n   # the director has detected the back-end RS failures and automatically raised the fallback server's weight to 1
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.102:80 rr
  -> 127.0.0.1:80                 Local   1      0          6         
  -> 192.168.30.10:80             Route   0      0          1         
  -> 192.168.30.20:80             Route   0      0          0
[root@node3 ~]# curl 192.168.30.102   # the client can still access the service normally
hello