System environment: RHEL 6 x86_64, with iptables and SELinux disabled

Hosts: 192.168.122.119  server19.example.com

       192.168.122.25   server25.example.com

       192.168.122.1    desktop36.example.com

(Note: the clocks of all hosts must be kept in sync)

Required package: drbd-8.4.3.tar.gz


Yum repository configuration:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=ftp://192.168.122.1/pub/yum
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[LoadBalancer]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ResilientStorage]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ScalableFileSystem]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1
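A quick way to confirm that all five repositories resolve (assuming the stanzas above are saved in a file under /etc/yum.repos.d/) is:

[root@server19 ~]# yum clean all
[root@server19 ~]# yum repolist

Each repository should be listed with a non-zero package count.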


# Configure Pacemaker

Perform the following step on both server19 and server25:

[root@server19 ~]# yum install corosync pacemaker -y

 

Perform the following steps on server19 (the generated key and configuration file are copied over to server25 at the end):

[root@server19 ~]# cd /etc/corosync/
[root@server19 corosync]# corosync-keygen    (generating the key requires entropy, so keep typing on the keyboard until it finishes)
[root@server19 corosync]# cp corosync.conf.example corosync.conf
[root@server19 corosync]# vim corosync.conf 
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.122.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver: 0
        name: pacemaker
        use_mgmtd: yes
}
 [root@server19 corosync]# scp corosync.conf authkey root@192.168.122.25:/etc/corosync/

 

Perform the following step on both server19 and server25:

[root@server19 corosync]# /etc/init.d/corosync start
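Before touching pacemaker it is worth confirming that corosync itself formed a ring; one possible check on either node is:

[root@server19 corosync]# corosync-cfgtool -s    (ring status; should report "no faults")
[root@server19 corosync]# grep -i pacemaker /var/log/cluster/corosync.log | tail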


At this point, tailing the log (tail -f /var/log/cluster/corosync.log) shows an error like the following:

Jul 27 02:31:31 [1461] server19.example.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

 

The fix is as follows:

[root@server19 corosync]# crm    (Note: since RHEL 6.4 the crm command is no longer included in the pacemaker package; install crmsh separately)
 crm(live)# configure 
 crm(live)configure# property stonith-enabled=false 
 crm(live)configure# commit 
 crm(live)configure# quit 
[root@server19 corosync]# crm_verify -L    (checks the configuration for errors)

 

Now run crm_mon to open the monitoring view; if both hosts show as Online, the setup is working.


 

 

The following configuration only needs to be done on any one node; everything is synchronized to the other node automatically.

# Add a virtual IP

[root@server19 corosync]# crm
 crm(live)# configure
 crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.122.178 cidr_netmask=32 op monitor interval=30s 
 crm(live)configure# commit 
 crm(live)configure# quit
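To confirm the address is really up, check the node that crm_mon reports as running vip (eth0 is an assumption here; use the actual interface name):

[root@server19 corosync]# ip addr show eth0 | grep 192.168.122.178
[root@server19 corosync]# ping -c 1 192.168.122.178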


 

 

# Ignore the quorum check

With only two nodes, losing either node also means losing quorum, and the default policy would then stop all resources; setting no-quorum-policy=ignore keeps the resources running on the surviving node.

[root@server19 corosync]# crm
 crm(live)# configure 
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit
crm(live)configure# quit
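The properties and resources defined so far can be reviewed at any time with:

[root@server19 corosync]# crm configure show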


# Add the Apache service

1. Perform the following steps on both server19 and server25:

[root@server19 corosync]# vim /etc/httpd/conf/httpd.conf
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
[root@server19 corosync]# echo `hostname` > /var/www/html/index.html
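The ocf:heartbeat:apache agent uses the /server-status page configured above for its monitor operation, so it helps to verify the URL by hand before handing httpd over to the cluster (start httpd manually only for this test, then stop it again, since pacemaker will manage it from now on):

[root@server19 corosync]# /etc/init.d/httpd start
[root@server19 corosync]# curl -s http://127.0.0.1/server-status | head
[root@server19 corosync]# /etc/init.d/httpd stop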


2. Perform the following step on either server19 or server25 (the crm configuration is shared by both nodes):

[root@server19 corosync]# crm
crm(live)# configure
crm(live)configure# primitive apache ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
crm(live)configure# commit
crm(live)configure# quit

Running crm_mon at this point may show vip and apache running on different nodes:


 

 

Fix (tie apache and vip together with a colocation constraint):

[root@server19 corosync]# crm
crm(live)# configure
 crm(live)configure# colocation apache-with-vip inf: apache vip
 crm(live)configure# commit 
 crm(live)configure# quit

 


Visiting 192.168.122.178 now returns the page served from server19.


# Configure a preferred (master) node

[root@server19 corosync]# crm
crm(live)# configure
crm(live)configure# location master-node apache 10: server19.example.com
 crm(live)configure# commit 
 crm(live)configure# quit
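One way to see the effect of the score-10 location preference is to put server19 in standby and then bring it back; the resources should move to server25 and, because of the constraint (and the default resource-stickiness of 0), move back to server19 afterwards:

[root@server19 corosync]# crm node standby server19.example.com
[root@server19 corosync]# crm_mon -1                       (resources now on server25)
[root@server19 corosync]# crm node online server19.example.com
[root@server19 corosync]# crm_mon -1                       (resources back on server19)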


# Configure fencing

Perform the following steps on desktop36:

[root@desktop36 ~]# yum list fence*
[root@desktop36 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virt-0.2.3-9.el6.x86_64 -y
[root@desktop36 ~]# fence_virtd -c 
Module search path [/usr/lib64/fence-virt]: 

 Available backends: 
 libvirt 0.1 
 Available listeners: 
 multicast 1.1 

 Listener modules are responsible for accepting requests 
 from fencing clients. 
 
 Listener module [multicast]: 

 The multicast listener module is designed for use environments 
 where the guests and hosts may communicate over a network using 
 multicast. 

 The multicast address is the address that a client will use to 
 send fencing requests to fence_virtd. 

 Multicast IP Address [225.0.0.12]: 

 Using ipv4 as family. 

 Multicast IP Port [1229]: 

 Setting a preferred interface causes fence_virtd to listen only 
 on that interface. Normally, it listens on the default network 
 interface. In environments where the virtual machines are 
 using the host machine as a gateway, this *must* be set 
 (typically to virbr0). 
 Set to 'none' for no interface.

 Interface [none]: virbr0

The key file is the shared key information which is used to 
 authenticate fencing requests. The contents of this file must 
 be distributed to each physical host and virtual machine within 
 a cluster. 

 Key File [/etc/cluster/fence_xvm.key]: 

 Backend modules are responsible for routing requests to 
 the appropriate hypervisor or management layer.

 Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or 
 servers. Do not use in environments where virtual machines 
 may be migrated between hosts. 

 Libvirt URI [qemu:///system]: 

 Configuration complete. 

 === Begin Configuration === 
 backends { 
 libvirt { 
 uri = "qemu:///system"; 
 } 

 } 

 listeners { 
 multicast { 
 interface = "virbr0"; 
 port = "1229"; 
 family = "ipv4"; 
 address = "225.0.0.12"; 
 key_file = "/etc/cluster/fence_xvm.key"; 
 } 

 } 

 fence_virtd { 
 module_path = "/usr/lib64/fence-virt"; 
 backend = "libvirt"; 
 listener = "multicast"; 
 } 

 === End Configuration === 
 Replace /etc/fence_virt.conf with the above [y/N]? y

Note: except for the Interface prompt (enter the bridge the virtual machines use to talk to the host, here virbr0) and the Backend module prompt (enter libvirt), every prompt above can be accepted at its default by pressing Enter.

[root@desktop36 ~]# mkdir /etc/cluster
 [root@desktop36 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1


Perform the following steps on both server19 and server25:

[root@server19 corosync]# mkdir /etc/cluster
[root@server19 corosync]# yum install fence-virt-0.2.3-9.el6.x86_64 -y


Perform the following steps on desktop36:

[root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.119:/etc/cluster/ 
 [root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.25:/etc/cluster/ 
 [root@desktop36 ~]# /etc/init.d/fence_virtd start 
 [root@desktop36 ~]# netstat -anuple | grep fence 
udp 0 0 0.0.0.0:1229 0.0.0.0:* 0 823705 6320/fence_virtd

Note: seeing fence_virtd listening on UDP port 1229 confirms it started successfully.
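Before wiring fencing into pacemaker, the fence path can be tested from either cluster node (vm1 and vm2 are the libvirt domain names of server19 and server25 in this setup; substitute your own domain names):

[root@server19 corosync]# fence_xvm -o list                (should list vm1 and vm2)
[root@server19 corosync]# fence_xvm -H vm2 -o reboot       (should power-cycle server25)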

 

Perform the following steps on either server19 or server25 (the crm configuration is shared by both nodes):

[root@server19 corosync]# crm
 crm(live)# configure 
 crm(live)configure# cib new stonith 
 crm(stonith)configure# quit 
 [root@server19 corosync]# crm
 crm(live)# configure 
 crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server19.example.com:vm1 server25.example.com:vm2" op monitor interval=30s 
 crm(live)configure# property stonith-enabled=true 
 crm(live)configure# commit 
 crm(live)configure# quit

 

Test: disconnect server19 from the network, or run echo c > /proc/sysrq-trigger on it to simulate a kernel crash, then check that its services are taken over by server25 and that server19 is power-cycled by the fence device.


 

 

# Configure DRBD

Add a virtual disk of the same size to each of server19 and server25.

Perform the following steps on server19 (the resulting RPM packages are copied to server25 afterwards):

[root@server19 kernel]# yum install kernel-devel make -y
[root@server19 kernel]# tar zxf drbd-8.4.3.tar.gz
 [root@server19 kernel]# cd drbd-8.4.3 
 [root@server19 drbd-8.4.3]# ./configure --enable-spec --with-km

The following problems may show up at this point:

(1)configure: error: no acceptable C compiler found in $PATH

(2)configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.

(3)configure: WARNING: No rpmbuild found, building RPM packages is disabled.

(4)configure: WARNING: Cannot build man pages without xsltproc. You may safely ignore this warning when building from a tarball.

(5)configure: WARNING: Cannot update buildtag without git. You may safely ignore this warning when building from a tarball.


The fixes, in the same order:

(1)[root@server19 drbd-8.4.3]# yum install gcc -y

(2)[root@server19 drbd-8.4.3]# yum install flex -y

(3)[root@server19 drbd-8.4.3]# yum install rpm-build -y

(4)[root@server19 drbd-8.4.3]# yum install libxslt -y

(5)[root@server19 drbd-8.4.3]# yum install git -y

[root@server19 kernel]# mkdir -p ~/rpmbuild/SOURCES
 [root@server19 kernel]# cp drbd-8.4.3.tar.gz ~/rpmbuild/SOURCES/ 
 [root@server19 drbd-8.4.3]# rpmbuild -bb drbd.spec 
 [root@server19 drbd-8.4.3]# rpmbuild -bb drbd-km.spec 
 [root@server19 drbd-8.4.3]# cd ~/rpmbuild/RPMS/x86_64/ 
 [root@server19 x86_64]# rpm -ivh *
 [root@server19 x86_64]# scp ~/rpmbuild/RPMS/x86_64/* root@192.168.122.25:/root/kernel/


Perform the following step on server25:

[root@server25 kernel]# rpm -ivh *


Perform the following steps on both server19 and server25:

[root@server19 ~]# fdisk -cu /dev/vda

Create a partition on it (usually just one), with partition type Linux LVM (8e).

[root@server19 ~]# pvcreate /dev/vda1

[root@server19 ~]# vgcreate koenvg /dev/vda1

[root@server19 ~]# lvcreate -L 1G -n koenlv koenvg


Perform the following steps on server19 (the resource file is copied to server25 at the end):

[root@server19 drbd.d]# cd /etc/drbd.d/
 [root@server19 drbd.d]# vim drbd.res 
resource koen {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;
        }
        on server19.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.119:7789;
        }
        on server25.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.25:7789;
        }
}
[root@server19 drbd.d]# scp drbd.res root@192.168.122.25:/etc/drbd.d/

 

Perform the following steps on both server19 and server25:

[root@server19 drbd.d]# drbdadm create-md koen

[root@server19 drbd.d]# /etc/init.d/drbd start
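Right after the first start, both nodes should report the resource as Connected and Secondary/Secondary with Inconsistent data, because no initial synchronization has taken place yet:

[root@server19 drbd.d]# cat /proc/drbd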

 

Perform the following step on server19:

[root@server19 drbd.d]# drbdsetup /dev/drbd1 primary --force

(This command makes server19 the Primary node and kicks off the initial synchronization.)

You can run watch cat /proc/drbd to follow the synchronization; once it completes, continue below and create the filesystem.

[root@server19 drbd.d]# mkfs.ext4 /dev/drbd1

[root@server19 drbd.d]# mount /dev/drbd1 /var/www/html/


Note: /dev/drbd1 must never be mounted on both hosts at the same time; it can only be mounted and used on the node whose state is Primary, while the other node remains Secondary.


Test: on server19, mount /dev/drbd1 on /var/www/html/, create or edit a few files under /var/www/html/, then unmount it (umount /var/www/html/) and run drbdadm secondary koen; on server25 run drbdadm primary koen to make it the primary node, mount /dev/drbd1 there, and check that the contents of /var/www/html/ were synchronized. A command-by-command sketch follows.
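A minimal sketch of that test (the file name testfile is just an example):

[root@server19 drbd.d]# mount /dev/drbd1 /var/www/html/
[root@server19 drbd.d]# touch /var/www/html/testfile
[root@server19 drbd.d]# umount /var/www/html/
[root@server19 drbd.d]# drbdadm secondary koen
[root@server25 ~]# drbdadm primary koen
[root@server25 ~]# mount /dev/drbd1 /var/www/html/
[root@server25 ~]# ls /var/www/html/                       (testfile should be listed)
[root@server25 ~]# umount /var/www/html/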


Extra: growing the DRBD device

Perform the following step on both server19 and server25:

[root@server19 ~]# lvextend -L +1000M /dev/mapper/koenvg-koenlv

 

Perform the following step on both server19 and server25:

[root@server19 ~]# drbdadm resize koen

 

Perform the following steps on the primary node:

[root@server25 ~]# mount /dev/drbd1 /var/www/html/

[root@server25 ~]# resize2fs /dev/drbd1


 

# Integrate Pacemaker with DRBD

Perform the following steps on either server19 or server25 (the crm configuration is shared by both nodes):

[root@server19 ~]# crm
 crm(live)# configure
 crm(live)configure# primitive webdata ocf:linbit:drbd params drbd_resource=koen op monitor interval=60s 
 crm(live)configure# ms webdataclone webdata meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true 
 crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device="/dev/drbd1" directory="/var/www/html" fstype=ext4 
 crm(live)configure# group webgroup vip apache webfs 
 crm(live)configure# colocation apache-on-webdata inf: webgroup webdataclone:Master 
 crm(live)configure# order apache-after-webdata inf: webdataclone:promote webgroup:start 
 crm(live)configure# commit 
 crm(live)configure# quit
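After the commit, crm_mon should show webdataclone with one Master and one Slave, and the webgroup resources (vip, apache, webfs) running on the node that holds the DRBD Master; /proc/drbd on that node should report a Primary role:

[root@server19 ~]# crm_mon -1
[root@server19 ~]# cat /proc/drbd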


 

Appendix: using iSCSI storage

Perform the following steps on desktop36:

[root@desktop36 ~]# yum install scsi-target-utils.x86_64 -y 
 [root@desktop36 ~]# vim /etc/tgt/targets.conf
<target iqn.2013-07.com.example:server.target1>
    backing-store /dev/vg_desktop36/iscsi-test
    initiator-address 192.168.122.119
    initiator-address 192.168.122.25
</target>
 [root@desktop36 ~]# /etc/init.d/tgtd start
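To confirm that the target and its LUN are exported as intended, tgt-admin (part of scsi-target-utils) can dump the live configuration:

[root@desktop36 ~]# tgt-admin -s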


Perform the following steps on both server19 and server25:

[root@server19 ~]# iscsiadm -m discovery -t st -p 192.168.122.1

[root@server19 ~]# iscsiadm -m node -l


Partition the iSCSI device with fdisk -cu and create a filesystem on it.

Note: this only needs to be done on one node; the other node sees the same shared device and picks up the changes automatically.
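A minimal sketch of this step, assuming the iSCSI LUN shows up as /dev/sda on the nodes (check fdisk -l or dmesg for the real device name):

[root@server19 ~]# fdisk -cu /dev/sda        (create one primary partition, e.g. /dev/sda1)
[root@server19 ~]# mkfs.ext4 /dev/sda1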

 

Perform the following steps on either server19 or server25 (the crm configuration is shared by both nodes):

[root@server19 ~]# crm
crm(live)# configure
crm(live)configure# primitive iscsi ocf:heartbeat:Filesystem params device=<iscsi partition> directory=/var/www/html fstype=ext4 op monitor interval=30s    (the original command was truncated; a Filesystem resource on the iSCSI partition is assumed here)
 crm(live)configure# colocation apache-with-iscsi inf: apache iscsi
 crm(live)configure# commit
 crm(live)configure# quit