Three clean virtual machines are needed; server3 acts as the management (M) node.
server1:172.25.45.1
server2:172.25.45.2
server3:172.25.45.3
1. Virtual machine configuration
[server1, server2]
1. Permanently turn off the firewall
/etc/init.d/iptables stop
chkconfig iptables off
2. Reconfigure the yum repository
vim /etc/yum.repos.d/dvd.repo
yum repolist
dvd.repo
# repos on instructor for classroom use
# Main rhel6.5 server
[base]
name=Instructor Server Repository
baseurl=http://172.25.254.19/rhel6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://172.25.254.19/rhel6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://172.25.254.19/rhel6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://172.25.254.19/rhel6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://172.25.254.19/rhel6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[server3]
(1) Reconfigure the yum repository
vim dvd.repo    ## same content as above
yum repolist
(2) Configure /etc/hosts (the physical host needs this as well)
vim /etc/hosts    ## add hostname entries for all three nodes
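A minimal /etc/hosts, assuming the 172.25.45.x addresses listed at the top of this note (adjust if your segment differs):
172.25.45.1    server1.example.com server1
172.25.45.2    server2.example.com server2
172.25.45.3    server3.example.com server3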
2. Install ricci and luci
[server1, server2]
yum install -y ricci
echo westos | passwd --stdin ricci    ## set the ricci user's password
chkconfig ricci on
/etc/init.d/ricci start
[server3]
yum install -y httpd
/etc/init.d/httpd start
yum install -y luci
/etc/init.d/luci start
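A quick sanity check that luci is actually listening on its port:
netstat -tnlp | grep 8084    ## luci should be bound to port 8084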
Visit https://server3.example.com:8084
Figure: log in with the root account; the password is server3's root password, i.e. the M node's password.
Top-right corner: Preferences
This shows the currently logged-in user. To add another user, that user must first attempt to log in once so its login information is recorded; the administrator then grants it permissions so it can log in.
Manage Clusters - Create
clustat
If the cluster is created repeatedly during this process, an error is reported.
Workaround:
rm -rf /etc/cluster/cluster.conf
Restart the cluster services
The following processes are started at runtime:
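A quick way to verify them, assuming a standard RHEL 6 RHCS install (the exact list may differ):
/etc/init.d/cman status          ## corosync/cman membership
/etc/init.d/rgmanager status     ## resource group manager
/etc/init.d/ricci status         ## agent used by luci
/etc/init.d/modclusterd status   ## cluster status reporting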
3. fence
[physical host]
yum search fence
fence-virtd.x86_64 : Daemon which handles requests from fence-virt
fence-virtd-libvirt.x86_64 : Libvirt backend for fence-virtd **
fence-virtd-multicast.x86_64 : Multicast listener for fence-virtd **
fence-virtd-serial.x86_64 : Serial VMChannel listener for fence-virtd
fence-virt.x86_64 : A pluggable fencing framework for virtual machines **
Roughly, you need to install the three packages marked with ** above (still to be verified).
yum install fence-virt.x86_64 fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64
cat /etc/fence_virt.conf
fence_virtd -c
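fence_virtd -c is interactive; the resulting /etc/fence_virt.conf usually ends up looking roughly like this (the bridge name br0 and the multicast address are assumptions that depend on the physical host):
listeners {
        multicast {
                interface = "br0";
                key_file = "/etc/cluster/fence_xvm.key";
                address = "225.0.0.12";
                port = "1229";
                family = "ipv4";
        }
}
backends {
        libvirt {
                uri = "qemu:///system";
        }
}
fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
}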
cd /etc/cluster    ## if the directory does not exist: mkdir /etc/cluster
dd if=/dev/urandom of=fence_xvm.key bs=128 count=1    ## can be skipped on a clean host
file fence_xvm.key
systemctl restart fence_virtd
netstat -anulp | grep :1229
scp /etc/cluster/fence_xvm.key root@172.25.19.1:/etc/cluster/
scp /etc/cluster/fence_xvm.key root@172.25.19.2:/etc/cluster/
virsh list
[server1/server2]
clustat
Make the hostnames match the virtual machine names: luci only sees hostnames, while the physical host only knows the VM (domain) names, so the two cannot be matched directly. Binding by UUID lets fence_xvm map each hostname to the correct VM.
server1 UUID:9c69bfbd-0368-45da-86ad-f358c86c1cb8
server2 UUID:6c2fba2e-8723-4650-bd13-96bf16e69887
Fence Devices - Add
Nodes - server1.example.com - Add Fence Method
Add Fence Instance
Repeat the same steps for server2.
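After these web steps, the fence section written to /etc/cluster/cluster.conf on the nodes looks roughly as follows; the method and device names (fence1, vmfence) are assumptions, the UUIDs are the ones recorded above:
<clusternode name="server1.example.com" nodeid="1">
        <fence>
                <method name="fence1">
                        <device domain="9c69bfbd-0368-45da-86ad-f358c86c1cb8" name="vmfence"/>
                </method>
        </fence>
</clusternode>
...
<fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
</fencedevices>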
Test:
[server1]
fence_node server2.example.com    ## remotely power off server2
4. webfail (apache failover)
[server1/server2]
yum install -y httpd
echo server1.example.com > /var/www/html/index.html    ## on server1
echo server2.example.com > /var/www/html/index.html    ## on server2
/etc/init.d/httpd start
Web UI: create the apache service group in luci.
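The service group built in that page is a floating IP resource plus a Script resource for httpd; in cluster.conf it comes out roughly as below. The failover domain name webfail is taken from the section title; the priorities and the 172.25.45.100 address are assumptions for illustration:
<rm>
        <failoverdomains>
                <failoverdomain name="webfail" ordered="1" restricted="1">
                        <failoverdomainnode name="server1.example.com" priority="1"/>
                        <failoverdomainnode name="server2.example.com" priority="2"/>
                </failoverdomain>
        </failoverdomains>
        <service domain="webfail" name="apache" recovery="relocate">
                <ip address="172.25.45.100/24" monitor_link="1"/>
                <script file="/etc/init.d/httpd" name="httpd"/>
        </service>
</rm>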
Test:
(1) The service is currently on server1
clusvcadm -r apache -m server2.example.com    ## relocate apache; the page now serves from server2
(2)
/etc/init.d/network stop    ## stop the network service on server1
server1 goes down and reboots automatically; the floating IP is now on server2
(3)
echo c > /proc/sysrq-trigger    ## crash server2's kernel
server2 goes down and reboots automatically; the floating IP is back on server1
5. iSCSI shared storage
[server3]
Increase server3's memory to 1024M
Add a virtual hard disk (it will appear as /dev/vdb)
yum install scsi*
[root@server3 ~]# rpm -qa | grep scsi
scsi-target-utils-1.0.24-10.el6.x86_64
vim /etc/tgt/targets.conf    ## uncomment and change lines 38-42 to the following:
 38 <target iqn.2016-06.com.example:server.disk>
 39     backing-store /dev/vdb
 40     initiator-address 172.25.19.1
 41     initiator-address 172.25.19.2
 42 </target>
/etc/init.d/tgtd start
tgt-admin -s    ## verify that the target exports /dev/vdb
[server1]
yum install -y iscsi-*
iscsiadm -t st -m discovery -p 172.25.19.3
iscsiadm -m node -l
fdisk -l
Create a partition on the new disk (it will be used as the LVM physical volume)
fdisk -cu /dev/sda
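A sketch of the fdisk dialogue for carving out a single LVM partition (keystrokes, assuming the whole disk is used):
  n    ## new partition
  p    ## primary
  1    ## partition number 1; accept the default start and end sectors (whole disk)
  t    ## change the partition type
  8e   ## Linux LVM
  w    ## write the table and quit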
[server2]
yum install -y iscsi-*
[root@server2 ~]# chkconfig iscsi --list
iscsi    0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsiadm -t st -m discovery -p 172.25.19.3
iscsiadm -m node -l
fdisk -l
/dev/sda and the partition created on server1 show up here as well
cat /proc/partitions
If the partition has not shown up, run partx -a /dev/sda to rescan.
If the following appears:
[root@server2 ~]# partx -a /dev/sda
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
then use partprobe to rescan instead.
6. ext4 (failover filesystem)
[server1] (either node will do)
pvcreate /dev/sda1
pvs
vgcreate clustervg /dev/sda1
vgs
Check whether server2 has synced (run pvs/vgs there)
lvcreate -L 2G -n lv1 clustervg
lvs
Check whether server2 has synced
mkfs.ext4 /dev/clustervg/lv1    ## both virtual machines can mount it successfully
Test:
1. Mounting
Mount it at /mnt on both server1 and server2. A file created under /mnt on server1 is not visible on server2 until server2 remounts the filesystem; a sketch follows.
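A sketch of that test (assuming both nodes mount at /mnt):
## on both server1 and server2
mount /dev/clustervg/lv1 /mnt
## on server1
touch /mnt/testfile
## on server2: the file does not appear until the filesystem is remounted
ls /mnt
umount /mnt
mount /dev/clustervg/lv1 /mnt
ls /mnt    ## now testfile is visible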
2. Web UI
Add a resource: Filesystem
Disable apache (clusvcadm -d apache)
In the apache service group, remove the Script resource, then add the Filesystem and Script resources (see the sketch below)
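The Filesystem resource (named webdata, per the resource removed later in the GFS2 section) would be filled in roughly as: name webdata, mount point /var/www/html, device /dev/clustervg/lv1, type ext4. In cluster.conf that is approximately:
<fs device="/dev/clustervg/lv1" fstype="ext4" mountpoint="/var/www/html" name="webdata"/>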
clustat
clusvcadm -e apache
clustat
cd /var/www/html/
echo www.westos.com > index.html
Test:
1)
echo c > /proc/sysrq-trigger    ## the service was originally on server1; crash its kernel
clustat
df
2)
/etc/init.d/httpd stop
watch -n 1 clustat    ## monitor from the other node; the state goes recoverable -> starting -> started
7. GFS2 (clustered filesystem)
clusvcadm -d apache
Service Groups
Remove the Filesystem and Script resources
Resources
Delete webdata
lvremove /dev/clustervg/lv1
lvs    ## check on server2 as well
lvcreate -L 2G -n demo clustervg
lvs    ## check on server2 as well
mkfs.gfs2 -p lock_dlm -t wjl_ha:mygfs2 -j 3 /dev/clustervg/demo    ## -j 3: journals = node count + 1; wjl_ha must match the cluster name
gfs2_tool sb /dev/clustervg/demo all
Test:
Mount it at /mnt on both server1 and server2: a file created under /mnt on server1 is immediately visible and writable on server2.
gfs2_tool journals /dev/clustervg/demo
Persistent mount
[server1/server2]
umount /mnt
Mount it at /var/www/html on both server1 and server2
vim /etc/fstab
UUID="d287c031-bb4a-013a-3d29-ddd651a4b168" /var/www/html gfs2 _netdev 0 0
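The UUID in that line is specific to the author's volume; to read the UUID of your own GFS2 volume and verify the entry:
blkid /dev/clustervg/demo    ## copy the reported UUID into /etc/fstab
mount -a                     ## mounts everything in fstab; confirm /var/www/html is mounted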
Add a GFS2 resource, then add the GFS2 and Script resources in Service Groups
clusvcadm -e apache
echo www.westos.com > /var/www/html/index.html
The web page shows www.westos.com
Mount the volume on the other virtual machine and edit index.html there (e.g. change it to www.westos.org); after refreshing, the page shows www.westos.org
Add journals
gfs2_jadd -j 2 /dev/clustervg/demo
lvextend -l +511 /dev/clustervg/demo
gfs2_grow /dev/clustervg/demo
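To confirm the extension took effect:
lvs /dev/clustervg/demo    ## the logical volume should show the larger size
df -h /var/www/html        ## the GFS2 filesystem should show the grown size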
Notes:
1. If things will not sync between the nodes, the cause may be that their clocks are not synchronized.
2. If the page does not load, the luci service on the M node may be stopped:
[root@server3 ~]# /etc/init.d/luci status
No PID file /var/run/luci/luci.pid
[root@server3 ~]# /etc/init.d/luci start
Starting saslauthd: [ OK ]
Start luci... [ OK ]
Point your web browser to https://server3.example.com:8084 (or equivalent) to access luci
[root@server3 ~]#