GlusterFS Cluster Setup on CentOS 7.2

(1) Unpack the installation package: glusterfs-3.8-release.zip

(2) Enter the unpacked directory and run the install script:

./install.sh

Installation complete!

(3) Turn off the firewall (run on every node):

[root@client ~]# service iptables status      // check firewall status
[root@client ~]# service iptables stop // stop it
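A note on the commands above: the `service iptables` scripts are the CentOS 6 interface. On a stock CentOS 7.2 install the default firewall is firewalld, managed through systemctl. A dry-run sketch of the equivalents (the `echo` only prints each command; drop it and run as root on a real node):

```shell
# CentOS 7 equivalents of the iptables commands above (dry run via echo).
echo systemctl stop firewalld       # stop the firewall for this boot
echo systemctl disable firewalld    # keep it from starting on reboot
```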

(4) Disable SELinux (run on every node):

[root@client ~]# getenforce         // check the SELinux state
Enforcing
[root@client ~]# setenforce 0 // turn it off for the current session
[root@client ~]# getenforce // check again
Permissive

[root@client ~]# vi /etc/selinux/config // disable it at boot as well
[root@client ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@client ~]#
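The manual edit above can also be done with a one-line sed. A minimal sketch, demonstrated here on a throwaway copy so it is safe to run anywhere; on a real node, point the sed at /etc/selinux/config as root:

```shell
# Demonstrate the edit on a sample copy of the config file.
cfg=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# The actual one-liner: force the SELINUX= line to 'disabled'.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"    # → SELINUX=disabled
```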

(5) Set up the IP-to-hostname mapping file:
[root@localhost ~]# vi /etc/hosts // edit the hosts file
Add the following entries to the file:

192.168.220.135 client
192.168.220.136 server01
192.168.220.137 server02
192.168.220.138 server03
192.168.220.139 server04

Of course, adjust these entries to match the IP addresses of your own virtual machines.
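Once the entries are in place, each name should resolve locally; `getent` is a quick way to check (it prints the matching line from /etc/hosts, or nothing if the entry is missing):

```shell
# Check whether the server01 entry resolves; print a hint if it does not.
getent hosts server01 || echo "server01 missing from /etc/hosts"
```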

Copy the hosts file to the other machines; you will be prompted to type yes and enter the root password:

[root@localhost ~]# scp /etc/hosts root@server01:/etc/hosts
The authenticity of host 'server01 (192.168.220.136)' can't be established.
RSA key fingerprint is 1b:f9:32:57:5a:df:9c:f5:58:e1:cd:1f:2c:9f:07:75.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server01,192.168.220.136' (RSA) to the list of known hosts.
root@server01's password:
hosts 100% 282 0.3KB/s 00:00
[root@localhost ~]#

Copy /etc/hosts to the remaining nodes the same way:

scp /etc/hosts root@server02:/etc/hosts
scp /etc/hosts root@server03:/etc/hosts
scp /etc/hosts root@server04:/etc/hosts
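The copies above can also be folded into one loop. A dry-run sketch: the `echo` prints each command instead of executing it; drop the `echo` to actually copy (with passwordless ssh keys set up, the loop then runs unattended):

```shell
# Push /etc/hosts to every storage node (dry run: remove 'echo' to execute).
for node in server01 server02 server03 server04; do
    echo scp /etc/hosts "root@${node}:/etc/hosts"
done
```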

(6) Enable the service at boot (on every node):
Start the Gluster service:

[root@localhost glusterfs-3.4.2]# /etc/init.d/glusterd status    // check whether the service is running
glusterd is stopped
[root@localhost glusterfs-3.4.2]# /etc/init.d/glusterd start // start the service
Starting glusterd: [ OK ]
[root@localhost glusterfs-3.4.2]#
[root@localhost glusterfs-3.4.2]# chkconfig glusterd on // make the service start automatically at boot
[root@localhost glusterfs-3.4.2]#
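As with the firewall step, `/etc/init.d` scripts and `chkconfig` are the CentOS 6 interface; on a systemd-based CentOS 7.2 install the equivalents would be the following (again a dry run via `echo`; drop it and run as root on a real node):

```shell
# CentOS 7 equivalents for starting glusterd and enabling it at boot.
echo systemctl start glusterd     # start the service now
echo systemctl enable glusterd    # start it automatically at boot
```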

(7) Add the peer nodes to the cluster (run on server01):

[root@server01 ~]# gluster peer probe server02
peer probe: success
[root@server01 ~]# gluster peer probe server03
peer probe: success
[root@server01 ~]# gluster peer probe server04
peer probe: success
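The three probes can likewise be scripted in one loop; a dry-run sketch (remove the `echo` to execute it on server01 for real):

```shell
# Probe every other node from server01 (dry run: remove 'echo' to execute).
for node in server02 server03 server04; do
    echo gluster peer probe "$node"
done
```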

(7.1) Check the peers that have joined the cluster:

[root@server01 ~]# gluster peer status
Number of Peers: 3

Hostname: server02
Port: 24007
Uuid: e3b697dd-428d-4c15-85b5-0bbd3dba1ef6
State: Peer in Cluster (Connected)

Hostname: server03
Port: 24007
Uuid: 0d61b1c9-9e98-4fec-b9ef-4add193009f6
State: Peer in Cluster (Connected)

Hostname: server04
Port: 24007
Uuid: 57144cf3-2d3a-437d-9391-293390a85d9a
State: Peer in Cluster (Connected)
[root@server01 ~]#

All three peers show as connected.

(8) Create a distributed volume
First, create a directory on each of the four machines to hold the brick data:

[root@server01 ~]# mkdir /Data
[root@server02 ~]# mkdir /Data
[root@server03 ~]# mkdir /Data
[root@server04 ~]# mkdir /Data
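Rather than logging in to each machine, the brick directories can be created over ssh from a single node; a dry-run sketch (remove the `echo` to execute, assuming root ssh access to every node):

```shell
# Create the brick parent directory on every node (dry run).
for node in server01 server02 server03 server04; do
    echo ssh "root@${node}" mkdir -p /Data
done
```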

[root@server01 ~]# gluster volume create dht server01:/Data/dht1 server02:/Data/dht2 server03:/Data/dht3 server04:/Data/dht4 force // create a distributed volume named dht
volume create: dht: success: please start the volume to access data
[root@server01 ~]# gluster volume start dht // start the volume
volume start: dht: success
[root@server01 ~]# gluster volume status dht // check the volume's status
Status of volume: dht
Gluster process                          Port    Online  Pid
------------------------------------------------------------------------------
Brick server01:/Data/dht1                49152   Y       1610
Brick server02:/Data/dht2                49152   Y       1554
Brick server03:/Data/dht3                49152   Y       1532
Brick server04:/Data/dht4                49152   Y       1549
NFS Server on localhost                  2049    Y       1620
NFS Server on server02                   2049    Y       1564
NFS Server on server03                   2049    Y       1542
NFS Server on server04                   2049    Y       1559

There are no active volume tasks
[root@server01 ~]#

(9) Mount the distributed volume (can be done on any node):

[root@server01 ~]# mount -t glusterfs server01:/dht /mnt/   // mount the dht volume at /mnt on server01
[root@server01 ~]#
[root@server01 ~]# df -h // check the mounted capacity
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   18G 1003M   16G   7% /
tmpfs                         491M     0  491M   0% /dev/shm
/dev/sda1                     477M   52M  400M  12% /boot
server01:/dht                  69G  4.0G   62G   7% /mnt
[root@server01 ~]#
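The mount above does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be added (a sketch; `_netdev` defers the mount until the network is up, and the mount point is just the one used above):

```shell
# Example /etc/fstab line for the dht volume (adjust the mount point to taste):
# server01:/dht  /mnt  glusterfs  defaults,_netdev  0 0
```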

(10) Test the volume

[root@server01 mnt]# touch file{001..010}  // create ten files
[root@server01 mnt]# ls
file001 file002 file003 file004 file005 file006 file007 file008 file009 file010
[root@server01 mnt]#

Check how the files were distributed across the cluster nodes:

[root@server01 Data]# cd dht1/
[root@server01 dht1]# ls
file002 file004 file005 file008 file009
[root@server01 dht1]#

[root@server02 Data]# cd dht2/
[root@server02 dht2]# ls
file010
[root@server02 dht2]#

[root@server03 Data]# cd dht3/
[root@server03 dht3]# ls
file003 file006
[root@server03 dht3]#

[root@server04 Data]# cd dht4/
[root@server04 dht4]# ls
file001 file007
[root@server04 dht4]#
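The uneven spread is expected: the distribute (DHT) translator hashes each file name to pick exactly one brick, so no file is duplicated and the per-brick counts add up to the ten files created:

```shell
# Per-brick counts from the listings above: dht1=5, dht2=1, dht3=2, dht4=2.
echo $((5 + 1 + 2 + 2))    # → 10
```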

This post is an older document of mine, dug up and republished.