RAID Disk Arrays

When building a RAID array on a Linux system, the array members can be three or more partitions on a single disk, or three or more whole disks. This article uses several whole disks as the example and builds a RAID5 array.

Lab description:

The lab runs in VMware. The system has one existing disk, sda, plus six newly added SCSI disks: sdb, sdc, sdd, sde, sdf and sdg. Four of them serve as array member disks, one as a spare disk, and the last one is kept in reserve for later use.

Lab steps:

Step 1: First check the disk devices in the system: fdisk -l   (output omitted)

Step 2: Now create the RAID. This mainly uses the mdadm command, which is provided by the mdadm package on the RHEL6 installation disc; if it is not installed yet, install it first.
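
A minimal check-and-install sketch, assuming the RHEL6 installation disc has already been set up as a yum repository (any repository that provides mdadm works):

[root@localhost ~]# rpm -q mdadm            //check whether the mdadm package is already installed
[root@localhost ~]# yum install -y mdadm    //if it is not, install it from the configured repository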

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]

Parameter notes:

--create   //create a new RAID array
--auto=yes /dev/md0  //the new software RAID device is md0; the md number can range from 0 to 9
--level=5   //the RAID level; 5 means RAID5 is being created
--raid-devices //the number of disks used as active array members
--spare-devices //the number of disks used as spare disks
/dev/sd[b-f]  //the devices used by the array; this can also be written as /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

This command can also be written in short form:

mdadm -C /dev/md0 -l5 -n4 -x1 /dev/sd[b-f]

Step 3: Check whether the RAID was created successfully and is running normally. There are two ways to do this (the larger the disks, the longer the array takes to build, so you may have to wait quite a while before the information below appears):

Run mdadm --detail /dev/md0 to view the detailed information of the RAID:

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version: 1.2
  Creation Time: Wed Dec 19 02:58:51 2012
     Raid Level: raid5
     Array Size: 62909952 (60.00 GiB 64.42 GB)
  Used Dev Size: 20969984 (20.00 GiB 21.47 GB)
   Raid Devices: 4
  Total Devices: 5
    Persistence: Superblock is persistent
 
    Update Time: Wed Dec 19 03:10:38 2012
          State: clean
 Active Devices: 4
Working Devices : 5
 Failed Devices: 0
  Spare Devices: 1
 
         Layout: left-symmetric
     Chunk Size: 512K
 
           Name: localhost.localdomain:0  (local to host localhost.localdomain)
           UUID: 0459e403:4ba8e027:08852362:64f3d361

         Events: 26

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       5       8       64        3      active sync   /dev/sde
       4       8       80        -      spare   /dev/sdf

Checking the /proc/mdstat file is a simpler and clearer way to see the creation and running status of the RAID:

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sde[5] sdf[4](S) sdd[2] sdc[1] sdb[0]
      62909952 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  3.8% ...
      
unused devices: <none>

Note: the recovery line in the output above shows the md array still being built.

S marks the spare disk; UUUU means all member disks are healthy, and an _ in that field marks a member that is not.

Once the build finishes, the output shows the array as normal (it took about 3 minutes here).
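
To follow the build without re-running cat by hand, the standard watch utility can be used; a small sketch:

[root@localhost ~]# watch -n 1 cat /proc/mdstat    //refresh the status once per second; press Ctrl+C to exit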

Run mdadm -Q /dev/md0 (the -Q option queries brief device information):

[root@localhost ~]# mdadm -Q /dev/md0 
/dev/md0: 59.100GiB raid5 4 devices, 1 spare. Use mdadm --detail for more detail.
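
The md superblock stored on an individual member disk can also be inspected; a sketch (output omitted):

[root@localhost ~]# mdadm -E /dev/sdb    //--examine prints the RAID metadata recorded on this member disk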

Step 4: Format the newly created RAID and mount it for use

[root@localhost ~]# mkfs.ext4 /dev/md0 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
3932160 inodes, 15727488 blocks
786374 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
480 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
       4096000, 7962624, 11239424
 
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
 
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mkdir /raid5
[root@localhost ~]# mount /dev/md0 /raid5/

Check whether the newly mounted RAID is usable:

[root@localhost ~]# df -Th
Filesystem   Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
             ext4     19G  5.1G  13G  30% /
tmpfs       tmpfs    250M  264K 250M   1% /dev/shm
/dev/sda1    ext4    485M   30M  430M   7% /boot
/dev/sr0  iso9660    2.9G  2.9G    0 100% /media/RHEL_6.1 i386 Disc 1
/dev/md0      ext4    60G  180M   56G  1% /raid5
[root@localhost ~]# cd /raid5/
[root@localhost raid5]# touch a.txt
[root@localhost raid5]# cp /etc/passwd ./
[root@localhost raid5]# ls
a.txt  lost+found  passwd

Step 5: Configure the RAID to assemble automatically at boot and mount automatically:

First create the /etc/mdadm.conf configuration file:

[root@localhost raid5]# mdadm --detail /dev/md0 | grep -i UUID > /etc/mdadm.conf

The file created this way needs a small edit so that it ends up looking like this:

[root@localhost raid5]# vi /etc/mdadm.conf
ARRAY   /dev/md0          UUID=0459e403:4ba8e027:08852362:64f3d361
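
Alternatively, a complete ARRAY line for every running array can be generated in one go; a sketch (the exact fields in the output differ slightly between mdadm versions):

[root@localhost raid5]# mdadm --detail --scan > /etc/mdadm.conf    //writes an ARRAY line for each active array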

Then edit /etc/fstab to mount the array automatically at boot:

[root@localhost raid5]# vi /etc/fstab
/dev/md0            /raid5           ext4              defaults        0 0
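
Before rebooting, the new fstab entry can be sanity-checked without a full restart; a quick sketch:

[root@localhost raid5]# cd / ; umount /raid5    //leave the mount point and unmount the array
[root@localhost /]# mount -a                    //mount everything listed in /etc/fstab; an error here means the entry is wrong
[root@localhost /]# df -Th | grep md0           //confirm /dev/md0 is mounted on /raid5 again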

Reboot the system to confirm that automatic mounting at boot works.

Step 6: Simulate a failed disk in the RAID5 array to verify the spare disk (RAID5 tolerates one failed disk; the spare disk we configured immediately takes over for the failed disk and the array is rebuilt, keeping the data safe).

[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0

//This command marks disk sdd as faulty

Check the result with mdadm --detail /dev/md0:

[root@localhost raid5]# mdadm --detail /dev/md0
/dev/md0:
        Version: 1.2
  Creation Time: Wed Dec 19 02:58:51 2012
     Raid Level: raid5
     Array Size: 62909952 (60.00 GiB 64.42 GB)
  Used Dev Size: 20969984 (20.00 GiB 21.47 GB)
   Raid Devices: 4
  Total Devices: 5
    Persistence: Superblock is persistent
 
    Update Time: Wed Dec 19 03:52:46 2012
          State: clean, degraded, recovering
 Active Devices: 3
Working Devices : 4
 Failed Devices: 1
  Spare Devices: 1
 
         Layout: left-symmetric
     Chunk Size: 512K
 
 Rebuild Status: 11% complete
 
           Name: localhost.localdomain:0  (local to host localhost.localdomain)
           UUID: 0459e403:4ba8e027:08852362:64f3d361
         Events: 29
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       80        2      spare rebuilding   /dev/sdf
       5       8       64        3      active sync   /dev/sde

       2       8       48        -      faulty spare   /dev/sdd

You can also cat the /proc/mdstat file to watch the RAID5 rebuild in progress:

[root@localhost raid5]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sde[5] sdf[4] sdd[2](F) sdc[1] sdb[0]
      62909952 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      [=====>...............]  recovery = 28.2% (5922304/20969984) finish=8.5min
      
unused devices: <none>

Here is the result after the rebuild has finished:

[root@localhost raid5]# mdadm --detail /dev/md0 
/dev/md0:
        Version: 1.2
  Creation Time: Wed Dec 19 02:58:51 2012
     Raid Level: raid5
     Array Size: 62909952 (60.00 GiB 64.42 GB)
  Used Dev Size: 20969984 (20.00 GiB 21.47 GB)
   Raid Devices: 4
  Total Devices: 5
    Persistence: Superblock is persistent
 
    Update Time: Wed Dec 19 04:04:08 2012
          State: clean
 Active Devices: 4
Working Devices : 4
 Failed Devices: 1
  Spare Devices: 0
 
         Layout: left-symmetric
     Chunk Size: 512K
 
           Name: localhost.localdomain:0  (local to host localhost.localdomain)
           UUID: 0459e403:4ba8e027:08852362:64f3d361
         Events: 65

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       80        2      active sync   /dev/sdf
       5       8       64        3      active sync   /dev/sde

       2       8       48        -      faulty spare   /dev/sdd
[root@localhost raid5]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sde[5] sdf[4] sdd[2](F) sdc[1] sdb[0]
      62909952 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
/raid5 is still usable as normal:
[root@localhost ~]# cd /raid5/
[root@localhost raid5]# ls
a.txt lost+found  passwd
[root@localhost raid5]# touch b.txt
[root@localhost raid5]# ls
a.txt  b.txt  lost+found passwd
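
In practice a failure would not be noticed by running these commands by hand; mdadm can also watch the array in the background and report problems. A minimal sketch, assuming local mail delivery to root works:

[root@localhost raid5]# mdadm --monitor --daemonise --mail=root --delay=300 /dev/md0    //mail root when a disk fails or a spare takes over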

Step 7: Remove the failed disk and add a new disk

First remove the failed disk sdd:

[root@localhost raid5]# mdadm --manage /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0
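
The fail and remove steps can also be combined into a single invocation; a sketch:

[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdd --remove /dev/sdd    //mark the disk faulty and hot-remove it in one command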

Then add a new disk as the spare disk:

[root@localhost raid5]# mdadm --manage /dev/md0 --add /dev/sdg
mdadm: added /dev/sdg       //the new disk sdg has been added

Then check again by running the following command:

[root@localhost raid5]# mdadm --detail /dev/md0 
/dev/md0:
        Version: 1.2
  Creation Time: Wed Dec 19 02:58:51 2012
     Raid Level: raid5
     Array Size: 62909952 (60.00 GiB 64.42 GB)
  Used Dev Size: 20969984 (20.00 GiB 21.47 GB)
   Raid Devices: 4
  Total Devices: 5
    Persistence: Superblock is persistent
 
    Update Time: Wed Dec 19 04:12:20 2012
          State: clean
 Active Devices: 4
Working Devices : 5
 Failed Devices: 0
  Spare Devices: 1

         Layout: left-symmetric
     Chunk Size: 512K

           Name: localhost.localdomain:0  (local to host localhost.localdomain)
           UUID: 0459e403:4ba8e027:08852362:64f3d361
         Events: 67

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       80        2      active sync   /dev/sdf
       5       8       64        3      active sync   /dev/sde
       6       8       96        -      spare   /dev/sdg

Step 8: How to shut down the RAID:

When you no longer need the RAID you set up, it can be shut down as follows.

Unmount /dev/md0 and delete or comment out its entry in /etc/fstab:

[root@localhost ~]# umount /dev/md0
[root@localhost ~]# vi /etc/fstab
#UUID=6d2590e4-20e9-4d54-9a37-1975fb9fcf47 /raid5   ext4  defaults  0 0

Use the mdadm command to stop /dev/md0, and comment out the entry in /etc/mdadm.conf:

[root@localhost ~]# vi /etc/mdadm.conf
#ARRAY  /dev/md0          UUID=0459e403:4ba8e027:08852362:64f3d361

Simulate a failure on each active disk in the md0 array first (--fail), then remove the disks (--remove), and finally run the command below to stop the RAID:

[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sde
mdadm: set /dev/sde faulty in /dev/md0
[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdf
mdadm: set /dev/sdf faulty in /dev/md0
[root@localhost raid5]# mdadm --manage /dev/md0 --fail /dev/sdg
mdadm: set /dev/sdg faulty in /dev/md0
[root@localhost ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
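
If the member disks will be reused later, the leftover md superblocks can be wiped so the disks no longer look like array members; a sketch (this destroys the array metadata on those disks):

[root@localhost ~]# mdadm --zero-superblock /dev/sd[b-g]    //erase the md superblock from each former member disk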

RAID0 and RAID1 are set up in the same way as RAID5:
RAID0: striping, with data read and written across the disks in parallel
RAID1: mirrored disk array
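
For reference, a sketch of the corresponding create commands (device names and disk counts here are only examples):

[root@localhost ~]# mdadm -C /dev/md0 -l0 -n2 /dev/sd[b-c]        //RAID0: two striped disks, no redundancy
[root@localhost ~]# mdadm -C /dev/md1 -l1 -n2 -x1 /dev/sd[b-d]    //RAID1: two mirrored disks plus one spare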