Prepare a virtual machine. After the operating system has been installed, shut it down and add four 40 GB virtual hard disks.
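Before creating the array, it is worth confirming that the new disks are visible to the system and carry no leftover metadata. A minimal check, assuming the four 40 GB disks appear as /dev/sdb through /dev/sde as in the steps below:

# List block devices; the four new 40 GB disks should show up with no partitions
lsblk
# Optionally verify that none of them contains old RAID metadata
mdadm -E /dev/sdb /dev/sdc /dev/sdd /dev/sde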

This is where the mdadm parameters come in. The -C parameter creates a new RAID array, and -v prints the creation process verbosely; the device name /dev/md0 appended after them becomes the name of the resulting RAID array. The -a yes parameter automatically creates the device file, -n 4 means four disks are used to build the array, and -l 10 selects the RAID 10 level. Finally, append the names of the four disk devices and the job is done:

mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
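Once the create command returns, mdadm starts an initial synchronization of the mirrors in the background. A quick way to watch its progress is /proc/mdstat (a sketch; the exact output varies with the kernel version):

# Show the current RAID status and resync progress
cat /proc/mdstat
# Or refresh the view every second until the resync finishes
watch -n 1 cat /proc/mdstat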

Format the newly built RAID array as ext4:

[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
5242880 inodes, 20954624 blocks
1047731 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2168455168
640 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@localhost ~]# 

Next, create a mount point and mount the device. After the mount succeeds, the usable capacity is roughly 80 GB: RAID 10 mirrors the data, so the four 40 GB disks provide half of their raw capacity (4 × 40 GB / 2 = 80 GB).

[root@localhost ~]# mkdir /RAID
[root@localhost ~]# mount /dev/md0 /RAID
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  6.6M  480M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   37G  1.4G   36G   4% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/md0                  79G   57M   75G   1% /RAID
[root@localhost ~]# 

Finally, view the detailed information of the /dev/md0 RAID array, and write the mount entry into the /etc/fstab configuration file so that it takes effect permanently:

[root@localhost ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:32:12 2021
             State : active 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@localhost ~]# 

[root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@localhost ~]#
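To reduce the chance of surprises at the next reboot, the new fstab entry can be tested right away, and the array definition can optionally be recorded in /etc/mdadm.conf so the device keeps the /dev/md0 name. This is a minimal sketch and not part of the transcript above:

# Re-mount everything from /etc/fstab; an error here means the new entry is wrong
umount /RAID && mount -a && df -h /RAID
# Optionally record the array so it is assembled under the same name at boot
mdadm -D --scan >> /etc/mdadm.conf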

After confirming that one of the physical disks has failed and can no longer be used normally, mark it as faulty with the mdadm command (the -f parameter), then check the status of the RAID array; the state can be seen to have changed:

[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:43:01 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 21

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       0       8       16        -      faulty   /dev/sdb
[root@localhost ~]# 
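Before the broken disk is physically swapped out, the faulty member can also be detached from the array with the -r parameter. A minimal sketch (in this virtual-machine exercise, the reboot described below achieves the same effect):

# Remove the faulty member from the array
mdadm /dev/md0 -r /dev/sdb
# Confirm that only the three working members remain listed
mdadm -D /dev/md0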

At the RAID 10 level, a single failed disk in one of the underlying RAID 1 mirror pairs does not prevent the RAID 10 array from being used. Once a new disk has been purchased, it only needs to be swapped in with the mdadm command, and in the meantime files can still be created and deleted in the /RAID directory as normal. Because the disks here are simulated inside a virtual machine, first restart the system, then add the new disk to the RAID array:

[root@localhost ~]# umount /RAID/
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
[root@localhost ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:46:43 2021
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 4% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 29

    Number   Major   Minor   RaidDevice State
       4       8       16        0      spare rebuilding   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@localhost ~]#
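The rebuild continues in the background; once the Rebuild Status line disappears and the state returns to clean, the array is fully redundant again. A small sketch for keeping an eye on it and putting the array back into service:

# Check rebuild progress from the array details (the Rebuild Status line shown above)
mdadm -D /dev/md0 | grep 'Rebuild Status'
# Once the state is clean again, remount the array
mount /dev/md0 /RAID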