Knowledge points covered in this blog post:


  • Introduction to RAID

RAID is short for Redundant Array of Independent Disks, sometimes simply called a disk array. Simply put, RAID is a technique that combines multiple independent physical disks in different ways into one disk group (a logical disk), providing higher storage performance than a single disk as well as data redundancy. The different ways of organizing the disks are called RAID levels; common levels include raid0, raid1, raid5, raid10 and raid50.

    Raid0 (striping): read and write performance improved; no redundancy; 100% utilization; at least 2 disks; data is distributed evenly across the disks.

    Raid1 (mirroring): read performance improved, write performance reduced; redundancy; 1/2 utilization; at least 2 disks; a full copy of the data is kept on each disk.

    Raid 1+0: read performance improved, write performance reduced; redundancy; tolerates one failed disk per mirror group; 1/2 utilization; at least 4 disks.

    Raid5 (striping with distributed parity): redundancy; (n-1)/n utilization; at least 3 disks. For example, three 20 GB disks yield (3-1)/3 ≈ 67% utilization, i.e. about 40 GB of usable space.

    JBOD: multiple disks, but they do not work simultaneously; used to combine several small spaces into one large space.

Note: RAID 10 performs better than RAID 01. Mirroring (whose purpose is to keep the service running when a device fails) cannot replace data backup.

  • The mdadm tool, its modes, and related commands

# mdadm [mode] <raiddevice> [options] <component-devices>

    Common options and their functions:

    -A    Assemble: assembly mode

    -C    Create: creation mode, e.g. mdadm -C /dev/md0

        -n  #   number of disks used to build the RAID device

        -x  #   number of hot spare disks

        -l  level   RAID level

        -a  yes   automatically create the device file for the new RAID device

        -c  chunksize   chunk size; the default is 512KB

    -F    Follow or Monitor: monitor mode

    -D    --detail: show detailed array information

    Manage mode: -f mark a member as faulty, -r remove a member, -a add a member

    -S    stop the array

    -A    reassemble (restart) an md device, e.g. # mdadm -A /dev/md10 /dev/sdb{1,3}   restart md10

    -Ds   print array summary lines, e.g. # mdadm -Ds >> /etc/mdadm.conf
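
Monitor mode (-F) is not demonstrated in the lab below; a minimal sketch (the options shown are standard mdadm monitor options; mail delivery must already work on the host):

# mdadm --monitor --scan --daemonise --mail=root --delay=300    watch every array listed in /etc/mdadm.conf in the background, poll every 300 seconds, and mail root on failure events

On CentOS 7 the bundled mdmonitor service typically does the same job once a MAILADDR line is present in /etc/mdadm.conf.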


Lab: deploying and configuring software RAID


System environment

      OS: CentOS 7.4, minimal install

      mdadm management tool: mdadm-4.0-5.el7.x86_64

      Disks: /dev/sdb, /dev/sdc     ...     /dev/sdl, 11 disks in total

Lab objectives:

    1. Using software RAID, build a RAID 0 from 2 disks and a RAID 10 from 8 disks plus 1 hot spare.

    2. Mount the arrays automatically at boot.

    3. Simulate a single-disk failure and replace the disk.

Lab steps:

Step 1. Add disks (11 × 20 GB disks added to the virtual machine) and install the mdadm management tool.


[root@study ~]# rpm -qa mdadm   check whether the mdadm tool is installed
[root@study ~]# yum install mdadm -y  install the mdadm tool

Step 2. Create one primary partition on each of the 11 disks and change the partition type to fd.



[root@study ~]# fdisk -l | grep "^Disk\b"
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Disk label type: dos
Disk identifier: 0x000d9648
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdg: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdh: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdi: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdj: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdl: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdk: 21.5 GB, 21474836480 bytes, 41943040 sectors

[root@study ~]# fdisk  /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2974d4fb.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2974d4fb

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@study ~]# echo 'n
> p
> 1
> 
> 
> t
> fd
> w' | fdisk /dev/sdc       script the partition creation and set the type to fd non-interactively
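
The same keystroke sequence can be looped over the remaining disks; a convenience sketch (not part of the original session) that assumes /dev/sdd through /dev/sdl are still blank lab disks:

# Repeat the n / p / 1 / <default start> / <default end> / t / fd / w answers on each disk:
for d in /dev/sd{d..l}; do
    echo 'n
p
1


t
fd
w' | fdisk "$d"
done

Instead of rebooting, partprobe (from the parted package) or partx -a /dev/sdX can usually make the kernel re-read a partition table without a restart.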

[root@study ~]#  reboot    reboot so the kernel re-reads the new partition tables

[root@study ~]# fdisk -l | grep "^/dev/sd"       check the disks after partitioning
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960   251658239   119536640   8e  Linux LVM
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdg1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdh1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdi1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdj1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdl1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdk1            2048    41943039    20970496   fd  Linux raid autodetect

[root@study ~]# cat /proc/partitions | grep "1$"   verify the kernel has loaded the partitions
   8        1    2097152 sda1
   8       17   20970496 sdb1
   8       49   20970496 sdd1
   8       33   20970496 sdc1
   8       65   20970496 sde1
   8       81   20970496 sdf1
   8       97   20970496 sdg1
   8      113   20970496 sdh1
   8      129   20970496 sdi1
   8      145   20970496 sdj1
   8      177   20970496 sdl1
   8      161   20970496 sdk1
  253       1   20971520 dm-1
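
As an extra cross-check (not in the original session), lsblk from util-linux shows the same disk and partition layout in tree form:

# lsblk -o NAME,SIZE,TYPE    list every disk and its partitions with sizes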



Explanation:

# fdisk /dev/sdb   partition /dev/sdb;

Enter "m" to get help;

Enter "p" to print the partition table before making changes;

Enter "n" to create a new partition;

    "e" creates an extended partition;

    "p" creates a primary partition;

Enter "t" to change the partition type;

Enter "fd" to set the type to Linux raid autodetect;

Enter "p" to print the new partition table;

Enter "w" to save the new partition and exit.


Step 3. Create a RAID 0 from 2 disks and a RAID 10 from 8 disks plus 1 hot spare.



[root@study ~]# mdadm -C /dev/md0 -a yes -c 1024 -l 0 -n 2 /dev/sd{b,c}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@study ~]# mdadm -C /dev/md10 -a yes -c 1024 -l 10 -n 8 /dev/sd{d..k}1 -x 1 /dev/sdl1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

[root@study ~]# mdadm -D /dev/md0  /dev/md10
/dev/md0:
           Version : 1.2
..
/dev/md10:
       Version : 1.2
       Creation Time : Sun Jan 28 17:29:21 2018
       Raid Level : raid10
       Array Size : 83816448 (79.93 GiB 85.83 GB)
       Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
       Raid Devices : 8
       Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 17:31:06 2018
       State : clean, resyncing 
       Active Devices : 8
       Working Devices : 9
       Failed Devices : 0
       Spare Devices : 1

        Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync

            Name : study.itwish.cn:10  (local to host study.itwish.cn)
            UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync set-A   /dev/sdd1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1
       8       8      177        -      spare   /dev/sdl1



Explanation:

# mdadm -C /dev/md0 -a yes -c 1024 -l 0 -n 2 /dev/sd{b,c}1   create a RAID 0 named /dev/md0 from the two disks /dev/sdb1 and /dev/sdc1, with a chunk size of 1024KB
    -C /dev/md0: create the array /dev/md0;
    -a yes: automatically create the device file, recommended so that everything happens in one step;
    -l 0: RAID level 0;
    -c 1024: chunk size; the default is 512KB;
    -n 2: number of active disks in the array;
    /dev/sd{b,c}1: the disks that make up the array.
# mdadm -D /dev/md0 /dev/md10   show the status of the arrays /dev/md0 and /dev/md10
    Raid Level:       array level;
    Array Size:       array capacity;
    Raid Devices:     number of RAID members;
    Total Devices:    total number of member devices in the RAID, including redundant disks or partitions such as spares;
    State:            clean / degraded / recovering. clean means healthy, degraded means a member is missing or failed, recovering means the array is rebuilding;
    Active Devices:   number of active RAID members;
    Working Devices:  number of RAID members working normally;
    Failed Devices:   number of failed RAID members;
    Spare Devices:    number of spare RAID members, which take over when an active member fails;
    UUID:             the RAID's UUID, unique within the system;
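
The resyncing state shown above can also be followed from /proc/mdstat, a supplementary check not shown in the original session:

# cat /proc/mdstat              one-shot summary of all md arrays and any rebuild progress
# watch -n 1 cat /proc/mdstat   refresh the summary every second until the resync finishes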


Step 4. Format /dev/md0 and /dev/md10 and mount them.



[root@study ~]# mke2fs -t ext4 -L myraid0 -m 2 -b 4096  /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=myraid0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=256 blocks, Stripe width=512 blocks
2621440 inodes, 10477056 blocks
209541 blocks (2.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@study ~]# mke2fs -t ext4 -L myraid10 -m 2 -b 4096  /dev/md10
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=myraid10
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=256 blocks, Stripe width=1024 blocks
5242880 inodes, 20954112 blocks
419082 blocks (2.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2168455168
640 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   
[root@study ~]# mkdir -p /raid0 /raid10             create the mount points
                
[root@study ~]# mount  -o acl /dev/md0 /raid0   mount with the acl option enabled

[root@study ~]# mount -o acl /dev/md10 /raid10/

[root@study ~]# mount | tail -2
/dev/md0 on /raid0 type ext4 (rw,relatime,seclabel,stripe=512,data=ordered)
/dev/md10 on /raid10 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)

[root@study ~]# df -h |tail -2
/dev/md0              40G   49M   39G   1% /raid0
/dev/md10             79G   57M   77G   1% /raid10
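
To spot-check the striping benefit described earlier, a rough sequential-write test can be run against the new mounts (an illustrative sketch; the file names are arbitrary and oflag=direct bypasses the page cache so the disks are actually hit):

# dd if=/dev/zero of=/raid0/ddtest bs=1M count=1024 oflag=direct
# dd if=/dev/zero of=/raid10/ddtest bs=1M count=1024 oflag=direct
# rm -f /raid0/ddtest /raid10/ddtest    clean up the test files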



Explanation:


# mke2fs   create a filesystem

    -t fstype /dev/somedevice   format the device with the given filesystem type

    -L label   set the volume label

    -b {1024|2048|4096}   set the block size

    -m   percentage of space reserved for the superuser

    -O   enable specific filesystem features


# mount [options]:

    # mount with no arguments lists every filesystem currently mounted

    -a   mount all devices in /etc/fstab that are marked for automatic mounting

    -t fstype   specify the filesystem type

    -r   mount read-only

    -w   mount read-write

    -o [options]

        remount   remount a filesystem, e.g. # mount -o remount,rw /   remount / read-write

        acl   enable ACL support

Step 5. Configure mounting at boot; the /etc/mdadm.conf file must be configured. Reboot to confirm that raid0 and raid10 are mounted automatically.



[root@study ~]# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1" >> /etc/mdadm.conf 

[root@study ~]# mdadm -Ds /dev/md{0,10} >> /etc/mdadm.conf 

[root@study ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
ARRAY /dev/md0 metadata=1.2 name=study.itwish.cn:0 UUID=491cf3f6:52a790ee:fcc232a0:83a1f6b7
ARRAY /dev/md10 metadata=1.2 spares=1 name=study.itwish.cn:10 UUID=34dfaf9d:f7664825:c6968e4c:eaa15141

[root@study ~]# vi /etc/fstab         edit the boot-time mount table and append the last two lines

/dev/md0               /raid0                 ext4      defaults        0 2 
/dev/md10              /raid10                ext4      defaults        0 2

[root@study ~]# reboot

[root@study ~]# df -l | grep "^/dev/md"      confirm after the reboot that raid0 and raid10 were mounted automatically
/dev/md10            82368920   57368  80618840   1% /raid10
/dev/md0             41118944   49176  40215220   1% /raid0

# mdadm -Ds   print array summary lines, e.g. # mdadm -Ds /dev/md0 >> /etc/mdadm.conf

Note: /etc/mdadm.conf must be configured for the arrays to mount automatically at boot. Without it, /dev/md0 and /dev/md10 will be renamed to /dev/md126 and /dev/md127 on reboot, and the filesystems will not be mounted automatically.
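
If the device names do drift, mounting by filesystem UUID in /etc/fstab is immune to md renaming; an alternative sketch (the UUID shown is a placeholder to be replaced with real blkid output):

# blkid /dev/md0 /dev/md10     print each filesystem's UUID
# then reference it in /etc/fstab, for example:
# UUID=<uuid-from-blkid>   /raid0   ext4   defaults   0 2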


Step 6. Simulate disk maintenance: single-disk failure and disk replacement.

6.1 Using the RAID 10 array, simulate a single-disk failure, remove the failed disk, and add a new one. The test shows that when a disk fails, the hot spare automatically takes over its work and the array rebuilds itself in a short time.


[root@study ~]# cp -a /boot/* /raid10/   copy some data files into /raid10

[root@study ~]# ls /raid10/
config-3.10.0-693.el7.x86_64                             initrd-plymouth.img
efi                                                      lost+found
grub                                                     symvers-3.10.0-693.el7.x86_64.gz
grub2                                                    System.map-3.10.0-693.el7.x86_64
initramfs-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1.img  vmlinuz-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1
initramfs-3.10.0-693.el7.x86_64.img                      vmlinuz-3.10.0-693.el7.x86_64
initramfs-3.10.0-693.el7.x86_64kdump.img

[root@study ~]# mdadm  -f  /dev/md10 /dev/sdd1     simulate a failure of /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md10

[root@study ~]# mdadm -D /dev/md10    check the state of /dev/md10: the spare /dev/sdl1 has taken over for the failed disk, and /dev/sdd1 is now in the faulty state
/dev/md10:
           Version : 1.2
     Creation Time : Sun Jan 28 17:29:21 2018
        Raid Level : raid10
        Array Size : 83816448 (79.93 GiB 85.83 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 8
     Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 18:24:24 2018
             State : clean, degraded, recovering 
    Active Devices : 7
   Working Devices : 8
    Failed Devices : 1
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync

    Rebuild Status : 36% complete

              Name : study.itwish.cn:10  (local to host study.itwish.cn)
              UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 24

    Number   Major   Minor   RaidDevice State
       8       8      177        0      spare rebuilding   /dev/sdl1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1

       0       8       49        -      faulty   /dev/sdd1


[root@study ~]# mdadm -r /dev/md10 /dev/sdd1       remove the failed /dev/sdd1 from the array
mdadm: hot removed /dev/sdd1 from /dev/md10

[root@study ~]# mdadm -a /dev/md10 /dev/sdd1      add /dev/sdd1 back to /dev/md10 as a new disk
mdadm: added /dev/sdd1

[root@study ~]# mdadm -D /dev/md10    check the state of /dev/md10: the re-added /dev/sdd1 is now present as a spare
/dev/md10:
           Version : 1.2
     Creation Time : Sun Jan 28 17:29:21 2018
        Raid Level : raid10
        Array Size : 83816448 (79.93 GiB 85.83 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 8
     Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 18:27:55 2018
             State : clean, degraded, recovering 
    Active Devices : 7
   Working Devices : 9
    Failed Devices : 0
     Spare Devices : 2

            Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync

    Rebuild Status : 95% complete

              Name : study.itwish.cn:10  (local to host study.itwish.cn)
              UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 36

    Number   Major   Minor   RaidDevice State
       8       8      177        0      spare rebuilding   /dev/sdl1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1

       9       8       49        -      spare   /dev/sdd1


[root@study ~]# ls /raid10/         the data is still there; the disk failure did not affect it.
config-3.10.0-693.el7.x86_64                             initrd-plymouth.img
efi                                                      lost+found
grub                                                     symvers-3.10.0-693.el7.x86_64.gz
grub2                                                    System.map-3.10.0-693.el7.x86_64
initramfs-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1.img  vmlinuz-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1
initramfs-3.10.0-693.el7.x86_64.img                      vmlinuz-3.10.0-693.el7.x86_64
initramfs-3.10.0-693.el7.x86_64kdump.img
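
For completeness, the lab arrays can be torn down with the stop mode (-S) listed earlier; a sketch that assumes the data is no longer needed:

# umount /raid0 /raid10                      unmount the filesystems first
# mdadm -S /dev/md0 /dev/md10                stop both arrays
# mdadm --zero-superblock /dev/sd{b..l}1     wipe the md metadata from every member partition

The matching lines in /etc/fstab and /etc/mdadm.conf should be removed as well.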


  This completes the software RAID configuration lab and tests on CentOS.



Reposted from: https://blog.51cto.com/itwish/2066115