[root@szm test]# mdadm --help-options

Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device.  Subsequent
names are often names of component devices.

Some common options are:
  --help        -h   : General help message or, after above option,
                       mode specific help message
  --help-options     : This help message
  --version     -V   : Print version information for mdadm
  --verbose     -v   : Be more verbose about what is happening
  --quiet       -q   : Don't print un-necessary messages
  --brief       -b   : Be less verbose, more brief
  --export      -Y   : With --detail, use key=value format for easy
                       import into environment
  --force       -f   : Override normal checks and be more forceful

  --assemble    -A   : Assemble an array ---- assemble an already-existing array
  --build       -B   : Build an array without metadata
  --create      -C   : Create a new array
  --detail      -D   : Display details of an array
  --examine     -E   : Examine superblock on an array component
  --examine-bitmap -X: Display the detail of a bitmap file
  --monitor     -F   : monitor (follow) some arrays
  --grow        -G   : resize/reshape an array ---- change an array's size or shape
  --incremental -I   : add/remove a single device to/from an array as appropriate
  --query       -Q   : Display general information about how a
                       device relates to the md driver
  --auto-detect      : Start arrays auto-detected by the kernel
  --offroot          : Set first character of argv[0] to @ to indicate the
                       application was launched from initrd/initramfs and
                       should not be shutdown by systemd as part of the
                       regular shutdown process.

-f : force the operation
-s : scan for extended information about active arrays
-x : number of hot-spare disks to attach
-l : RAID level of the array
-a : when creating an array, whether to create the device file automatically (e.g. -a yes)

-S : stop an array
-m : mail address to which alert messages are sent
-y : log all events via the syslog service

-f : mark a disk in the array as faulty
-a : add a disk to the array
-r : remove a disk from the array

Configuration file /etc/mdadm.conf in brief (see man mdadm):
DEVICE      : disks or partitions that may be array members, e.g. /dev/sdb1
ARRAY       : identifies a managed array device and the member disks that belong to it
spare-group : defines a group of arrays that share hot-spare disks
MAILADDR    : mail address that --monitor alerts are sent to
MAILFROM    : From address used on alert mail
PROGRAM     : program to run when mdadm --monitor detects an event on a monitored array
CREATE      : defaults applied when creating arrays, e.g. "-a yes" to create the device file automatically
HOMEHOST    : same function as the --homehost option
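
As an orientation, a minimal sketch of how these directives fit together in /etc/mdadm.conf (the UUID, device names, and the PROGRAM script are placeholders, not values from this system):

# /etc/mdadm.conf -- illustrative sketch; all values are placeholders
DEVICE /dev/sdb1 /dev/sdb2
ARRAY /dev/md0 metadata=1.2 UUID=00000000:00000000:00000000:00000000 spare-group=shared
MAILADDR root
MAILFROM mdadm@localhost
PROGRAM /usr/sbin/handle-md-event
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>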
 
[root@szm test]# mdadm -C /dev/md0 -a yes -l0 -n2 /dev/sdb1 /dev/sdb2----(RAID 0)
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@szm test]# watch -n1 'cat /proc/mdstat'
This shows the in-kernel RAID status in real time, so the creation of the RAID device can be monitored as it happens.
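
When scripting rather than watching interactively, mdadm can also block until any resync or recovery finishes; a small sketch (the array name is assumed):

# Block until any resync/recovery/reshape on /dev/md0 completes;
# exits non-zero if there was nothing to wait for.
mdadm --wait /dev/md0
# Confirm the resulting state.
mdadm --detail /dev/md0 | grep -i state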
 
[root@szm pub]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Mar 11 20:31:05 2013
     Raid Level : raid0
     Array Size : 320512 (313.05 MiB 328.20 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 20:31:05 2013
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
     Chunk Size : 512K
 
           Name : szm:0  (local to host szm)
           UUID : 40dbde6c:938f4cce:4bf157ef:2d475d90
         Events : 0
 
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
[root@szm Desktop]# mdadm -C /dev/md1 -a yes -l1 -n2 -x1 /dev/sdb6 /dev/sdb7 /dev/sdb8
(RAID 1, with /dev/sdb8 added as a hot-spare disk)
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@szm Desktop]# watch -n1 'cat /proc/mdstat'
Every 1.0s: cat /proc/mdstat                            Mon Mar 11 20:56:11 2013
 
Personalities : [raid0] [raid1]
md1 : active raid1 sdb8[2](S) sdb7[1] sdb6[0] ------- sdb8 is the hot spare
      16000 blocks super 1.2 [2/2] [UU]
 
md127 : active raid0 sdb2[1] sdb1[0]
      320512 blocks super 1.2 512k chunks
 
unused devices: <none>

 U: the device is working normally; an underscore ("_") marks a device that is not.
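
To spot a degraded array from a script, it is enough to look for an underscore inside those bracketed status strings; a small sketch:

# Prints the status line (and exits 0) only if some array has a failed slot.
grep -E '\[[U_]*_[U_]*\]' /proc/mdstat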

 
[root@szm Desktop]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Mar 11 20:54:16 2013
     Raid Level : raid1
     Array Size : 16000 (15.63 MiB 16.38 MB)
  Used Dev Size : 16000 (15.63 MiB 16.38 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 20:54:19 2013
          State : clean 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
 
           Name : szm:1  (local to host szm)
           UUID : e1678f4c:da866bb9:d31def1c:6d065256
         Events : 17
 
    Number   Major   Minor   RaidDevice State
       0       8       22        0      active sync   /dev/sdb6
       1       8       23        1      active sync   /dev/sdb7
 
       2       8       24        -      spare   /dev/sdb8
 

 

[root@szm Desktop]# mdadm -C /dev/md2 -a yes -l5 -n3 /dev/sdb9 /dev/sdb10 /dev/sdb11
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
 
[root@szm Desktop]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Mon Mar 11 21:07:18 2013
     Raid Level : raid5
     Array Size : 31744 (31.01 MiB 32.51 MB)
  Used Dev Size : 15872 (15.50 MiB 16.25 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 21:07:21 2013
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : szm:2  (local to host szm)
           UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
         Events : 18
 
    Number   Major   Minor   RaidDevice State
       0       8       25        0      active sync   /dev/sdb9
       1       8       26        1      active sync   /dev/sdb10
       3       8       27        2      active sync   /dev/sdb11
 
[root@szm Desktop]# watch -n1 'cat /proc/mdstat'
 
Every 1.0s: cat /proc/mdstat                            Mon Mar 11 21:09:03 2013
 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
md126 : active raid0 sdb1[0] sdb2[1]
      320512 blocks super 1.2 512k chunks
 
md127 : active (auto-read-only) raid1 sdb7[1] sdb8[2](S) sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]
 
unused devices: <none>
 
(Note: the arrays now appear under the kernel's auto-assembly names /dev/md126 and /dev/md127 instead of /dev/md0 and /dev/md1, most likely because they were re-assembled without an /etc/mdadm.conf; the commands below use those names.)
[root@szm mnt]# mkfs.ext3 /dev/md126
[root@szm mnt]# mkfs.ext3 /dev/md127
[root@szm mnt]# mkfs.ext3 /dev/md2
 
[root@szm mnt]# mount /dev/md126 /mnt/md0/
[root@szm mnt]# mount /dev/md127 /mnt/md1
[root@szm mnt]# mount /dev/md2 /mnt/md2
[root@szm mnt]# mount | grep -i md
/dev/md126 on /mnt/md0 type ext3 (rw)
/dev/md127 on /mnt/md1 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
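
These mounts will not survive a reboot; to make them persistent they would go into /etc/fstab. A minimal sketch, assuming the mount points above (UUID= values from blkid are safer than /dev/mdXXX names, which, as seen above, can change between assemblies):

# /etc/fstab entries -- sketch only; prefer UUID=<from blkid> over device names
/dev/md2    /mnt/md2    ext3    defaults    0 0
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    /mnt/md0    ext3    defaults    0 0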
 
[root@szm test]# ll RAIDtest -h
-rw-r--r--. 1 root root 10M Mar 11 21:16 RAIDtest
[root@szm test]# cp RAIDtest /mnt/md0
[root@szm test]# cp RAIDtest /mnt/md1
[root@szm test]# cp RAIDtest /mnt/md2
 
[root@szm test]# df -h | grep md
/dev/md126            304M   21M  268M   7% /mnt/md0
/dev/md127             16M   12M  3.2M  78% /mnt/md1
/dev/md2               31M   12M   18M  41% /mnt/md2
 

 1. Generate the array configuration file; 2. add root as the array failure-alert mail address:

[root@szm test]# mdadm --examine --scan > /etc/mdadm.conf 
[root@szm test]# echo "MAILADDR root" >> /etc/mdadm.conf
[root@szm test]# cat /etc/mdadm.conf 
ARRAY /dev/md/0 metadata=1.2 UUID=40dbde6c:938f4cce:4bf157ef:2d475d90 name=szm:0
ARRAY /dev/md/1 metadata=1.2 UUID=e1678f4c:da866bb9:d31def1c:6d065256 name=szm:1
   spares=1
ARRAY /dev/md/2 metadata=1.2 UUID=20d48ed4:df7e95f5:8910c556:b6a1eaa5 name=szm:2
MAILADDR root
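
The same ARRAY lines can also be produced from the currently assembled arrays rather than from the member superblocks; a sketch:

# Alternative: query the assembled arrays; prints ARRAY lines in the same
# format (redirect or append to /etc/mdadm.conf as needed).
mdadm --detail --scan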
 
 Failure simulation on the mirror array (here the hot spare /dev/sdb8 is failed and removed):
[root@szm test]# mdadm /dev/md127 -f /dev/sdb8 -r /dev/sdb8
mdadm: set /dev/sdb8 faulty in /dev/md127
mdadm: hot removed /dev/sdb8 from /dev/md127
 
[root@szm test]# watch -n1 'cat /proc/mdstat'
 
Every 1.0s: cat /proc/mdstat                                                    Mon Mar 11 21:27:37 2013
 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
md126 : active raid0 sdb1[0] sdb2[1]
      320512 blocks super 1.2 512k chunks
 
md127 : active raid1 sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU] ------------- the mirror is healthy again
 
unused devices: <none>
 
[root@szm test]# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Mon Mar 11 20:54:16 2013
     Raid Level : raid1
     Array Size : 16000 (15.63 MiB 16.38 MB)
  Used Dev Size : 16000 (15.63 MiB 16.38 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 21:25:32 2013
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
           Name : szm:1  (local to host szm)
           UUID : e1678f4c:da866bb9:d31def1c:6d065256
         Events : 19
 
    Number   Major   Minor   RaidDevice State
       0       8       22        0      active sync   /dev/sdb6
       1       8       23        1      active sync   /dev/sdb7
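
To restore the hot spare after this test, the removed disk can be added back (assuming it is still usable); a sketch:

# Re-add the removed disk; it rejoins /dev/md127 as a hot spare.
mdadm /dev/md127 -a /dev/sdb8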
 RAID 5 failure simulation:
[root@szm test]# mdadm /dev/md2 -f /dev/sdb11 -r /dev/sdb11
mdadm: set /dev/sdb11 faulty in /dev/md2
mdadm: hot removed /dev/sdb11 from /dev/md2
[root@szm test]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md2 : active raid5 sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      
md126 : active raid0 sdb1[0] sdb2[1]
      320512 blocks super 1.2 512k chunks
      
md127 : active raid1 sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
[root@szm test]# 
 
[root@szm test]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Mon Mar 11 21:07:18 2013
     Raid Level : raid5
     Array Size : 31744 (31.01 MiB 32.51 MB)
  Used Dev Size : 15872 (15.50 MiB 16.25 MB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 21:29:52 2013
          State : clean, degraded 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : szm:2  (local to host szm)
           UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
         Events : 20
 
    Number   Major   Minor   RaidDevice State
       0       8       25        0      active sync   /dev/sdb9
       1       8       26        1      active sync   /dev/sdb10
       2       0        0        2      removed
 Failure recovery:
[root@szm test]# mdadm /dev/md2 -a /dev/sdb11
mdadm: added /dev/sdb11
[root@szm test]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
md126 : active raid0 sdb1[0] sdb2[1]
      320512 blocks super 1.2 512k chunks
      
md127 : active raid1 sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
 
[root@szm test]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Mon Mar 11 21:07:18 2013
     Raid Level : raid5
     Array Size : 31744 (31.01 MiB 32.51 MB)
  Used Dev Size : 15872 (15.50 MiB 16.25 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Mon Mar 11 21:32:18 2013
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : szm:2  (local to host szm)
           UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
         Events : 41
 
    Number   Major   Minor   RaidDevice State
       0       8       25        0      active sync   /dev/sdb9
       1       8       26        1      active sync   /dev/sdb10
       3       8       27        2      active sync   /dev/sdb11
 
 Enable the array-monitoring service (the mail address was already configured in step 1 above):

[root@szm test]# /etc/init.d/mdmonitor restart
Killing mdmonitor:                                         [  OK  ]
Starting mdmon:                                            [  OK  ]
Starting mdmonitor:                                        [  OK  ]

After the next failure event, the alert lands in root's mailbox (read here with the mail client):
 N 29 mdadm monitoring      Mon Mar 11 21:40  35/1106  "FailSpare event on /dev/md2:szm"
& 29
Message 29:
From root@szm.localdomain  Mon Mar 11 21:40:34 2013
Return-Path: <root@szm.localdomain>
X-Original-To: root
Delivered-To: root@szm.localdomain
From: mdadm monitoring <root@szm.localdomain>
To: root@szm.localdomain
Subject: FailSpare event on /dev/md2:szm
Date: Mon, 11 Mar 2013 21:40:34 +0800 (CST)
Status: R
 
This is an automatically generated mail message from mdadm
running on szm
 
A FailSpare event had been detected on md device /dev/md2.
 
It could be related to component device /dev/sdb11.
 
Faithfully yours, etc.
 
P.S. The /proc/mdstat file currently contains the following:
 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md2 : active raid5 sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      
md126 : active raid0 sdb1[0] sdb2[1]
      320512 blocks super 1.2 512k chunks
      
md127 : active raid1 sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
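
To verify the mail path without failing a real disk, the monitor can be asked to send a TestMessage alert for every array and exit; a sketch (requires MAILADDR in /etc/mdadm.conf):

# Generate one TestMessage alert per configured array, then exit.
mdadm --monitor --scan --oneshot --test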
 
 

 

 Deleting an array:

[root@szm ~]# umount /mnt/md0
[root@szm ~]# mdadm -S /dev/md126

[root@szm ~]# cat /etc/mdadm.conf ------------ then delete the matching ARRAY entry
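
A complete teardown also wipes the md superblocks from the member devices, so they are no longer recognized as array members; a sketch for the RAID 0 array above:

umount /mnt/md0                                # filesystem must not be in use
mdadm -S /dev/md126                            # stop the array
mdadm --zero-superblock /dev/sdb1 /dev/sdb2    # erase md metadata from the members
# then delete the matching ARRAY line from /etc/mdadm.conf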