http://linux.chinaunix.net/techdoc/net/2008/11/28/1048953.shtml
http://caidelie0801.blog.163.com/blog/static/453588212009106102922288/
This test uses software RAID5 + LVM. Since I did not have enough physical disks, I used partitions in place of disks to build the RAID5 array. When installing Linux I left a little over 2GB of free space, and on the existing partition table I added four new partitions: /dev/hdc12, /dev/hdc13, /dev/hdc14 and /dev/hdc15. hdc12-14 are used to build the software RAID5, and hdc15 is used to stand in for the replacement disk when one partition of the RAID5 "fails". The whole experiment proceeds as follows:
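If you cannot spare even partition space, loop devices are another common stand-in for disks. A minimal sketch, assuming the image paths and loop device numbers below are free on your system:
[root@tsai ~]#dd if=/dev/zero of=/tmp/raid0.img bs=1M count=100 ## create a 100MB backing file
[root@tsai ~]#losetup /dev/loop0 /tmp/raid0.img ## expose it as a block device
Repeat with /tmp/raid1.img -> /dev/loop1 and /tmp/raid2.img -> /dev/loop2, then pass the loop devices to mdadm in place of the hdc partitions.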
First, check the current usage of the disk /dev/hdc:
[root@tsai ~]#df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hdc8 2030736 920924 1004992 48% /
/dev/hdc6 46633 8576 35649 20% /boot
/dev/shm 192828 0 192828 0% /dev/shm
/dev/hdc10 1019208 40096 926504 5% /home
/dev/hdc7 3050060 2446648 445980 85% /usr
/dev/hdc9 1011448 81208 878032 9% /var
This is the disk usage before the software RAID5 has been set up.
[root@tsai ~]#fdisk /dev/hdc
Create four partitions, adapting the steps to your own disk. The key sequence, in brief, is:
Command (m for help):n->l->Enter->+100M->t->12->fd
Notes:
n: create a new partition
l: make it a logical partition
Enter: accept the default starting cylinder by pressing Enter
+100M: the partition size
t: change a partition's type ID
12: the number of the partition whose type is being changed
fd: the type ID for Linux raid autodetect; press L to list all types the system supports
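The same key sequence can also be fed to fdisk non-interactively. This is only a sketch mirroring the keys above; it assumes the new partition will be number 12 and adds w (write) as the final step:
[root@tsai ~]#fdisk /dev/hdc <<EOF
n
l

+100M
t
12
fd
w
EOF
The blank line accepts the default starting cylinder; repeat with the appropriate partition number for each new partition.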
After creating the three RAID partitions this way, press p and you will see something like the following (/dev/hdc15, used later as the replacement, is created the same way).
Note: your output will probably differ from this.
Disk /dev/hdc: 20.4 GB, 20491075584 bytes
255 heads, 63 sectors/track, 2491 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 1 523 4200966 b W95 FAT32
/dev/hdc2 524 2482 15735667+ f W95 Ext'd (LBA)
/dev/hdc5 524 1175 5237158+ 7 HPFS/NTFS
/dev/hdc6 * 1176 1181 48163+ 83 Linux ## the partition carrying the boot flag
/dev/hdc7 1182 1573 3148708+ 83 Linux
/dev/hdc8 1574 1834 2096451 83 Linux
/dev/hdc9 1835 1964 1044193+ 83 Linux
/dev/hdc10 1965 2095 1052226 83 Linux
/dev/hdc11 2096 2160 522081 82 Linux swap / Solaris
/dev/hdc12 2161 2173 104391 fd Linux raid autodetect
/dev/hdc13 2174 2186 104391 fd Linux raid autodetect
/dev/hdc14 2187 2199 104391 fd Linux raid autodetect
Remember to reboot the system so the kernel picks up the new partition table; otherwise the steps below will fail or print errors. (Note: do not try this on a server that is in production use, or the consequences could be severe.)
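On a disk that is not otherwise in use, asking the kernel to re-read the partition table may be enough, assuming the partprobe utility (from the parted package) is installed. Since /dev/hdc here holds mounted system partitions, the kernel may refuse and a reboot remains the safe choice:
[root@tsai ~]#partprobe /dev/hdc ## ask the kernel to re-read /dev/hdc's partition table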
[root@tsai ~]#mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hdc12 /dev/hdc13 /dev/hdc14
mdadm: /dev/hdc12 appears to contain an ext2fs file system
size=104388k mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/hdc12 appears to be part of a raid array:
level=5 devices=3 ctime=Wed Nov 4 10:56:29 2009
mdadm: /dev/hdc13 appears to contain an ext2fs file system
size=104388k mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/hdc13 appears to be part of a raid array:
level=5 devices=3 ctime=Wed Nov 4 10:56:29 2009
mdadm: /dev/hdc14 appears to contain an ext2fs file system
size=104388k mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/hdc14 appears to be part of a raid array:
level=5 devices=3 ctime=Wed Nov 4 10:56:29 2009
Continue creating array?y ## confirm that you want to create the array
mdadm: array /dev/md0 started.
The command above created the RAID5 array. (The "appears to contain / appears to be part of" warnings show up because these partitions still carry metadata from earlier use; brand-new disks would not print them.)
[root@tsai ~]#cat /proc/mdstat
You should see output like this:
Personalities : [raid5]
md0 : active raid5 hdc14[2] hdc13[1] hdc12[0]
208640 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
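While the initial build is still running, you can watch the resync progress update in place (optional; press Ctrl-C to stop watching):
[root@tsai ~]#watch -n 1 cat /proc/mdstat ## refresh the array status every second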
Double-check with the tail command:
[root@tsai ~]#tail /var/log/messages ## the last lines of the log look like this:
Nov 4 11:56:28 tsai kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Nov 4 11:56:28 tsai kernel: md: using 128k window, over a total of 104320 blocks.
Nov 4 11:56:48 tsai kernel: md: md0: sync done.
Nov 4 11:56:48 tsai kernel: RAID5 conf printout:
Nov 4 11:56:48 tsai kernel: --- rd:3 wd:3 fd:0
Nov 4 11:56:48 tsai kernel: disk 0, o:1, dev:hdc12
Nov 4 11:56:48 tsai kernel: disk 1, o:1, dev:hdc13
Nov 4 11:56:48 tsai kernel: disk 2, o:1, dev:hdc14
Nov 4 12:01:02 tsai crond(pam_unix)[2322]: session opened for user root by (uid=0)
Nov 4 12:01:02 tsai crond(pam_unix)[2322]: session closed for user root
[root@tsai ~]#mdadm --detail /dev/md0 ## show more detailed information about the array:
/dev/md0:
Version : 00.90.01 ## metadata version
Creation Time : Wed Nov 4 11:56:28 2009 ## when the array was created
Raid Level : raid5 ## RAID level: e.g. 0, 1, 5
Array Size : 208640 (203.75 MiB 213.65 MB) ## usable capacity of the array
Device Size : 104320 (101.88 MiB 106.82 MB) ## capacity of each member device
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Nov 4 12:11:07 2009
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 26ec8ade:deee8206:48a0f15f:faefb3f5
Events : 0.8
Number Major Minor RaidDevice State
0 22 12 0 active sync /dev/hdc12
1 22 13 1 active sync /dev/hdc13
2 22 14 2 active sync /dev/hdc14
[root@tsai ~]#mdadm --detail --scan > /etc/mdadm.conf ## generate the mdadm.conf configuration file, which now contains:
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=26ec8ade:deee8206:48a0f15f:faefb3f5
devices=/dev/hdc12,/dev/hdc13,/dev/hdc14
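With the configuration file in place, a quick sanity check is to stop the array and reassemble it from the file. This sketch assumes nothing on /dev/md0 is mounted yet:
[root@tsai ~]#mdadm --stop /dev/md0 ## stop the running array
[root@tsai ~]#mdadm --assemble --scan ## reassemble it using /etc/mdadm.conf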
[root@tsai ~]#pvcreate /dev/md0 ## create a physical volume on the array
Physical volume "/dev/md0" successfully created
[root@tsai ~]#vgcreate LVM1 /dev/md0 ## create a volume group named LVM1
Volume group "LVM1" successfully created
"successfully" in the output confirms the creation.
[root@tsai ~]#vgdisplay LVM1 ## display information about the volume group
--- Volume group ---
VG Name LVM1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 200.00 MB
PE Size 4.00 MB
Total PE 50
Alloc PE / Size 0 / 0
Free PE / Size 50 / 200.00 MB
VG UUID tJKHNs-lZPh-fG0B-xnGA-Wc2n-lm2M-uZK5Yk
[root@tsai ~]#lvcreate -L 100M -n www1 LVM1 ## create a logical volume
Logical volume "www1" created
[root@tsai ~]#lvcreate -L 100M -n www2 LVM1
Logical volume "www2" created
Notes:
-L | --size LogicalVolumeSize[kKmMgGtT] ## size of the logical volume
-n | --name LogicalVolumeName ## name of the logical volume
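To confirm that both logical volumes exist and are active, list them (lvscan is part of the standard LVM2 toolset):
[root@tsai ~]#lvscan ## both /dev/LVM1/www1 and /dev/LVM1/www2 should be listed as ACTIVE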
[root@tsai ~]#mke2fs -j /dev/LVM1/www1 ## format the logical volume (-j creates an ext3 journal)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
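As the message above suggests, the periodic check schedule can be tuned. For example, to disable the mount-count check and check every six months instead (optional; shown here for www1 only):
[root@tsai ~]#tune2fs -c 0 -i 6m /dev/LVM1/www1 ## -c 0 disables mount-count checks, -i 6m sets a 6-month interval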
[root@tsai ~]#mke2fs -j /dev/LVM1/www2 ## www2 must be formatted the same way before it can be mounted
[root@tsai ~]#mkdir /www1 /www2
[root@tsai ~]#mount /dev/LVM1/www1 /www1
[root@tsai ~]#mount /dev/LVM1/www2 /www2
Create the mount points /www1 and /www2 and mount the logical volumes on them.
[root@tsai ~]#df ## run df again to check the disks; it now shows:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hdc8 2030736 920960 1004956 48% /
/dev/hdc6 46633 8576 35649 20% /boot
/dev/shm 192828 0 192828 0% /dev/shm
/dev/hdc10 1019208 40096 926504 5% /home
/dev/hdc7 3050060 2446648 445980 85% /usr
/dev/hdc9 1011448 81256 877984 9% /var
/dev/mapper/LVM1-www1
99150 5664 88366 7% /www1
/dev/mapper/LVM1-www2
99150 5664 88366 7% /www2
Logical volumes www1 and www2 are both mounted and ready to use.
[root@tsai ~]#cd /www1;ls
lost+found
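Before simulating a failure it is worth writing some test data, so you can verify afterwards that the rebuild lost nothing. The file name here is just an example:
[root@tsai ~]#echo "raid5 lvm test" > /www1/testfile
[root@tsai ~]#md5sum /www1/testfile ## note the checksum for comparison after the rebuild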
The following simulates a failure of /dev/hdc13, removes it from the array, and replaces it with /dev/hdc15.
The mdadm tool provides an option for marking a device as failed, which lets us simulate a damaged disk.
[root@tsai ~]#mdadm /dev/md0 -f /dev/hdc13 ## simulate a failure of /dev/hdc13
mdadm: set /dev/hdc13 faulty in /dev/md0
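The degraded state is also visible directly in /proc/mdstat; the failed member should be flagged with (F) and the status map should read [3/2] [U_U] instead of [3/3] [UUU]:
[root@tsai ~]#cat /proc/mdstat ## hdc13 should now appear with an (F) flag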
[root@tsai ~]#tail /var/log/messages ## check the log to confirm which disk or partition has failed
Nov 4 12:35:28 tsai kernel: raid5: Disk failure on hdc13, disabling device. Operation continuing on 2 devices
Nov 4 12:35:28 tsai kernel: RAID5 conf printout:
Nov 4 12:35:28 tsai kernel: --- rd:3 wd:2 fd:1
Nov 4 12:35:28 tsai kernel: disk 0, o:1, dev:hdc12
Nov 4 12:35:28 tsai kernel: disk 1, o:0, dev:hdc13 ## o: changed from 1 to 0, marking this device as failed
Nov 4 12:35:28 tsai kernel: disk 2, o:1, dev:hdc14
Nov 4 12:35:28 tsai kernel: RAID5 conf printout:
Nov 4 12:35:28 tsai kernel: --- rd:3 wd:2 fd:1
Nov 4 12:35:28 tsai kernel: disk 0, o:1, dev:hdc12 ## this second printout lists only disks 0 and 2, which also tells you which device failed
Nov 4 12:35:28 tsai kernel: disk 2, o:1, dev:hdc14
[root@tsai ~]#mdadm /dev/md0 -r /dev/hdc13 ## remove the failed disk or partition from the array
mdadm: hot removed /dev/hdc13
[root@tsai ~]#mdadm /dev/md0 -a /dev/hdc15 ## add the new disk or partition; on my machine the data rebuilds automatically
mdadm: hot added /dev/hdc15
[root@tsai ~]#tail /var/log/messages ## verify that the newly added disk or partition is working properly
Nov 4 12:50:41 tsai kernel: md: syncing RAID array md0
Nov 4 12:50:41 tsai kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Nov 4 12:50:41 tsai kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Nov 4 12:50:41 tsai kernel: md: using 128k window, over a total of 104320 blocks.
Nov 4 12:51:02 tsai kernel: md: md0: sync done.
Nov 4 12:51:02 tsai kernel: RAID5 conf printout:
Nov 4 12:51:02 tsai kernel: --- rd:3 wd:3 fd:0
Nov 4 12:51:02 tsai kernel: disk 0, o:1, dev:hdc12
Nov 4 12:51:02 tsai kernel: disk 1, o:1, dev:hdc15
Nov 4 12:51:02 tsai kernel: disk 2, o:1, dev:hdc14
From this output we can see that /dev/hdc15 has successfully replaced the failed partition and the array is back in full operation.
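As a final check, confirm the array state and, if you wrote test data earlier, that it survived the rebuild (testfile is the example file created above):
[root@tsai ~]#mdadm --detail /dev/md0 ## State should read clean again, with 3 active/working devices
[root@tsai ~]#md5sum /www1/testfile ## the checksum should match the one recorded before the failure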