SUSE Linux combined with an external storage array is a mainstream deployment today. Based on my own engineering experience, the following summarizes how to configure such an array on a SUSE server using LVM.
There are generally two ways to configure the array on a SUSE server:
1. Run fdisk directly against the external storage device and partition it. A partition created with fdisk (i.e. an MBR partition table) is limited to 2 TB, so this only suits arrays that are not too large;
2. Use LVM:
LVM (Logical Volume Manager) is the Linux mechanism for managing disk storage through a logical layer inserted between the physical disks or partitions and the file systems, which makes storage management far more flexible. It lets an administrator manage disk space easily and is the most widely used volume management tool on Linux.
With LVM, large-capacity arrays are organized into physical volumes (PV), volume groups (VG) and logical volumes (LV). This is easy to administer and makes later expansion of the array straightforward, improving the system's scalability, so it is the preferred approach.
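The hierarchy is simple: one or more PVs (whole disks or partitions) are pooled into a VG, and LVs are then carved out of the VG to hold the file systems. Once things are set up, the whole hierarchy can be reviewed at a glance with the LVM2 summary commands (a quick check, assuming the standard lvm2 userspace tools are installed):
cspfstest:~ # pvs    # physical volumes and the VG each belongs to
cspfstest:~ # vgs    # volume groups with their total and free space
cspfstest:~ # lvs    # logical volumes and their sizes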
Below are the complete steps for configuring the array with LVM, taken from an actual site deployment:
1. Run fdisk -l to check that the server has correctly recognized the storage device; the array's capacity is normally far larger than that of the local disks.
cspfstest:~ # fdisk -l
Disk /dev/sda: 146.6 GB, 146693685248 bytes
255 heads, 63 sectors/track, 17834 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                1        1045     8393931  82  Linux swap / Solaris
/dev/sda2   *         1046       11489    83891430  83  Linux
/dev/sda3            11490       17834   50966212+   f  W95 Ext'd (LBA)
/dev/sda5            11490       16711   41945683+  83  Linux
Disk /dev/sdb: 1799.5 GB, 1799589199872 bytes
255 heads, 63 sectors/track, 218787 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Here /dev/sdb is the external storage array.
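If several LUNs are attached and the naming is not obvious, a quick way to see only the disk summary lines is to filter the fdisk output (this assumes the array is presented as an ordinary /dev/sdX SCSI device, as it is here):
cspfstest:~ # fdisk -l 2>/dev/null | grep "^Disk /dev/sd"
Disk /dev/sda: 146.6 GB, 146693685248 bytes
Disk /dev/sdb: 1799.5 GB, 1799589199872 bytes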
2. Partition the storage device
a. Run fdisk against the device:
cspfstest:~ # fdisk /dev/sdb
The number of cylinders for this disk is set to 218787.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
b. Enter n to create a new partition
c. Enter p to select a primary partition:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-218787, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-218787, default 218787): +1000000M
-- this creates a partition of roughly 1 TB (1000000 MB)
d. At the "Command (m for help)" prompt, enter t and change the partition's system id to 8e (Linux LVM); p prints the table to verify, and w writes it to disk and exits:
Command (m for help): p
Disk /dev/sdb: 1799.5 GB, 1799589199872 bytes
255 heads, 63 sectors/track, 218787 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1      121577    976567221 8e  Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
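If fdisk instead warns that the kernel is still using the old partition table, the new table can usually be re-read without a reboot (this assumes the parted package, which ships partprobe, is installed; otherwise a reboot achieves the same):
cspfstest:~ # partprobe /dev/sdb
cspfstest:~ # ls /dev/sdb1    # the new partition device node should now exist
/dev/sdb1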
3. Create the PV, VG and LV
a. Create the PV:
cspfstest:~ # pvcreate /dev/sdb1
Successful creation of the PV is reported as:
Physical volume "/dev/sdb1" successfully created
Once created, the PV can be inspected with pvdisplay.
b. Create the VG. Here a second LVM partition, /dev/sdb2 (made from the disk's remaining space in the same way as /dev/sdb1), is added along with it, and -s 128M sets the physical extent (PE) size to 128 MB:
cspfstest:~ # vgcreate vgfs /dev/sdb1 /dev/sdb2 -s 128M
Successful creation of the VG is reported as:
Volume group "vgfs" successfully created
Once created, the VG can be inspected with vgdisplay.
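Since lvcreate -l works in physical extents rather than bytes, it is worth checking how many extents the VG actually has free before sizing the LV. One way is to filter the vgdisplay output for its PE fields (the free extent count shown here should match the 13407 figure reported in the error below, i.e. 13407 × 128 MB ≈ 1.67 TB):
cspfstest:~ # vgdisplay vgfs | grep "PE"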
c. Create the LV. The -l option gives the size in physical extents (here with a 128 MB PE size) and -n names the volume:
cspfstest:~ # lvcreate -l 65000 -n lvfs1 vgfs
Note that the number of extents requested with -l must not exceed the VG's free extents; otherwise lvcreate refuses, as happened here when a second volume was attempted:
cspfstest:~ # lvcreate -l 65000 -n lvfs2 vgfs
  Insufficient free extents (13407) in volume group vgfs: 65000 required
A successful run simply reports that the logical volume was created, and the result can then be inspected with lvdisplay.
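To avoid guessing extent counts, lvcreate can also take an absolute size with -L, and newer LVM2 releases additionally accept a percentage with -l (e.g. 100%FREE); a sketch of both forms, where the 800G figure is purely illustrative and not from this deployment:
cspfstest:~ # lvcreate -L 800G -n lvfs1 vgfs        # absolute size, rounded to whole extents
cspfstest:~ # lvcreate -l 100%FREE -n lvfs2 vgfs    # use all remaining free extents (newer LVM2 only)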
4. Create an ext3 file system
Create the file system on the LV:
cspfstest:/dev/mapper # mkfs.ext3 /dev/mapper/vgfs-lvfs2
mke2fs then builds the file system:
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
106496000 inodes, 212992000 blocks
10649600 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
6500 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
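The new file system still has to be mounted before it shows up in df. A minimal sketch, using the /fileserver1 mount point seen in this deployment (the fstab line is only an illustration of how to make the mount persistent across reboots):
cspfstest:/ # mkdir -p /fileserver1
cspfstest:/ # mount /dev/mapper/vgfs-lvfs1 /fileserver1
# optional /etc/fstab entry for a persistent mount:
# /dev/mapper/vgfs-lvfs1  /fileserver1  ext3  defaults  0 2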
5. Check the result with df; the /dev/mapper/vgfs-lvfs1 entry below is the newly added file system:
cspfstest:/ # df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 83888824 3145364 80743460 4% /
udev 4048104 184 4047920 1% /dev
/dev/sda5 41944376 32948 41911428 1% /home
/dev/mapper/vgfs-lvfs1
838600256 201288 795800568 1% /fileserver1
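As noted above, the main payoff of LVM is that a volume can later be grown without repartitioning the array, provided the VG still has free extents. A sketch of the usual procedure (the +100G figure is only an illustration; on older ext3 stacks that cannot resize online, unmount the file system and run e2fsck -f before resize2fs):
cspfstest:~ # lvextend -L +100G /dev/mapper/vgfs-lvfs1    # grow the LV by 100 GB
cspfstest:~ # resize2fs /dev/mapper/vgfs-lvfs1            # grow the ext3 file system to fill the LV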