CentOS Striped Logical Volume Performance Test

1. Logical Volume Concepts

The Logical Volume Manager (LVM) adds a logical layer between disk partitions and filesystems. It provides an abstraction called a volume group that lets multiple disks be pooled together, so storage can be allocated without worrying about the underlying layout of the physical disks and resized dynamically.

Physical Volume (PV): the device that actually provides capacity and stores data. It can be a whole disk, a partition on a disk, and so on.

Volume Group (VG): built on top of physical volumes, a VG consists of one or more PVs and pools their capacity for allocation. An LVM system may contain a single volume group or several.

Logical Volume (LV): built on top of a volume group, an LV is a slice of space carved out of the VG. It is the logical device the end user actually works with, and it can be grown or shrunk after creation.

Physical Extent (PE): a uniquely numbered PE is the smallest unit LVM can address. The PE size is configurable (4 MiB by default); once set it cannot be changed, and it is the same for every PV in a given volume group.
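
A quick sanity check on the extent arithmetic (a minimal sketch; the numbers are taken from the pvdisplay output later in this article): a PV's usable size is simply Total PE × PE size.

```shell
pe_size_mib=4        # default PE size
total_pe=953861      # Total PE that pvdisplay reports for one of the 3.64 TiB PVs
pv_mib=$((total_pe * pe_size_mib))
# 3815444 MiB, which awk converts to TiB
awk -v m="$pv_mib" 'BEGIN { printf "%.2f TiB\n", m / 1024 / 1024 }'
```

This prints 3.64 TiB, matching the PSize column of pvs below.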

Order of use:
PV -> VG -> LV -> filesystem (mounted on a directory)

LVM striping: for performance, data is spread across multiple disks. Contiguous data on the LV is split into equally sized chunks that are written to the PVs in turn, much like RAID 0, so reads and writes proceed in parallel. The chunk size and the number of PVs to stripe across can be chosen to suit the workload.

In other words, a VG may contain several PVs and an LV may span several of them; striping stores consecutive chunks round-robin across the PVs, speeding up I/O the same way RAID 0 does.

This test uses bare disks as PVs, which is as risky as RAID 0: lose one disk and the data is gone. In production you would not combine unprotected disks into a VG like this. Typically the PVs are LUNs presented by back-end storage that has already built RAID 5 or RAID 1 on the physical disks, so even if a disk behind a PV fails, the hardware RAID redundancy prevents data loss.
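
The round-robin placement can be sketched with a few lines of shell arithmetic (illustrative only; 4 PVs and a 64 KiB stripe size, matching the volume created later):

```shell
stripe_kib=64   # stripe (chunk) size
stripes=4       # number of PVs striped across
for off_kib in 0 64 128 192 256; do
  chunk=$((off_kib / stripe_kib))   # which chunk this LV offset falls in
  pv=$((chunk % stripes))           # round-robin: chunk i lands on PV i mod 4
  echo "LV offset ${off_kib} KiB -> chunk ${chunk} -> PV ${pv}"
done
```

The first four chunks land on PVs 0 through 3, and chunk 4 wraps back to PV 0.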

2. Creating PVs

Check the existing disk partitions:

[root@oracledb vg_bigdata]# fdisk -l|grep /dev/sd
Disk /dev/sda: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
/dev/sda1               1  4294967295  2147483647+  ee  GPT
Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
/dev/sdd1               1  4294967295  2147483647+  ee  GPT

Four disks, each with all of its space in partition 1.
pvcreate    create a PV
pvs         show summary PV information
pvdisplay   show detailed PV information
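
The four pvcreate calls below can also be written as a loop. This is a sketch: the leading echo just prints each command, so nothing is touched; drop it to actually initialize the PVs (requires root).

```shell
for part in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
  echo pvcreate "$part"   # remove the leading echo to really run pvcreate
done
```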

[root@oracledb ~]# pvcreate /dev/sda1 
[root@oracledb ~]# pvcreate /dev/sdb1 
[root@oracledb ~]# pvcreate  /dev/sdc1
[root@oracledb ~]# pvcreate  /dev/sdd1
[root@oracledb ~]# pvs
  PV             VG         Fmt  Attr PSize   PFree 
  /dev/nvme0n1p2 centos     lvm2 a--  232.39g 64.00m
  /dev/sda1      vg_bigdata lvm2 a--    3.64t  3.22t
  /dev/sdb1      vg_bigdata lvm2 a--    3.64t  3.58t
  /dev/sdc1      vg_bigdata lvm2 a--    3.64t  3.58t
  /dev/sdd1      vg_bigdata lvm2 a--    3.64t  3.58t
[root@oracledb ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/nvme0n1p2
  VG Name               centos
  PV Size               232.40 GiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              59493
  Free PE               16
  Allocated PE          59477
  PV UUID               d5zW7b-cH0L-aXKZ-RKuP-l6pL-D6j5-ioEPIN
   
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               vg_bigdata
  PV Size               3.64 TiB / not usable 3.80 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              953861
  Free PE               844293
  Allocated PE          109568
  PV UUID               1HT5g5-CYbk-JZId-WlRx-84Iq-8LVH-ueZTqG
   
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_bigdata
  PV Size               3.64 TiB / not usable 3.80 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              953861
  Free PE               937477
  Allocated PE          16384
  PV UUID               b37bTm-ESIJ-AVZl-9BQx-Jufm-gY4U-rNBQ1S
   
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg_bigdata
  PV Size               3.64 TiB / not usable 3.80 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              953861
  Free PE               937477
  Allocated PE          16384
  PV UUID               wL1k8l-wMG3-nr1g-8nef-Gqdq-BTzv-K9eqZ2
   
  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vg_bigdata
  PV Size               3.64 TiB / not usable 3.80 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              953861
  Free PE               937477
  Allocated PE          16384
  PV UUID               ySIzTX-kYJk-gnqa-5jG9-woBl-YQjO-E33n5s
   
[root@oracledb ~]#

3. Creating the VG

Add all four partitions /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 to a single VG.
vgcreate    create a VG
vgs         show summary VG information
vgdisplay   show detailed VG information

Once created, the VG is 14.55 TiB with the default 4 MiB PE size:

PE Size 4.00 MiB
Total PE 3815444
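
Those numbers are consistent: the VG's Total PE is just the sum of the four PVs' Total PE (arithmetic check only):

```shell
pv_pe=953861            # Total PE of each of the four identical PVs
echo $((pv_pe * 4))     # 3815444, matching vgdisplay's Total PE
```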

[root@oracledb ~]# vgcreate vg_bigdata /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
[root@oracledb ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree 
  centos       1   3   0 wz--n- 232.39g 64.00m
  vg_bigdata   4   2   0 wz--n-  14.55t 13.95t
[root@oracledb ~]# vgdisplay vg_bigdata
  --- Volume group ---
  VG Name               vg_bigdata
  System ID             
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               14.55 TiB
  PE Size               4.00 MiB
  Total PE              3815444
  Alloc PE / Size       158720 / 620.00 GiB
  Free  PE / Size       3656724 / 13.95 TiB
  VG UUID               ZMnmut-K9LH-lChN-KOfN-p5Ao-LgkQ-nOXt05
   
[root@oracledb ~]#

4. Creating LVs

(1) Create the logical volumes

A. Create a striped LV and a linear LV

lvcreate -L 256G -i4 -n lv_test_strip vg_bigdata
lvcreate -L 256G -n lv_test_linear vg_bigdata
-L  LV size
-i  number of stripes (how many PVs to stripe across)
-n  LV name
-I  stripe size; defaults to 64 KiB
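The requested sizes map directly onto extents (a cross-check against the lvdisplay output shown later): a 256 GiB LV with 4 MiB PEs is 65536 logical extents, and striping it over 4 PVs consumes 16384 extents on each:

```shell
lv_gib=256; pe_mib=4; stripes=4
le=$((lv_gib * 1024 / pe_mib))   # logical extents; lvdisplay's "Current LE"
per_pv=$((le / stripes))         # extents taken from each PV by the striped LV
echo "LE=${le} per_PV=${per_pv}"
```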

[root@oracledb mydata]# lvcreate -L 256G  -i4  -n lv_test_strip vg_bigdata  
  Using default stripesize 64.00 KiB.
  Logical volume "lv_test_strip" created.
[root@oracledb mydata]# lvcreate -L 256G -n lv_test_linear vg_bigdata
  Logical volume "lv_test_linear" created.
B. Inspect the volumes

lvs         show summary LV information
lvdisplay   show detailed LV information

[root@oracledb ~]# lvs
  LV             VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home           centos     -wi-ao---- 174.52g                                                    
  root           centos     -wi-ao----  50.00g                                                    
  swap           centos     -wi-ao----   7.81g                                                    
  lv_test_linear vg_bigdata -wi-ao---- 364.00g                                                    
  lv_test_strip  vg_bigdata -wi-ao---- 256.00g

lvdisplay -m shows the segment mapping.
lvdisplay -m /dev/vg_bigdata/lv_test_linear
For the linear volume, the segment type is linear:

--- Segments ---
Logical extents 0 to 93183:
  Type                linear

[root@oracledb ~]# lvdisplay -m /dev/vg_bigdata/lv_test_linear
  --- Logical volume ---
  LV Path                /dev/vg_bigdata/lv_test_linear
  LV Name                lv_test_linear
  VG Name                vg_bigdata
  LV UUID                BEEkFd-aRwH-1eWS-nt3O-yErS-P3vT-JKunm0
  LV Write Access        read/write
  LV Creation host, time oracledb, 2023-02-01 08:16:40 +0800
  LV Status              available
  # open                 1
  LV Size                364.00 GiB
  Current LE             93184
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
   
  --- Segments ---
  Logical extents 0 to 93183:
    Type                linear
    Physical volume     /dev/sda1
    Physical extents    16384 to 109567
     
[root@oracledb ~]#

For the striped volume, the segment type is striped, with 4 stripes spread across the 4 PVs:

--- Segments ---
Logical extents 0 to 65535:
  Type                striped

[root@oracledb ~]# lvdisplay -m /dev/vg_bigdata/lv_test_strip
  --- Logical volume ---
  LV Path                /dev/vg_bigdata/lv_test_strip
  LV Name                lv_test_strip
  VG Name                vg_bigdata
  LV UUID                3cwPC3-DrFV-dhjN-gVZ6-MKVU-m3QC-oh8CmX
  LV Write Access        read/write
  LV Creation host, time oracledb, 2023-02-01 08:14:43 +0800
  LV Status              available
  # open                 1
  LV Size                256.00 GiB
  Current LE             65536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:3
   
  --- Segments ---
  Logical extents 0 to 65535:
    Type                striped
    Stripes             4
    Stripe size         64.00 KiB
    Stripe 0:
      Physical volume   /dev/sda1
      Physical extents  0 to 16383
    Stripe 1:
      Physical volume   /dev/sdb1
      Physical extents  0 to 16383
    Stripe 2:
      Physical volume   /dev/sdc1
      Physical extents  0 to 16383
    Stripe 3:
      Physical volume   /dev/sdd1
      Physical extents  0 to 16383
   
   
[root@oracledb ~]#
C. Device nodes of the logical volumes

The /dev/vg_bigdata directory contains symlinks for the VG's two LVs,
pointing at the block device nodes dm-3 and dm-4 under /dev.

[root@oracledb ~]# cd /dev/vg_bigdata/
[root@oracledb vg_bigdata]# ll
total 0
lrwxrwxrwx 1 root root 7 Feb  1 08:57 lv_test_linear -> ../dm-4
lrwxrwxrwx 1 root root 7 Feb  1 08:17 lv_test_strip -> ../dm-3

[root@oracledb dev]# ll -h m*
crw------- 1 root root 10, 227 Jan 31 13:26 mcelog
crw-r----- 1 root kmem  1,   1 Jan 31 13:26 mem

mapper:
total 0
lrwxrwxrwx 1 root root       7 Jan 31 13:26 centos-home -> ../dm-2
lrwxrwxrwx 1 root root       7 Jan 31 13:26 centos-root -> ../dm-0
lrwxrwxrwx 1 root root       7 Jan 31 13:26 centos-swap -> ../dm-1
crw------- 1 root root 10, 236 Jan 31 13:26 control
lrwxrwxrwx 1 root root       7 Feb  1 08:57 vg_bigdata-lv_test_linear -> ../dm-4
lrwxrwxrwx 1 root root       7 Feb  1 08:17 vg_bigdata-lv_test_strip -> ../dm-3

mqueue:
total 0
[root@oracledb dev]# 
[root@oracledb dev]# ll -h dm*
brw-rw---- 1 root disk 253, 0 Jan 31 13:26 dm-0
brw-rw---- 1 root disk 253, 1 Jan 31 13:26 dm-1
brw-rw---- 1 root disk 253, 2 Jan 31 13:26 dm-2
brw-rw---- 1 root disk 253, 3 Feb  1 08:17 dm-3
brw-rw---- 1 root disk 253, 4 Feb  1 08:57 dm-4
[root@oracledb dev]#

(2) Create the filesystems

mkfs.xfs /dev/vg_bigdata/lv_test_linear
mkfs.xfs /dev/vg_bigdata/lv_test_strip

[root@oracledb mydata]# mkfs.xfs /dev/vg_bigdata/lv_test_linear
meta-data=/dev/vg_bigdata/lv_test_linear isize=256    agcount=4, agsize=16777216 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=67108864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@oracledb mydata]# mkfs.xfs /dev/vg_bigdata/lv_test_strip
meta-data=/dev/vg_bigdata/lv_test_strip isize=256    agcount=16, agsize=4194288 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=67108608, imaxpct=25
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=32767, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
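
Note the data line of the striped volume's mkfs output: xfs detected the LVM stripe geometry and set sunit/swidth accordingly. The values below are read straight from that output; the units are 4 KiB filesystem blocks.

```shell
bsize=4096            # filesystem block size in bytes (bsize from mkfs)
sunit_blks=16         # sunit from the mkfs output, in blocks
swidth_blks=64        # swidth from the mkfs output, in blocks
echo "stripe unit:  $((sunit_blks * bsize / 1024)) KiB"   # 64 KiB = one LVM stripe
echo "stripe width: $((swidth_blks * bsize / 1024)) KiB"  # 256 KiB = 4 stripes x 64 KiB
```

The linear volume shows sunit=0 swidth=0 because it has no stripe geometry to align to.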

(3) Mount the filesystems

mount /dev/vg_bigdata/lv_test_linear /mydata/linear
mount /dev/vg_bigdata/lv_test_strip /mydata/strip

[root@oracledb mydata]# mount /dev/vg_bigdata/lv_test_linear /mydata/linear
[root@oracledb mydata]# mount /dev/vg_bigdata/lv_test_strip /mydata/strip
[root@oracledb mydata]#

(4) Mount at boot

After mounting, df -h shows the new filesystems:

[root@oracledb ~]# df -h
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/centos-root                 50G   16G   35G  31% /
devtmpfs                               7.8G     0  7.8G   0% /dev
tmpfs                                  7.8G  4.1G  3.8G  53% /dev/shm
tmpfs                                  7.8G  9.3M  7.8G   1% /run
tmpfs                                  7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/nvme0n1p1                         497M  141M  357M  29% /boot
/dev/mapper/centos-home                175G  3.9G  171G   3% /home
tmpfs                                  1.6G     0  1.6G   0% /run/user/1001
tmpfs                                  1.6G   16K  1.6G   1% /run/user/42
tmpfs                                  1.6G     0  1.6G   0% /run/user/0
/dev/mapper/vg_bigdata-lv_test_linear  364G   17G  348G   5% /mydata/linear
/dev/mapper/vg_bigdata-lv_test_strip   256G   17G  240G   7% /mydata/strip

Append the mount entries to /etc/fstab so they are mounted at boot:

echo "/dev/vg_bigdata/lv_test_linear  /mydata/linear xfs  defaults  0  0" >> /etc/fstab
 
echo "/dev/vg_bigdata/lv_test_strip /mydata/strip xfs  defaults  0  0" >> /etc/fstab

(5) Logical volume management

Note:
resize2fs     works on ext2/ext3/ext4 filesystems
xfs_growfs    works on xfs filesystems

A. Resize a logical volume

Grow the LV to 300 GiB, then grow the xfs filesystem:
lvextend -L 300G /dev/vg_bigdata/lv_test_linear
xfs_growfs /dev/vg_bigdata/lv_test_linear

[root@oracledb vg_bigdata]#  lvextend -L 300G /dev/vg_bigdata/lv_test_linear
  Size of logical volume vg_bigdata/lv_test_linear changed from 256.00 GiB (65536 extents) to 300.00 GiB (76800 extents).
  Logical volume lv_test_linear successfully resized.

[root@oracledb vg_bigdata]#  xfs_growfs /dev/vg_bigdata/lv_test_linear
meta-data=/dev/mapper/vg_bigdata-lv_test_linear isize=256    agcount=4, agsize=16777216 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=67108864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 67108864 to 78643200
B. Add space to a logical volume

Add 64 GiB to the LV:
lvextend -L +64G /dev/vg_bigdata/lv_test_linear
Grow the xfs filesystem to match:
xfs_growfs /dev/vg_bigdata/lv_test_linear
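
Again the extent arithmetic lines up with what lvextend reports below: 300 GiB + 64 GiB = 364 GiB, i.e. 93184 extents at 4 MiB each.

```shell
gib=$((300 + 64))
echo "${gib} GiB = $((gib * 1024 / 4)) extents"   # matches lvextend's 93184
```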

[root@oracledb vg_bigdata]#  lvextend -L +64G /dev/vg_bigdata/lv_test_linear
  Size of logical volume vg_bigdata/lv_test_linear changed from 300.00 GiB (76800 extents) to 364.00 GiB (93184 extents).
  Logical volume lv_test_linear successfully resized.
[root@oracledb vg_bigdata]# 
[root@oracledb vg_bigdata]#  xfs_growfs /dev/vg_bigdata/lv_test_linear
meta-data=/dev/mapper/vg_bigdata-lv_test_linear isize=256    agcount=5, agsize=16777216 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=78643200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 78643200 to 95420416
[root@oracledb vg_bigdata]#
C. Remove logical volumes

Unmount the mount points first, then remove the LVs with lvremove.

[root@oracledb mydata]# umount /mydata/linear/
[root@oracledb mydata]# umount /mydata/strip/
[root@oracledb mydata]#  lvremove /dev/vg_bigdata/lv_test_linear 
Do you really want to remove active logical volume lv_test_linear? [y/n]: y
  Logical volume "lv_test_linear" successfully removed
[root@oracledb mydata]# lvremove /dev/vg_bigdata/lv_test_strip 
Do you really want to remove active logical volume lv_test_strip? [y/n]: y
  Logical volume "lv_test_strip" successfully removed

Note:
For any filesystem type, shrink only after unmounting, then remount afterwards.

xfs cannot be shrunk in place. Reducing an xfs LV means unmounting, shrinking the LV, and then reformatting the filesystem before it can be mounted again, which destroys the existing data. If you really must shrink, back up the data first, shrink and reformat, then restore the data afterwards.

5. Performance Test

(1) Linear logical volume

Write a 16 GiB file and time it: 78.7051 s at 218 MB/s.

[root@oracledb mydata]# dd if=/dev/zero of=/mydata/linear/1G bs=4M count=4096
4096+0 records in
4096+0 records out
17179869184 bytes (17 GB) copied, 78.7051 s, 218 MB/s

In another terminal, watch the disk I/O with a 1-second refresh; only one disk receives writes:
iostat 1

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.75   10.54    0.00   88.71

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
nvme0n1           1.00         4.00         0.00          4          0
sda             376.00         0.00    191496.00          0     191496
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
sdd               0.00         0.00         0.00          0          0
dm-0              1.00         4.00         0.00          4          0
dm-1              0.00         0.00         0.00          0          0
dm-2              0.00         0.00         0.00          0          0

(2) Striped volume

Write the same 16 GiB file: 23.0527 s at 745 MB/s, more than three times the throughput of the linear volume.

[root@oracledb mydata]# dd if=/dev/zero of=/mydata/strip/1G bs=4M count=4096
4096+0 records in
4096+0 records out
17179869184 bytes (17 GB) copied, 23.0527 s, 745 MB/s
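
From the two dd runs, the speedup of the striped volume works out to roughly 3.4x (simple division of the elapsed times; with 4 stripes the theoretical ceiling is 4x):

```shell
# Elapsed times from the two dd runs above: 78.7051 s linear, 23.0527 s striped
awk 'BEGIN { printf "striped is %.2fx faster than linear\n", 78.7051 / 23.0527 }'
```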

iostat shows all four disks writing in parallel:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    7.81   15.74    0.00   76.45

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
nvme0n1          14.00        32.00        51.00         32         51
sda             335.00         0.00    171520.00          0     171520
sdc             335.00         0.00    171520.00          0     171520
sdb             348.00         0.00    178176.00          0     178176
sdd             318.00         0.00    161216.00          0     161216
dm-0             13.00        32.00        51.00         32         51
dm-1              0.00         0.00         0.00          0          0
dm-2              0.00         0.00         0.00          0          0

(3) Linear vs. striped volumes

lsblk shows the linear volume sitting entirely on sda1, while the striped volume spans all four disks.
The linear volume allocates VG space sequentially starting on sda1, and would only extend onto sdb1 once sda1 is full.

[root@oracledb strip]# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   3.7T  0 disk 
└─sda1                          8:1    0   3.7T  0 part 
  ├─vg_bigdata-lv_test_strip  253:3    0   256G  0 lvm  /mydata/strip
  └─vg_bigdata-lv_test_linear 253:4    0   256G  0 lvm  /mydata/linear
sdb                             8:16   0   3.7T  0 disk 
└─sdb1                          8:17   0   3.7T  0 part 
  └─vg_bigdata-lv_test_strip  253:3    0   256G  0 lvm  /mydata/strip
sdc                             8:32   0   3.7T  0 disk 
└─sdc1                          8:33   0   3.7T  0 part 
  └─vg_bigdata-lv_test_strip  253:3    0   256G  0 lvm  /mydata/strip
sdd                             8:48   0   3.7T  0 disk 
└─sdd1                          8:49   0   3.7T  0 part 
  └─vg_bigdata-lv_test_strip  253:3    0   256G  0 lvm  /mydata/strip
nvme0n1                       259:0    0 232.9G  0 disk 
├─nvme0n1p1                   259:1    0   500M  0 part /boot
└─nvme0n1p2                   259:2    0 232.4G  0 part 
  ├─centos-root               253:0    0    50G  0 lvm  /
  ├─centos-swap               253:1    0   7.8G  0 lvm  [SWAP]
  └─centos-home               253:2    0 174.5G  0 lvm  /home