Contents
- 1. What is FIO?
- 2. Installing and Using FIO
- 2.1 Installing FIO
- 2.2 Using FIO
- 3. Installing gcc with rpm
- 4. Measuring disk read/write speed with time + dd
1. What is FIO?
The mainstream third-party I/O benchmarking tools are fio, iometer, and Orion, each with its own strengths. fio is the most convenient on Linux. It is an I/O tool for stress-testing and validating hardware, and it supports 13 different I/O engines, including sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more, plus I/O priorities (on newer Linux kernels), rate-limited I/O, and forked or threaded jobs.
2. Installing and Using FIO
2.1 Installing FIO
Build from source:
wget http://brick.kernel.dk/snaps/fio-2.2.5.tar.gz
tar zxvf fio-2.2.5.tar.gz
cd fio-2.2.5
./configure
make
make install
2.2 Using FIO
The access pattern has a large impact on the measured figures. To characterize a disk, test the following five scenarios: sequential read, random read, sequential write, random write, and mixed read/write.
List block devices: lsblk -mp
Test commands (note: writing to /dev/sda directly destroys any data on that disk; point filename at a scratch device or a file if the disk is in use):
# Sequential read
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=120 -group_reporting -name=mytest
# Random read
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=120 -group_reporting -name=mytest
# Sequential write
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=60 -group_reporting -name=mytest
# Random write
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=20G -numjobs=10 -runtime=60 -group_reporting -name=mytest
# Mixed random read/write (70% reads)
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
Parameter notes:
filename=/dev/sdb1 — the test target, usually the device or data directory of the disk under test.
direct=1 — bypass the OS buffer cache so the results reflect the device more accurately.
rw=randwrite — random-write I/O.
rw=randrw — mixed random read and write I/O.
bs=16k — each I/O uses a 16k block.
bsrange=512-2048 — like bs, but specifies a range of block sizes.
size=5g — total amount of data transferred in this test is 5 GB.
numjobs=30 — run the test with 30 threads.
runtime=1000 — run for 1000 seconds; if omitted, fio runs until the full size has been transferred in bs-sized chunks.
ioengine=psync — use the psync I/O engine.
rwmixwrite=30 — in mixed read/write mode, writes account for 30%.
group_reporting — aggregate the per-thread results into one summary.
Additional options:
lockmem=1g — use only 1 GB of memory for the test.
zero_buffers — initialize I/O buffers with zeros.
nrfiles=8 — number of files generated per process.
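The command-line flags above can also be kept in a fio job file, which is easier to version and reuse. A minimal sketch, written via a heredoc; the section name seqread, the file seqread.fio, and the target test.dat are arbitrary choices, and targeting a regular file instead of /dev/sda keeps the run non-destructive:

```shell
# Write a job file equivalent to the sequential-read command above,
# but against a regular file instead of a raw device.
cat > seqread.fio <<'EOF'
[seqread]
filename=test.dat
direct=1
iodepth=1
thread
rw=read
ioengine=psync
bs=16k
size=1G
numjobs=4
runtime=60
group_reporting
EOF
# Run it with:  fio seqread.fio
```

Each `[section]` becomes one job; options placed above the first section would apply globally to all jobs.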
Reading the results:
io — total amount of data transferred
bw — bandwidth, in KB/s
iops — I/O operations per second
runt — total run time
lat (msec) — latency (milliseconds)
msec — milliseconds
usec — microseconds
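When running many scenarios it is convenient to pull bw and iops out of the summary line programmatically. A small sketch using sed; the example line is copied from the sequential-read run shown below, and the field layout assumed here is fio 2.x output:

```shell
# Extract bw and iops from a fio 2.x summary line.
line='read : io=19617MB, bw=167395KB/s, iops=10462, runt=120002msec'
bw=$(echo "$line"   | sed -n 's/.*bw=\([0-9]*\)KB\/s.*/\1/p')
iops=$(echo "$line" | sed -n 's/.*iops=\([0-9]*\).*/\1/p')
echo "bw=${bw} KB/s, iops=${iops}"   # bw=167395 KB/s, iops=10462
```

Newer fio versions can emit JSON directly (`--output-format=json`), which is more robust than text scraping.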
Sample runs:
# Sequential read
[root@localhost dev]# fio -filename=/dev/sda -direct=1
-iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=20G
-numjobs=30 -runtime=120 -group_reporting -name=mytest
mytest: (g=0): rw=read, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.5
Starting 30 threads
Jobs: 30 (f=30): [R(30)] [100.0% done] [299.7MB/0KB/0KB /s] [19.2K/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=13013: Wed Nov 18 00:51:32 2020
read : io=19617MB, bw=167395KB/s, iops=10462, runt=120002msec
clat (usec): min=87, max=30247, avg=2860.29, stdev=1520.73
lat (usec): min=88, max=30249, avg=2861.91, stdev=1520.84
clat percentiles (usec):
| 1.00th=[ 772], 5.00th=[ 948], 10.00th=[ 1288], 20.00th=[ 1480],
| 30.00th=[ 1528], 40.00th=[ 1624], 50.00th=[ 2608], 60.00th=[ 4080],
| 70.00th=[ 4192], 80.00th=[ 4256], 90.00th=[ 4320], 95.00th=[ 4384],
| 99.00th=[ 4576], 99.50th=[ 6176], 99.90th=[13248], 99.95th=[17536],
| 99.99th=[25472]
bw (KB /s): min= 3392, max=16224, per=3.34%, avg=5584.70, stdev=3100.08
lat (usec) : 100=0.01%, 250=0.04%, 500=0.20%, 750=0.61%, 1000=4.92%
lat (msec) : 2=40.85%, 4=10.25%, 10=42.94%, 20=0.15%, 50=0.03%
cpu : usr=0.46%, sys=1.22%, ctx=1255559, majf=0, minf=129
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=1255481/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=19617MB, aggrb=167394KB/s, minb=167394KB/s, maxb=167394KB/s, mint=120002msec, maxt=120002msec
Disk stats (read/write):
sda: ios=1252848/66, merge=0/6, ticks=3330478/516, in_queue=3331066, util=99.86%
# Random read
[root@localhost dev]# fio -filename=/dev/sda -direct=1
-iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k
-size=20G -numjobs=30 -runtime=120 -group_reporting -name=mytest
mytest: (g=0): rw=randread, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.5
Starting 30 threads
Jobs: 30 (f=30): [r(30)] [100.0% done] [243.5MB/0KB/0KB /s] [15.6K/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=13566: Wed Nov 18 01:02:32 2020
read : io=28519MB, bw=243317KB/s, iops=15207, runt=120024msec
clat (usec): min=64, max=298698, avg=1964.56, stdev=5192.59
lat (usec): min=65, max=298699, avg=1966.25, stdev=5192.59
clat percentiles (usec):
| 1.00th=[ 183], 5.00th=[ 262], 10.00th=[ 314], 20.00th=[ 402],
| 30.00th=[ 478], 40.00th=[ 556], 50.00th=[ 636], 60.00th=[ 724],
| 70.00th=[ 828], 80.00th=[ 1004], 90.00th=[ 5344], 95.00th=[10048],
| 99.00th=[22400], 99.50th=[29824], 99.90th=[57088], 99.95th=[77312],
| 99.99th=[138240]
bw (KB /s): min= 1526, max=12952, per=3.34%, avg=8132.55, stdev=1505.70
lat (usec) : 100=0.01%, 250=4.12%, 500=28.53%, 750=30.43%, 1000=16.71%
lat (msec) : 2=7.27%, 4=1.51%, 10=6.40%, 20=3.73%, 50=1.15%
lat (msec) : 100=0.11%, 250=0.03%, 500=0.01%
cpu : usr=0.69%, sys=1.72%, ctx=1825268, majf=0, minf=137
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=1825239/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=28519MB, aggrb=243316KB/s, minb=243316KB/s, maxb=243316KB/s, mint=120024msec, maxt=120024msec
Disk stats (read/write):
sda: ios=1822310/40, merge=0/4, ticks=3357926/142, in_queue=3356575, util=100.00%
# Sequential write
[root@localhost dev]# fio -filename=/dev/sda -direct=1
-iodepth 1 -thread -rw=write -ioengine=psync -bs=16k
-size=20G -numjobs=30 -runtime=60 -group_reporting -name=mytest
mytest: (g=0): rw=write, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.5
Starting 30 threads
Jobs: 30 (f=30): [W(30)] [100.0% done] [0KB/91940KB/0KB /s] [0/5746/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=13951: Wed Nov 18 01:08:02 2020
write: io=4222.2MB, bw=72049KB/s, iops=4503, runt= 60006msec
clat (usec): min=852, max=7910.4K, avg=6653.17, stdev=83576.94
lat (usec): min=854, max=7910.4K, avg=6655.55, stdev=83576.94
clat percentiles (usec):
| 1.00th=[ 1528], 5.00th=[ 2040], 10.00th=[ 2416], 20.00th=[ 3024],
| 30.00th=[ 3504], 40.00th=[ 3952], 50.00th=[ 4448], 60.00th=[ 5152],
| 70.00th=[ 6048], 80.00th=[ 7520], 90.00th=[10304], 95.00th=[13888],
| 99.00th=[20352], 99.50th=[23168], 99.90th=[43264], 99.95th=[122368],
| 99.99th=[7897088]
bw (KB /s): min= 18, max= 3800, per=3.84%, avg=2767.85, stdev=740.79
lat (usec) : 1000=0.01%
lat (msec) : 2=4.57%, 4=36.36%, 10=48.56%, 20=9.37%, 50=1.03%
lat (msec) : 100=0.03%, 250=0.03%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : >=2000=0.01%
cpu : usr=0.24%, sys=0.56%, ctx=270313, majf=0, minf=8
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=270209/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4222.2MB, aggrb=72048KB/s, minb=72048KB/s, maxb=72048KB/s, mint=60006msec, maxt=60006msec
Disk stats (read/write):
sda: ios=65/270047, merge=0/4, ticks=117/1777901, in_queue=1778200, util=100.00%
# Random write
[root@localhost dev]# fio -filename=/dev/sda -direct=1
-iodepth 1 -thread -rw=randwrite -ioengine=psync
-bs=16k -size=20G -numjobs=10 -runtime=60
-group_reporting -name=mytest
mytest: (g=0): rw=randwrite, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.5
Starting 10 threads
Jobs: 10 (f=10): [w(10)] [100.0% done] [0KB/25974KB/0KB /s] [0/1623/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=10): err= 0: pid=15990: Wed Nov 18 01:43:07 2020
write: io=2237.6MB, bw=38176KB/s, iops=2386, runt= 60017msec
clat (usec): min=862, max=294808, avg=4181.23, stdev=6758.59
lat (usec): min=865, max=294810, avg=4183.71, stdev=6758.60
clat percentiles (usec):
| 1.00th=[ 1416], 5.00th=[ 1688], 10.00th=[ 1880], 20.00th=[ 2128],
| 30.00th=[ 2352], 40.00th=[ 2576], 50.00th=[ 2832], 60.00th=[ 3184],
| 70.00th=[ 3824], 80.00th=[ 4768], 90.00th=[ 6432], 95.00th=[ 9408],
| 99.00th=[23680], 99.50th=[38144], 99.90th=[97792], 99.95th=[140288],
| 99.99th=[205824]
bw (KB /s): min= 278, max= 6902, per=10.08%, avg=3849.34, stdev=1481.86
lat (usec) : 1000=0.01%
lat (msec) : 2=14.80%, 4=57.46%, 10=23.25%, 20=3.14%, 50=1.02%
lat (msec) : 100=0.23%, 250=0.09%, 500=0.01%
cpu : usr=0.42%, sys=1.03%, ctx=143211, majf=0, minf=13
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=143202/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=2237.6MB, aggrb=38176KB/s, minb=38176KB/s, maxb=38176KB/s, mint=60017msec, maxt=60017msec
Disk stats (read/write):
sda: ios=0/143202, merge=0/1, ticks=0/590863, in_queue=590806, util=99.95%
# Mixed random read/write
[root@localhost dev]# fio -filename=/dev/sda -direct=1
-iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync
-bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting
-name=mytest -ioscheduler=noop
mytest: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.5
Starting 30 threads
Jobs: 30 (f=30): [m(30)] [100.0% done] [52139KB/20347KB/0KB /s] [3258/1271/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=14932: Wed Nov 18 01:25:11 2020
read : io=5176.4MB, bw=52996KB/s, iops=3312, runt=100012msec
clat (usec): min=159, max=803531, avg=5256.48, stdev=16498.05
lat (usec): min=160, max=803532, avg=5258.23, stdev=16498.05
clat percentiles (usec):
| 1.00th=[ 540], 5.00th=[ 652], 10.00th=[ 732], 20.00th=[ 868],
| 30.00th=[ 1004], 40.00th=[ 1176], 50.00th=[ 1448], 60.00th=[ 1864],
| 70.00th=[ 2416], 80.00th=[ 4768], 90.00th=[10432], 95.00th=[19584],
| 99.00th=[66048], 99.50th=[102912], 99.90th=[230400], 99.95th=[296960],
| 99.99th=[440320]
bw (KB /s): min= 52, max= 4215, per=3.37%, avg=1785.74, stdev=606.82
write: io=2203.0MB, bw=22556KB/s, iops=1409, runt=100012msec
clat (msec): min=1, max=304, avg= 8.90, stdev= 9.28
lat (msec): min=1, max=304, avg= 8.90, stdev= 9.28
clat percentiles (usec):
| 1.00th=[ 1816], 5.00th=[ 2288], 10.00th=[ 2768], 20.00th=[ 4128],
| 30.00th=[ 5216], 40.00th=[ 6048], 50.00th=[ 6880], 60.00th=[ 7840],
| 70.00th=[ 9152], 80.00th=[11072], 90.00th=[15808], 95.00th=[22144],
| 99.00th=[42240], 99.50th=[56064], 99.90th=[119296], 99.95th=[156672],
| 99.99th=[195584]
bw (KB /s): min= 26, max= 1902, per=3.37%, avg=760.42, stdev=265.24
lat (usec) : 250=0.01%, 500=0.30%, 750=7.45%, 1000=13.07%
lat (msec) : 2=23.78%, 4=15.92%, 10=24.68%, 20=9.52%, 50=4.00%
lat (msec) : 100=0.86%, 250=0.35%, 500=0.05%, 750=0.01%, 1000=0.01%
cpu : usr=0.25%, sys=0.62%, ctx=472296, majf=0, minf=13
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=331266/w=140992/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=5176.4MB, aggrb=52996KB/s, minb=52996KB/s, maxb=52996KB/s, mint=100012msec, maxt=100012msec
WRITE: io=2203.0MB, aggrb=22556KB/s, minb=22556KB/s, maxb=22556KB/s, mint=100012msec, maxt=100012msec
Disk stats (read/write):
sda: ios=330691/140789, merge=0/7, ticks=1713487/1239467, in_queue=2954157, util=100.00%
3. Installing gcc with rpm
Note: On a CentOS 7 (64-bit) machine without internet access, ./configure failed for lack of a compiler, and the various gcc tarball install guides found online did not work; the build kept failing with:
"configure: error: in /opt/gcc-8.4.0': configure: error: no acceptable C compiler found in $PATH"
So gcc was installed from rpm packages instead.
Download the following files from http://vault.centos.org/7.0.1406/os/x86_64/Packages/:
cpp-4.8.2-16.el7.x86_64.rpm
gcc-4.8.2-16.el7.x86_64.rpm
glibc-2.17-55.el7.x86_64.rpm
glibc-common-2.17-55.el7.x86_64.rpm
glibc-devel-2.17-55.el7.x86_64.rpm
glibc-headers-2.17-55.el7.x86_64.rpm
glibc-static-2.17-55.el7.x86_64.rpm
glibc-utils-2.17-55.el7.x86_64.rpm
kernel-headers-3.10.0-123.el7.x86_64.rpm
libmpc-1.0.1-3.el7.x86_64.rpm
mpfr-3.1.1-4.el7.x86_64.rpm
Install them:
mkdir rpm_tmp
cd rpm_tmp
# copy the downloaded .rpm files into this directory first
rpm -Uvh *.rpm --nodeps --force
Verify the installation: gcc --version
4. Measuring disk read/write speed with time + dd
First, two special devices and the relevant parameters:
- time measures elapsed time; dd copies data, reading from if and writing to of.
- if=/dev/zero generates a stream of zero bytes without any disk I/O, so it can be used to measure pure write speed.
- Likewise, of=/dev/null (a bottomless sink) generates no disk I/O, so it can be used to measure pure read speed.
- Copying, say, /tmp/test to /var exercises reads and writes at the same time.
- bs is the size of each read or write, i.e. one block; count is the number of blocks.
When writing to the drive we simply read from the endless zero source /dev/zero; when reading from the drive we read back the file just written and send the output to the sink /dev/null. Throughout, dd tracks the transfer rate and reports it at the end.
4.1 Measuring disk write speed
time dd if=/dev/zero of=test.dbf bs=8k count=300000
/dev/zero is a pseudo-device that produces a stream of zero bytes and generates no I/O of its own, so all the I/O lands on the output file. That file is only written, so the command measures the disk's write capability: here 300000 blocks of 8k each. Typical output (a longer run gives a more accurate result, so a larger count is preferable):
300000+0 records in
300000+0 records out
real 0m36.669s
user 0m0.185s
sys 0m9.340s
So the write speed is 8 × 300000 / 1024 / 36.669 ≈ 63.92 MB/s.
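The arithmetic above (block size × block count ÷ elapsed seconds) can be wrapped in a small helper so it doesn't have to be redone by hand after every run. A sketch using awk; the helper name throughput is arbitrary, and the numbers plugged in are the ones from the runs in this section:

```shell
# throughput <KB per block> <number of blocks> <elapsed seconds>  ->  MB/s
throughput() {
    awk -v bs="$1" -v n="$2" -v t="$3" \
        'BEGIN { printf "%.2f MB/s\n", bs * n / 1024 / t }'
}
throughput 8 300000 36.669   # write test: 63.92 MB/s
throughput 8 448494 51.070   # read test:  68.61 MB/s
```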
4.2 Measuring disk read speed
time dd if=/dev/sda1 of=/dev/null bs=8k
/dev/sda1 is a physical partition, so reading it generates real I/O, while /dev/null is a pseudo-device (a black hole) and writing to it generates none; all the I/O therefore happens on /dev/sda1, and the command measures the disk's read capability. (Press Ctrl+C to stop the test.) Typical output:
448494+0 records in
448494+0 records out
real 0m51.070s
user 0m0.054s
sys 0m10.028s
So the read speed on sda1 is 8 × 448494 / 1024 / 51.070 ≈ 68.61 MB/s.
dd copies from the if path to the of path. bs and count are given as bs=xxx count=mmm:
bs=600 count=1 copies the first 600-byte region (the default block size is 512 bytes);
bs=512 count=2 copies the first two blocks, 1024 bytes in total.
dd's summary looks like:
x+y records in
m+n records out
where x and m are the numbers of complete blocks (of the size given by bs) read and written, and y and n are the numbers of incomplete (partial) blocks read and written.
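A safe way to see full and partial blocks for yourself, using a small scratch file rather than a real partition (the file names in.dat/out.dat are arbitrary):

```shell
# Create a 700-byte input file, then copy it with bs=512:
# 700 bytes = 1 complete 512-byte block + 1 partial 188-byte block.
dd if=/dev/zero of=in.dat bs=700 count=1 2>/dev/null
LC_ALL=C dd if=in.dat of=out.dat bs=512 2>dd.log   # LC_ALL=C keeps messages in English
records_in=$(grep 'records in' dd.log)
echo "$records_in"                                 # 1+1 records in
rm -f in.dat out.dat dd.log
```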
Measuring simultaneous read and write
time dd if=/dev/sdb of=/testrw.dbf bs=4k
Here one end is a physical partition and the other a real file, so both sides generate I/O (reads on /dev/sdb, writes on /testrw.dbf). If both are on the same disk, the command measures that disk's simultaneous read/write capability.
A read/write test script for running on a router:
#!/bin/sh
# /dev/mmcblk0p1 is the device; /mnt/mmcblk0p1 is its mount point, which is
# where the test file can be written.
rm /mnt/mmcblk0p1/ddtest
mytime=1
while [ 1 ]
do
    echo "=NO."$mytime" TEST="
    # write a test file under the mount point
    time dd if=/dev/zero of=/mnt/mmcblk0p1/ddtest bs=8k count=10240
    sleep 5
    rm /mnt/mmcblk0p1/ddtest
    sleep 3
    echo ""
    let mytime=mytime+1
done
Each block is 8k and 10240 blocks are written, so each pass writes 8k × 10240 = 80 MB.
dd only gives a rough result, and it measures sequential I/O rather than random I/O; in principle, the larger the file, the more accurate the result. Note also that iflag/oflag provide a direct mode: in direct mode each write request is turned into an I/O command sent straight to the disk, while in non-direct mode a write counts as done once the data reaches the system cache, and the operating system decides when the cached data is actually flushed to disk.
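The cache effect is easy to observe: a buffered write can return long before the data is on disk. One way to sketch this without oflag=direct (which requires filesystem support for O_DIRECT and aligned block sizes) is conv=fsync, which forces a flush before dd reports, so the timing reflects the disk rather than the page cache. The path /tmp/ddtest and count=64 are arbitrary:

```shell
# Buffered: data may still sit in the page cache when dd exits.
time dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null
# Flush to disk before reporting, so the timing reflects real disk speed.
time dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync 2>/dev/null
sz=$(stat -c%s /tmp/ddtest)   # 64 MiB = 67108864 bytes
rm -f /tmp/ddtest
```

On a lightly loaded machine the second run is usually noticeably slower, and its figure is the one closer to fio's direct=1 results.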