Cloud Disk Management and Performance Testing on Alibaba Cloud ECS
1. Environment:
2. Adding a 20 GB Data Disk
The key performance metrics of a block storage product are IOPS, throughput, and access latency.
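These three metrics are related: throughput ≈ IOPS × block size. For example, 2,000 IOPS at a 4 KiB block size amounts to only about 8 MiB/s, while the same IOPS at a 1 MiB block size would be about 2 GiB/s. This is why the IOPS tests below use small blocks and the throughput tests use large ones.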
Constraints
Mount the cloud disk in the operating system
Check whether the disk is visible: fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x319ba3a3
Device Boot Start End Sectors Size Id Type
/dev/vda1 * 2048 83886046 83883999 40G 83 Linux
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Partition and format the disk
fdisk -u /dev/vdb    # create a new partition; remember to write the table with w
fdisk -lu /dev/vdb   # verify the new partition
mkfs -t xfs /dev/vdb1    # format the filesystem as xfs
mkfs -t ext4 /dev/vdb1   # (or format as ext4 instead)
# Add a persistent entry to /etc/fstab so the mount survives a reboot
echo `blkid /dev/vdb1 | awk '{print $2}' | sed 's/\"//g'` /mnt xfs defaults 0 0 >> /etc/fstab
mount /dev/vdb1 /mnt     # mount the partition
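Before rebooting, it is worth confirming that the new fstab entry actually parses (a quick sanity check, assuming the mount above succeeded):
tail -1 /etc/fstab        # should show: UUID=<uuid of vdb1> /mnt xfs defaults 0 0
umount /mnt && mount -a   # re-mount via fstab; an error here means the entry is bad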
Verify the mount
df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 480K 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/vda1 40G 4.5G 36G 12% /
tmpfs 364M 0 364M 0% /run/user/0
/dev/vdb1 20G 175M 20G 1% /mnt
Disk performance testing
Different tools report different baseline results for the same disk: dd, sysbench, iometer, and similar tools are sensitive to test parameter configuration and filesystem effects, so they may not reflect true performance. All performance figures in this example were obtained on Linux with FIO, which is used here as the reference for block storage performance metrics.
yum install libaio -y
yum install libaio-devel -y
yum install fio -y
cd /tmp
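Confirm the installation before running any tests:
fio --version   # e.g. fio-3.19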
Explanation of the command:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=15G -numjobs=20 -runtime=60 -group_reporting -name=mytest
filename=/dev/sdb1 # test target, usually the data directory of the disk under test.
direct=1 # bypass the machine's buffer cache so results reflect the disk itself.
rw=randwrite # pure random writes.
rw=randrw # mixed random read and write I/O.
bs=16k # block size of 16 KiB per I/O.
bsrange=512-2048 # alternatively, specify a range of block sizes.
size=5g # total test file size is 5 GB.
numjobs=30 # run 30 concurrent jobs (threads, given -thread).
runtime=1000 # run for 1000 seconds; if omitted, fio runs until the full 5 GB has been written.
ioengine=psync # use the psync I/O engine.
rwmixwrite=30 # in mixed read/write mode, writes make up 30% (the command above uses rwmixread=70, which is equivalent).
group_reporting # aggregate the statistics of all jobs into one report.
lockmem=1g # limit the test to 1 GB of memory.
zero_buffers # initialize I/O buffers with zeros.
nrfiles=8 # number of files generated per job.
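Note that pointing filename= at a raw device such as /dev/sdb1 destroys the data on it. A non-destructive variant of the same mixed-workload test (a sketch; it targets a scratch file on the filesystem mounted earlier, with size and numjobs scaled down):
fio -filename=/mnt/fiotest -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=1G -numjobs=4 -runtime=60 -group_reporting -name=mytest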
Random write test (IOPS)
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Write_Testing
On an Alibaba Cloud ECS high-speed cloud disk:
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.19
Starting 1 process
Rand_Write_Testing: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][78.3%][w=8003KiB/s][w=2000 IOPS][eta 00m:28s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=1738: Thu Jan 28 17:33:55 2021
write: IOPS=2046, BW=8186KiB/s (8382kB/s)(800MiB/100063msec); 0 zone resets
slat (usec): min=2, max=119260, avg=14.52, stdev=705.72
clat (usec): min=439, max=210335, avg=62530.03, stdev=11846.82
lat (usec): min=449, max=210338, avg=62544.66, stdev=11831.16
clat percentiles (msec):
1.00th=[ 23], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 61],
30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63],
70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 65], 95.00th=[ 73],
99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 171], 99.95th=[ 178],
99.99th=[ 192]
bw ( KiB/s): min= 6256, max= 9232, per=100.00%, avg=8196.71, stdev=470.03, samples=199
iops : min= 1564, max= 2308, avg=2049.17, stdev=117.51, samples=199
lat (usec) : 500=0.01%, 750=0.04%, 1000=0.01%
lat (msec) : 2=0.02%, 4=0.08%, 10=0.26%, 20=0.44%, 50=1.50%
lat (msec) : 100=95.54%, 250=2.11%
cpu : usr=0.65%, sys=2.01%, ctx=113436, majf=0, minf=11
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,204776,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=8186KiB/s (8382kB/s), 8186KiB/s-8186KiB/s (8382kB/s-8382kB/s), io=800MiB (839MB), run=100063-100063msec
Disk stats (read/write):
vda: ios=92/210741, merge=0/18, ticks=5584/13156462, in_queue=12982736, util=96.11%
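As a cross-check, the report is internally consistent: 2,046 IOPS × 4 KiB per I/O ≈ 8,184 KiB/s, which matches the reported bandwidth of 8,186 KiB/s.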
For comparison, the same test on a China Mobile Cloud database physical machine:
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/73490K /s] [0 /17.1K iops] [eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=58333
write: io=1024.0MB, bw=53942KB/s, iops=13485 , runt= 19439msec
slat (usec): min=2 , max=107535 , avg= 5.92, stdev=210.55
clat (usec): min=957 , max=275047 , avg=9484.24, stdev=12115.21
lat (usec): min=961 , max=275053 , avg=9490.31, stdev=12116.81
clat percentiles (usec):
1.00th=[ 1400], 5.00th=[ 1944], 10.00th=[ 2512], 20.00th=[ 3632],
30.00th=[ 4704], 40.00th=[ 5792], 50.00th=[ 6816], 60.00th=[ 7904],
70.00th=[ 9024], 80.00th=[10816], 90.00th=[16512], 95.00th=[26752],
99.00th=[65280], 99.50th=[87552], 99.90th=[134144], 99.95th=[150528],
99.99th=[187392]
bw (KB/s) : min= 7741, max=89976, per=99.37%, avg=53602.26, stdev=24839.11
lat (usec) : 1000=0.01%
lat (msec) : 2=5.51%, 4=17.94%, 10=52.98%, 20=15.90%, 50=5.97%
lat (msec) : 100=1.35%, 250=0.35%, 500=0.01%
cpu : usr=2.18%, sys=14.21%, ctx=168601, majf=0, minf=26
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1024.0MB, aggrb=53941KB/s, minb=53941KB/s, maxb=53941KB/s, mint=19439msec, maxt=19439msec
Disk stats (read/write):
ios=0/261690, merge=0/0, ticks=0/2473116, in_queue=2473587, util=99.56%, aggrios=0/261627, aggrmerge=0/526, aggrticks=0/2473565, aggrin_queue=2473564, aggrutil=99.47% :
xvda: ios=0/261627, merge=0/526, ticks=0/2473565, in_queue=2473564, util=99.47%
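At the same queue depth of 128, the physical machine sustains roughly 13.5K random-write IOPS at about 9.5 ms average completion latency, versus roughly 2K IOPS at about 62 ms on the high-speed cloud disk; the tight latency cluster around 61 to 63 ms suggests the cloud disk is being throttled at its provisioned IOPS limit.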
Random read test (IOPS)
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing
Sequential write throughput (write bandwidth)
fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Write_PPS_Testing
Sequential read throughput (read bandwidth)
fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Read_PPS_Testing
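The parameter pattern across these tests is deliberate: the IOPS tests use small 4 KiB blocks with a deep queue (iodepth=128) to maximize request count, the throughput tests use large 1,024 KiB blocks with iodepth=64 to saturate bandwidth, and the latency tests below use iodepth=1 so that each I/O completes before the next is issued, isolating per-request latency.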
Random write latency:
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=iotest -name=Rand_Write_Latency_Testing
Random read latency:
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=iotest -name=Rand_Read_Latency_Testing
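With iodepth=1, the average clat in the output approximates the disk's native access latency. To extract just that figure programmatically (a sketch assuming fio 3.x, whose JSON output reports latencies in nanoseconds, and that jq is installed):
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=iotest -name=Rand_Read_Latency_Testing --output-format=json | jq '.jobs[0].read.clat_ns.mean / 1000'   # mean read completion latency in microseconds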
3. Takeaways:
For the system disk, which does not see heavy read/write traffic, the high-speed cloud disk tier is entirely sufficient.
For data disks, choose the disk type based on the workload's characteristics and day-to-day monitoring data.
On an I/O-optimized instance, the disks essentially meet Alibaba Cloud's published specifications.