CephFS Performance Testing
Test Background
System environment:
Test tool: fio
Tool version: fio-2.2.8
Test directory: /data/mycephfs
Disks: each a single-disk RAID 0 volume with an ext4 filesystem
Network: three Gigabit NICs bonded together
Ceph environment:
Ceph version:
Replication factor 2 (two copies of each object). The cluster consists of two machines, each running four OSDs, with each OSD backed by one physical disk:
# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 28.34558 root default
-2 14.17279 host bdc217
0 3.54320 osd.0 up 1.00000 1.00000
1 3.54320 osd.1 up 1.00000 1.00000
2 3.54320 osd.2 up 1.00000 1.00000
3 3.54320 osd.3 up 1.00000 1.00000
-3 14.17279 host bdc218
4 3.54320 osd.4 up 1.00000 1.00000
5 3.54320 osd.5 up 1.00000 1.00000
6 3.54320 osd.6 up 1.00000 1.00000
7 3.54320 osd.7 up 1.00000 1.00000
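The fio tests below run against /data/mycephfs. For completeness, here is a minimal sketch of how such a directory is typically mounted with the kernel CephFS client; the monitor address and keyring path are placeholders, not taken from the original setup:
# mkdir -p /data/mycephfs
# mount -t ceph 192.168.1.217:6789:/ /data/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret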
CephFS Performance Tests
Random read test
# fio -filename=/data/mycephfs/dlw1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=10G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
mytest: (g=0): rw=randread, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 10 threads
mytest: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [_(8),r(1),_(1)] [100.0% done] [184.5MB/0KB/0KB /s] [11.8K/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=10): err= 0: pid=66317: Mon Apr 25 10:10:17 2016
read : io=102400MB, bw=219746KB/s, iops=13734, runt=477176msec
clat (usec): min=235, max=6375, avg=724.61, stdev=379.12
lat (usec): min=235, max=6375, avg=724.86, stdev=379.12
clat percentiles (usec):
| 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 346], 20.00th=[ 370],
| 30.00th=[ 394], 40.00th=[ 430], 50.00th=[ 588], 60.00th=[ 844],
| 70.00th=[ 980], 80.00th=[ 1096], 90.00th=[ 1256], 95.00th=[ 1384],
| 99.00th=[ 1640], 99.50th=[ 1736], 99.90th=[ 1928], 99.95th=[ 2008],
| 99.99th=[ 2160]
bw (KB /s): min=18528, max=26176, per=10.01%, avg=21998.59, stdev=518.40
lat (usec) : 250=0.01%, 500=46.85%, 750=8.01%, 1000=16.67%
lat (msec) : 2=28.42%, 4=0.05%, 10=0.01%
cpu : usr=0.69%, sys=2.72%, ctx=6650534, majf=0, minf=3005
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=6553600/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=102400MB, aggrb=219746KB/s, minb=219746KB/s, maxb=219746KB/s, mint=477176msec, maxt=477176msec
Sequential read test
# fio -filename=/data/mycephfs/dlw2 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
mytest: (g=0): rw=read, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 30 threads
mytest: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 30 (f=30): [R(30)] [100.0% done] [163.3MB/0KB/0KB /s] [10.5K/0/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=66442: Mon Apr 25 10:35:08 2016
read : io=223062MB, bw=228390KB/s, iops=14274, runt=1000112msec
clat (usec): min=208, max=218148, avg=2098.94, stdev=2482.28
lat (usec): min=208, max=218149, avg=2099.20, stdev=2482.30
clat percentiles (usec):
| 1.00th=[ 245], 5.00th=[ 266], 10.00th=[ 278], 20.00th=[ 306],
| 30.00th=[ 342], 40.00th=[ 422], 50.00th=[ 1004], 60.00th=[ 2024],
| 70.00th=[ 2992], 80.00th=[ 4128], 90.00th=[ 5408], 95.00th=[ 6304],
| 99.00th=[ 8096], 99.50th=[ 8896], 99.90th=[10560], 99.95th=[11456],
| 99.99th=[13760]
bw (KB /s): min= 1431, max=56864, per=3.34%, avg=7627.18, stdev=7255.51
lat (usec) : 250=1.64%, 500=42.41%, 750=4.67%, 1000=1.28%
lat (msec) : 2=9.75%, 4=18.98%, 10=21.10%, 20=0.17%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%
cpu : usr=0.20%, sys=0.93%, ctx=15649626, majf=0, minf=4342
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=14275945/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=223062MB, aggrb=228389KB/s, minb=228389KB/s, maxb=228389KB/s, mint=1000112msec, maxt=1000112msec
Random write test
# fio -filename=/data/mycephfs/dlw3 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=4k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest_4k_10G_randwrite
mytest_4k_10G_randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 30 threads
mytest_4k_10G_randwrite: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 30 (f=30): [w(30)] [100.0% done] [0KB/1552KB/0KB /s] [0/388/0 iops] [eta 00m:00s]
mytest_4k_10G_randwrite: (groupid=0, jobs=30): err= 0: pid=66723: Mon Apr 25 10:57:34 2016
write: io=2893.7MB, bw=2962.3KB/s, iops=740, runt=1000299msec
clat (msec): min=1, max=2890, avg=40.50, stdev=137.62
lat (msec): min=1, max=2890, avg=40.50, stdev=137.62
clat percentiles (usec):
| 1.00th=[ 1608], 5.00th=[ 1800], 10.00th=[ 1960], 20.00th=[ 2288],
| 30.00th=[ 2608], 40.00th=[ 2960], 50.00th=[ 3408], 60.00th=[ 4320],
| 70.00th=[ 9408], 80.00th=[26240], 90.00th=[73216], 95.00th=[211968],
| 99.00th=[659456], 99.50th=[856064], 99.90th=[1712128], 99.95th=[1892352],
| 99.99th=[2179072]
bw (KB /s): min= 1, max= 807, per=4.18%, avg=123.91, stdev=153.04
lat (msec) : 2=11.20%, 4=46.39%, 10=13.04%, 20=6.62%, 50=9.44%
lat (msec) : 100=5.35%, 250=3.41%, 500=2.71%, 750=1.15%, 1000=0.31%
lat (msec) : 2000=0.35%, >=2000=0.03%
cpu : usr=0.02%, sys=0.08%, ctx=742505, majf=0, minf=5223
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=740780/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=2893.7MB, aggrb=2962KB/s, minb=2962KB/s, maxb=2962KB/s, mint=1000299msec, maxt=1000299msec
Sequential write test
# fio -filename=/data/mycephfs/dlw4 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
mytest: (g=0): rw=write, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 30 threads
mytest: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 30 (f=30): [W(30)] [100.0% done] [0KB/62576KB/0KB /s] [0/3911/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=67042: Mon Apr 25 11:18:34 2016
write: io=74165MB, bw=75942KB/s, iops=4746, runt=1000042msec
clat (msec): min=1, max=798, avg= 6.32, stdev= 9.17
lat (msec): min=1, max=798, avg= 6.32, stdev= 9.17
clat percentiles (usec):
| 1.00th=[ 1624], 5.00th=[ 1896], 10.00th=[ 2096], 20.00th=[ 2384],
| 30.00th=[ 2672], 40.00th=[ 3024], 50.00th=[ 3504], 60.00th=[ 4128],
| 70.00th=[ 5152], 80.00th=[ 7136], 90.00th=[13376], 95.00th=[20864],
| 99.00th=[44288], 99.50th=[55552], 99.90th=[85504], 99.95th=[100864],
| 99.99th=[154624]
bw (KB /s): min= 20, max= 8416, per=3.34%, avg=2539.57, stdev=902.77
lat (msec) : 2=7.59%, 4=50.38%, 10=27.64%, 20=9.01%, 50=4.67%
lat (msec) : 100=0.65%, 250=0.05%, 500=0.01%, 750=0.01%, 1000=0.01%
cpu : usr=0.09%, sys=0.40%, ctx=4749694, majf=0, minf=4729
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=4746566/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=74165MB, aggrb=75941KB/s, minb=75941KB/s, maxb=75941KB/s, mint=1000042msec, maxt=1000042msec
Mixed random read/write test
# fio -filename=/data/mycephfs/dlw5 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
mytest: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 30 threads
mytest: Laying out IO file(s) (1 file(s) / 10240MB)
fio: os or kernel doesn't support IO scheduler switching
fio: os or kernel doesn't support IO scheduler switching
Jobs: 16 (f=16): [m(2),_(2),m(2),_(1),m(2),_(1),m(1),_(1),m(2),_(1),m(1),_(4),m(5),_(2),m(1),_(2)] [6.7% done] [31056KB/14304KB/0KB /s] [1941/894/0 iops] [eta 23m:19s]
mytest: (groupid=0, jobs=30): err= 0: pid=67335: Mon Apr 25 11:34:37 2016
read : io=2307.8MB, bw=23428KB/s, iops=1464, runt=100865msec
clat (usec): min=266, max=1656.5K, avg=2762.94, stdev=30549.38
lat (usec): min=266, max=1656.5K, avg=2763.28, stdev=30549.39
clat percentiles (usec):
| 1.00th=[ 338], 5.00th=[ 374], 10.00th=[ 406], 20.00th=[ 466],
| 30.00th=[ 540], 40.00th=[ 644], 50.00th=[ 748], 60.00th=[ 884],
| 70.00th=[ 1128], 80.00th=[ 1512], 90.00th=[ 2256], 95.00th=[ 3024],
| 99.00th=[19584], 99.50th=[65280], 99.90th=[477184], 99.95th=[741376],
| 99.99th=[1302528]
bw (KB /s): min= 10, max= 6176, per=4.30%, avg=1008.04, stdev=1317.29
write: io=981.72MB, bw=9966.5KB/s, iops=622, runt=100865msec
clat (msec): min=1, max=1649, avg=41.33, stdev=130.05
lat (msec): min=1, max=1649, avg=41.33, stdev=130.05
clat percentiles (usec):
| 1.00th=[ 1896], 5.00th=[ 2160], 10.00th=[ 2352], 20.00th=[ 2704],
| 30.00th=[ 3088], 40.00th=[ 3568], 50.00th=[ 4128], 60.00th=[ 4960],
| 70.00th=[ 6368], 80.00th=[12352], 90.00th=[50944], 95.00th=[354304],
| 99.00th=[692224], 99.50th=[765952], 99.90th=[995328], 99.95th=[1138688],
| 99.99th=[1515520]
bw (KB /s): min= 10, max= 2528, per=4.14%, avg=412.64, stdev=547.63
lat (usec) : 500=17.55%, 750=17.53%, 1000=10.90%
lat (msec) : 2=15.95%, 4=20.77%, 10=9.57%, 20=2.53%, 50=1.77%
lat (msec) : 100=0.75%, 250=0.68%, 500=1.18%, 750=0.60%, 1000=0.18%
lat (msec) : 2000=0.04%
cpu : usr=0.05%, sys=0.19%, ctx=211037, majf=0, minf=1
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=147692/w=62829/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Synchronous I/O (sequential write) test
# fio -filename=/data/mycephfs/dlw6 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=50G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
mytest: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
fio-2.2.8
Starting 10 threads
mytest: Laying out IO file(s) (1 file(s) / 51200MB)
Jobs: 10 (f=10): [W(10)] [100.0% done] [0KB/15332KB/0KB /s] [0/3833/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=10): err= 0: pid=67467: Mon Apr 25 11:58:12 2016
write: io=14740MB, bw=15094KB/s, iops=3773, runt=1000017msec
clat (msec): min=1, max=218, avg= 2.65, stdev= 3.55
lat (msec): min=1, max=218, avg= 2.65, stdev= 3.55
clat percentiles (usec):
| 1.00th=[ 1320], 5.00th=[ 1416], 10.00th=[ 1480], 20.00th=[ 1576],
| 30.00th=[ 1688], 40.00th=[ 1816], 50.00th=[ 1976], 60.00th=[ 2160],
| 70.00th=[ 2352], 80.00th=[ 2640], 90.00th=[ 3216], 95.00th=[ 4448],
| 99.00th=[18048], 99.50th=[23936], 99.90th=[40192], 99.95th=[50432],
| 99.99th=[112128]
bw (KB /s): min= 535, max= 2896, per=10.01%, avg=1511.18, stdev=398.66
lat (msec) : 2=51.22%, 4=42.78%, 10=3.18%, 20=2.06%, 50=0.71%
lat (msec) : 100=0.04%, 250=0.02%
cpu : usr=0.19%, sys=0.86%, ctx=3786366, majf=0, minf=792
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=3773454/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=14740MB, aggrb=15093KB/s, minb=15093KB/s, maxb=15093KB/s, mint=1000017msec, maxt=1000017msec
Asynchronous I/O (sequential write) test
# fio -filename=/data/mycephfs/dlw7 -direct=1 -iodepth 1 -thread -rw=write -ioengine=libaio -bs=4k -size=50G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
mytest: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.8
Starting 10 threads
mytest: Laying out IO file(s) (1 file(s) / 51200MB)
Jobs: 10 (f=10): [W(10)] [100.0% done] [0KB/16056KB/0KB /s] [0/4014/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=10): err= 0: pid=67683: Mon Apr 25 12:15:42 2016
write: io=14079MB, bw=14417KB/s, iops=3604, runt=1000002msec
slat (msec): min=1, max=257, avg= 2.77, stdev= 3.63
clat (usec): min=0, max=127, avg= 1.61, stdev= 0.72
lat (msec): min=1, max=257, avg= 2.77, stdev= 3.63
clat percentiles (usec):
| 1.00th=[ 1], 5.00th=[ 1], 10.00th=[ 1], 20.00th=[ 1],
| 30.00th=[ 1], 40.00th=[ 1], 50.00th=[ 2], 60.00th=[ 2],
| 70.00th=[ 2], 80.00th=[ 2], 90.00th=[ 2], 95.00th=[ 2],
| 99.00th=[ 3], 99.50th=[ 3], 99.90th=[ 11], 99.95th=[ 12],
| 99.99th=[ 16]
bw (KB /s): min= 508, max= 2952, per=10.01%, avg=1443.68, stdev=420.95
lat (usec) : 2=42.67%, 4=56.91%, 10=0.27%, 20=0.15%, 50=0.01%
lat (usec) : 100=0.01%, 250=0.01%
cpu : usr=0.22%, sys=0.88%, ctx=3627422, majf=0, minf=919
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=3604209/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=14079MB, aggrb=14416KB/s, minb=14416KB/s, maxb=14416KB/s, mint=1000002msec, maxt=1000002msec
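Note that the asynchronous run is no faster than the synchronous one. With iodepth=1, libaio can keep only a single I/O in flight per job, so it behaves essentially like synchronous I/O; the latency simply shifts from clat to slat in the report (compare avg slat 2.77 ms here with avg clat 2.65 ms in the psync run). A sketch of a variant that would actually exercise asynchronous queueing, using the same file path but a deeper queue:
# fio -filename=/data/mycephfs/dlw7 -direct=1 -iodepth 16 -thread -rw=write -ioengine=libaio -bs=4k -size=50G -numjobs=10 -runtime=1000 -group_reporting -name=mytest_qd16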
Raw Disk Performance Tests
For comparison with CephFS, the same tests were run against a single disk. To keep the comparison realistic, the disk chosen is one of the disks backing an OSD, and the tests run in a directory on that disk (see the check below).
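One way to confirm which block device actually backs the OSD directory (device names will differ from host to host):
# df -h /var/lib/ceph/osd/ceph-4
# mount | grep ceph-4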
Random read test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=10G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
read : io=5792.4MB, bw=5931.6KB/s, iops=370, runt=1000043msec
Sequential read test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw2 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
read : io=29320MB, bw=30021KB/s, iops=1876, runt=1000079msec
Random write test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw3 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=4k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest_4k_10G_randwrite
write: io=1346.2MB, bw=1378.4KB/s, iops=344, runt=1000090msec
Sequential write test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw4 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
write: io=16604MB, bw=17002KB/s, iops=1062, runt=1000012msec
Mixed random read/write test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw5 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=10G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
read : io=520432KB, bw=5200.5KB/s, iops=325, runt=100074msec
write: io=209632KB, bw=2094.8KB/s, iops=130, runt=100074msec
Synchronous I/O (sequential write) test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw6 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=50G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
write: io=5194.7MB, bw=5319.2KB/s, iops=1329, runt=1000050msec
Asynchronous I/O (sequential write) test
# fio -filename=/var/lib/ceph/osd/ceph-4/disktest/dlw7 -direct=1 -iodepth 1 -thread -rw=write -ioengine=libaio -bs=4k -size=50G -numjobs=10 -runtime=1000 -group_reporting -name=mytest
write: io=6137.1MB, bw=6285.7KB/s, iops=1571, runt=1000029msec
Test Parameter Reference

| Parameter | Description |
| --- | --- |
| filename=/data/mycephfs/dlw1 | Test file path; point it at the directory under test. |
| direct=1 | Bypass the OS page cache (O_DIRECT) so results reflect the storage rather than memory. |
| rw=randwrite | Random write workload. |
| rw=randrw | Mixed random read and write workload. |
| bs=16k | Each I/O uses a 16 KB block. |
| bsrange=512-2048 | Like bs, but specifies a range of block sizes. |
| size=10G | Each job transfers 10 GB of I/O in total. |
| numjobs=30 | Run 30 concurrent jobs (threads, given -thread). |
| runtime=1000 | Run for 1000 seconds; if omitted, fio runs until size has been fully transferred. |
| ioengine=psync | Use the psync (pread/pwrite) I/O engine. |
| rwmixread=70 | In mixed read/write mode, reads make up 70% of the I/O (the option used by the mixed tests above). |
| group_reporting | Aggregate per-job statistics into one summary in the report. |
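The individual runs above can be wrapped in a small driver script. The sketch below replays the four basic workloads against CephFS; for brevity it fixes one block size and job count, whereas the runs above vary these per workload, and the bench_$rw file names are made up for this example:
#!/bin/bash
# Sketch: replay the basic CephFS workloads from this report in sequence.
DIR=/data/mycephfs
for rw in randread read randwrite write; do
    fio -filename=$DIR/bench_$rw -direct=1 -iodepth 1 -thread \
        -rw=$rw -ioengine=psync -bs=16k -size=10G -numjobs=30 \
        -runtime=1000 -group_reporting -name=bench_$rw
done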
Summary
Field descriptions:
io: total amount of I/O transferred
bw: average I/O bandwidth
runt: thread runtime
iops: I/O operations per second
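These fields can be pulled straight out of a saved fio log with standard tools; for example, assuming the output was redirected to result.log (a hypothetical file name):
# grep -E 'read :|write:' result.log
read : io=102400MB, bw=219746KB/s, iops=13734, runt=477176msec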
CephFS results:

| | Random read | Sequential read | Random write | Sequential write | Mixed random read | Mixed random write | Sync I/O (seq write) | Async I/O (seq write) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IO (MB) | 102400 | 223062 | 2893.7 | 74165 | 2307.8 | 981.72 | 14740 | 14079 |
| BW (KB/s) | 219746 | 228390 | 2962.3 | 75942 | 23428 | 9966.5 | 15094 | 14417 |
| Runt (s) | 477 | 1000 | 1000 | 1000 | 100 | 100 | 1000 | 1000 |
| IOPS | 13734 | 14274 | 740 | 4746 | 1464 | 622 | 3773 | 3604 |
Raw disk results:

| | Random read | Sequential read | Random write | Sequential write | Mixed random read | Mixed random write | Sync I/O (seq write) | Async I/O (seq write) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IO (MB) | 5792.4 | 29320 | 1346.2 | 16604 | 508 | 204 | 5194.7 | 6137.1 |
| BW (KB/s) | 5931.6 | 30021 | 1378.4 | 17002 | 5200.5 | 2094.8 | 5319.2 | 6285.7 |
| Runt (s) | 1000 | 1000 | 1000 | 1000 | 100 | 100 | 1000 | 1000 |
| IOPS | 370 | 1876 | 344 | 1062 | 325 | 130 | 1329 | 1571 |
Comparing the two tables: CephFS completed the full 100 GB random-read workload in 477 s, while the raw disk could not finish within the 1000 s limit, and CephFS delivered more than 37 times the IOPS. For sequential reads, neither finished within 1000 s, but CephFS achieved about 7.6 times the raw disk's IOPS. Random-write IOPS were low in both cases, though CephFS still managed more than twice the raw disk's figure, and for sequential writes CephFS reached roughly 4.5 times the raw disk's IOPS. Gains of this kind are expected: CephFS stripes I/O across all eight OSDs on both servers (and benefits from their page caches), while the raw-disk test is bound by a single spindle.
Source: oschina
Link: https://my.oschina.net/u/2731030/blog/665980