Homelab – Basic VM Disk IO Benchmark

I upgraded my storage server over the last few weeks, and just ran some disk IO benchmarks from my hypervisor.

I am using VMware ESXi as my hypervisor on a dedicated machine. It connects to my TrueNAS SCALE storage server over a 10G iSCSI link with the standard 1500 MTU (no jumbo frames).

Tests are performed with fio; Google provides a simple guideline for VM disk IO benchmarking: https://cloud.google.com/compute/docs/disks/benchmarking-pd-performance
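All of the commands below assume $TEST_DIR points at a directory on the disk under test. A minimal setup sketch (the placeholder path here is an example; the sessions below used /Data/fiotest):

```shell
# $TEST_DIR should live on the disk you want to benchmark.
# /tmp/fiotest is only a placeholder; the sessions below used /Data/fiotest.
TEST_DIR=${TEST_DIR:-/tmp/fiotest}
mkdir -p "$TEST_DIR"
```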

Test write throughput by performing sequential writes with multiple parallel streams (8+), using an I/O block size of 1 MB and an I/O depth of at least 64:

sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=8 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
--group_reporting=1
write_throughput: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.19
Starting 8 processes
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
write_throughput: Laying out IO file (1 file / 10240MiB)
Jobs: 7 (f=7): [W(4),_(1),W(3)][52.5%][w=64.9MiB/s][w=64 IOPS][eta 00m:58s]
write_throughput: (groupid=0, jobs=8): err= 0: pid=2502: Thu Jun 16 17:30:06 2022
  write: IOPS=224, BW=229MiB/s (240MB/s)(13.7GiB/61466msec); 0 zone resets
    slat (usec): min=47, max=1918.2k, avg=22289.12, stdev=138620.30
    clat (msec): min=19, max=49855, avg=1843.07, stdev=5712.01
     lat (msec): min=176, max=50052, avg=1865.65, stdev=5811.18
    clat percentiles (msec):
     |  1.00th=[  180],  5.00th=[  184], 10.00th=[  300], 20.00th=[  735],
     | 30.00th=[  776], 40.00th=[  793], 50.00th=[  827], 60.00th=[  860],
     | 70.00th=[  919], 80.00th=[ 1301], 90.00th=[ 1569], 95.00th=[ 2072],
     | 99.00th=[17113], 99.50th=[17113], 99.90th=[17113], 99.95th=[17113],
     | 99.99th=[17113]
   bw (  KiB/s): min=28617, max=1160657, per=100.00%, avg=252403.65, stdev=41662.93, samples=472
   iops        : min=   21, max= 1129, avg=240.85, stdev=40.74, samples=472
  lat (msec)   : 20=0.01%, 100=0.01%, 250=9.83%, 500=0.87%, 750=13.68%
  lat (msec)   : 1000=53.79%, 2000=18.23%, >=2000=5.37%
  cpu          : usr=0.20%, sys=0.38%, ctx=13534, majf=0, minf=467
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.1%, 16=0.6%, 32=1.2%, >=64=98.1%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,13827,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=13.7GiB (14.8GB), run=61466-61466msec

Disk stats (read/write):
    dm-0: ios=0/16134, merge=0/0, ticks=0/12396824, in_queue=12396824, util=99.90%, aggrios=0/16134, aggrmerge=0/1, aggrticks=0/12424846, aggrin_queue=12424845, aggrutil=99.87%
  sda: ios=0/16134, merge=0/1, ticks=0/12424846, in_queue=12424845, util=99.87%
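Note that fio leaves its data files behind after each run (named <jobname>.<jobnumber>.<filenumber>, as seen in the layout messages above). Google's guide recommends removing them between tests; otherwise later runs can be skewed by stale files or run the filesystem out of space. A sketch, reusing the same $TEST_DIR:

```shell
# Remove leftover fio data files between tests. The default file name is
# <jobname>.<jobnumber>.<filenumber>, e.g. write_iops.0.0.
TEST_DIR=${TEST_DIR:-/tmp/fiotest}
rm -f "$TEST_DIR"/write_throughput.* "$TEST_DIR"/write_iops.* \
      "$TEST_DIR"/read_throughput.* "$TEST_DIR"/read_iops.*
```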

Test write IOPS by performing random writes, using an I/O block size of 4 KB and an I/O depth of at least 64:

sudo fio --name=write_iops --directory=$TEST_DIR --size=10G \
--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1
write_iops: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.19
Starting 1 process
fio: io_u error on file /Data/fiotest/write_iops.0.0: No space left on device: write offset=303255552, buflen=4096
fio: pid=2596, err=28/file:io_u.c:1803, func=io_u error, error=No space left on device

write_iops: (groupid=0, jobs=1): err=28 (file:io_u.c:1803, func=io_u error, error=No space left on device): pid=2596: Thu Jun 16 17:32:46 2022
  write: IOPS=60.0k, BW=238MiB/s (250MB/s)(5709MiB/23972msec); 0 zone resets
    slat (usec): min=3, max=36791, avg= 8.45, stdev=39.28
    clat (usec): min=184, max=177493, avg=1040.16, stdev=2205.25
     lat (usec): min=198, max=177499, avg=1048.82, stdev=2206.88
    clat percentiles (usec):
     |  1.00th=[   635],  5.00th=[   717], 10.00th=[   750], 20.00th=[   799],
     | 30.00th=[   832], 40.00th=[   857], 50.00th=[   889], 60.00th=[   922],
     | 70.00th=[   963], 80.00th=[  1020], 90.00th=[  1139], 95.00th=[  1647],
     | 99.00th=[  3294], 99.50th=[  4686], 99.90th=[  9765], 99.95th=[ 15008],
     | 99.99th=[160433]
   bw (  KiB/s): min=82464, max=294088, per=100.00%, avg=245998.02, stdev=67413.87, samples=47
   iops        : min=20616, max=73522, avg=61499.38, stdev=16853.44, samples=47
  lat (usec)   : 250=0.01%, 500=0.01%, 750=9.57%, 1000=67.11%
  lat (msec)   : 2=18.91%, 4=3.75%, 10=0.55%, 20=0.05%, 50=0.02%
  lat (msec)   : 100=0.01%, 250=0.02%
  cpu          : usr=14.15%, sys=45.55%, ctx=156240, majf=0, minf=68
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1461581,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=238MiB/s (250MB/s), 238MiB/s-238MiB/s (250MB/s-250MB/s), io=5709MiB (5987MB), run=23972-23972msec

Disk stats (read/write):
    dm-0: ios=0/1592185, merge=0/0, ticks=0/1429699, in_queue=1429699, util=99.56%, aggrios=0/1591823, aggrmerge=0/362, aggrticks=0/1422162, aggrin_queue=1422162, aggrutil=99.56%
  sda: ios=0/1591823, merge=0/362, ticks=0/1422162, in_queue=1422162, util=99.56%
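The err=28 above is ENOSPC: the job asked for --size=10G but the filesystem filled up mid-run, so the reported numbers cover a shortened ~24 s run rather than the full 60 s. A quick pre-flight space check avoids this (a sketch; the placeholder path is an example):

```shell
# Verify $TEST_DIR has room for the fio data file before starting a run.
TEST_DIR=${TEST_DIR:-/tmp/fiotest}
mkdir -p "$TEST_DIR"
NEED_KB=$((10 * 1024 * 1024))                        # --size=10G, in KiB
AVAIL_KB=$(df -Pk "$TEST_DIR" | awk 'NR==2 {print $4}')  # POSIX df: field 4 = available
if [ "$AVAIL_KB" -lt "$NEED_KB" ]; then
  echo "Only ${AVAIL_KB} KiB free in $TEST_DIR; need ${NEED_KB} KiB" >&2
fi
```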

Test read throughput by performing sequential reads with multiple parallel streams (8+), using an I/O block size of 1 MB and an I/O depth of at least 64:

sudo fio --name=read_throughput --directory=$TEST_DIR --numjobs=8 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \
--group_reporting=1
read_throughput: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.19
Starting 8 processes
Jobs: 2 (f=2): [_(1),R(2),_(5)][100.0%][r=14.0GiB/s][r=15.3k IOPS][eta 00m:00s]
read_throughput: (groupid=0, jobs=8): err= 0: pid=1881: Thu Jun 16 17:48:29 2022
  read: IOPS=21.4k, BW=20.9GiB/s (22.4GB/s)(1259GiB/60292msec)
    slat (usec): min=13, max=76074, avg=197.95, stdev=422.19
    clat (nsec): min=1404, max=693868k, avg=23722148.51, stdev=59989033.56
     lat (usec): min=86, max=693925, avg=23920.38, stdev=60025.44
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   12], 10.00th=[   12], 20.00th=[   12],
     | 30.00th=[   12], 40.00th=[   13], 50.00th=[   13], 60.00th=[   13],
     | 70.00th=[   13], 80.00th=[   13], 90.00th=[   13], 95.00th=[   14],
     | 99.00th=[  342], 99.50th=[  347], 99.90th=[  384], 99.95th=[  414],
     | 99.99th=[  456]
   bw (  MiB/s): min=18584, max=26532, per=100.00%, avg=21483.09, stdev=176.33, samples=954
   iops        : min=18579, max=26529, avg=21478.41, stdev=176.35, samples=954
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.07%, 20=96.31%, 50=0.02%
  lat (msec)   : 250=0.01%, 500=3.57%, 750=0.01%
  cpu          : usr=0.82%, sys=49.54%, ctx=41872, majf=0, minf=466
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1288207,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=20.9GiB/s (22.4GB/s), 20.9GiB/s-20.9GiB/s (22.4GB/s-22.4GB/s), io=1259GiB (1351GB), run=60292-60292msec

Disk stats (read/write):
    dm-0: ios=47642/87, merge=0/0, ticks=15825386/36345, in_queue=15861731, util=100.00%, aggrios=47652/75, aggrmerge=1189/12, aggrticks=15817689/23298, aggrin_queue=15840987, aggrutil=100.00%
  sda: ios=47652/75, merge=1189/12, ticks=15817689/23298, in_queue=15840987, util=100.00%
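For context, 20.9 GiB/s is far beyond what a 10GbE iSCSI link can physically carry, so these sequential reads were almost certainly served from a cache somewhere in the stack (ESXi or the TrueNAS ARC) rather than from the wire, even with --direct=1 set in the guest. The line-rate ceiling, ignoring protocol overhead:

```shell
# 10 Gbit/s expressed in bytes per second (before iSCSI/TCP/IP overhead).
echo "$((10 * 1000 * 1000 * 1000 / 8)) B/s"   # prints 1250000000 B/s, ~1.16 GiB/s
```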

Test read IOPS by performing random reads, using an I/O block size of 4 KB and an I/O depth of at least 64:

sudo fio --name=read_iops --directory=$TEST_DIR --size=10G \
--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.19
Starting 1 process
read_iops: Laying out IO file (1 file / 10240MiB)
fio: ENOSPC on laying out file, stopping
Jobs: 1 (f=1): [r(1)][100.0%][r=1061MiB/s][r=272k IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=1913: Thu Jun 16 17:50:33 2022
  read: IOPS=271k, BW=1057MiB/s (1108MB/s)(61.9GiB/60001msec)
    slat (nsec): min=1569, max=697200, avg=2087.86, stdev=1413.17
    clat (usec): min=2, max=11769, avg=233.84, stdev=26.11
     lat (usec): min=5, max=11886, avg=236.04, stdev=26.43
    clat percentiles (usec):
     |  1.00th=[  182],  5.00th=[  204], 10.00th=[  217], 20.00th=[  225],
     | 30.00th=[  225], 40.00th=[  227], 50.00th=[  229], 60.00th=[  233],
     | 70.00th=[  237], 80.00th=[  243], 90.00th=[  260], 95.00th=[  273],
     | 99.00th=[  330], 99.50th=[  359], 99.90th=[  441], 99.95th=[  562],
     | 99.99th=[  816]
   bw (  MiB/s): min=  946, max= 1085, per=100.00%, avg=1058.05, stdev=22.12, samples=119
   iops        : min=242382, max=277826, avg=270859.52, stdev=5662.33, samples=119
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
  lat (usec)   : 250=85.50%, 500=14.43%, 750=0.05%, 1000=0.02%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=48.33%, sys=51.25%, ctx=281, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=16234920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1057MiB/s (1108MB/s), 1057MiB/s-1057MiB/s (1108MB/s-1108MB/s), io=61.9GiB (66.5GB), run=60001-60001msec

Disk stats (read/write):
    dm-0: ios=2831/16, merge=0/0, ticks=1340/15, in_queue=1355, util=6.47%, aggrios=2833/10, aggrmerge=0/6, aggrticks=1338/9, aggrin_queue=1346, aggrutil=6.45%
  sda: ios=2833/10, merge=0/6, ticks=1338/9, in_queue=1346, util=6.45%

Homelab – Backplane for SUPERMICRO SC846 Chassis, the Buying Guide

Title Photo by Magnus Engø on Unsplash

The Supermicro SC846 is a great chassis for homelab users, offering 24 3.5-inch hard drive bays with a variety of backplane options. Most users buy the chassis second-hand and then want to swap the backplane because the stock one doesn't fit their needs. Below I list all backplanes available for the SC846 as of April 2020, along with my recommendations for different use cases.