# Test for CVE-2014-7186: a vulnerable bash crashes while parsing the stacked here-docs (redir_stack overflow), so the || branch fires.
bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"
fio-2.2.12-10-gb9c8
Starting 16 processes
seq-read: Laying out IO file(s) (1 file(s) / 8192MB)
Jobs: 4 (f=4): [_(12),w(4)] [37.7% done] [0KB/81380KB/0KB /s] [0/20.4K/0 iops] [eta 05m:48s]
seq-read: (groupid=0, jobs=4): err= 0: pid=41643: Wed Dec 2 12:43:00 2015
  read : io=32768MB, bw=779882KB/s, iops=194970, runt= 43025msec
    slat (usec): min=0, max=659, avg= 4.40, stdev= 5.99
    clat (usec): min=4, max=17736, avg=625.58, stdev=308.24
     lat (usec): min=11, max=17739, avg=629.98, stdev=307.95
    clat percentiles (usec):
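
For reading the numbers above: fio's slat is submission latency (the time to get the I/O issued), clat is completion latency (submission to completion), and lat is the sum of the two, so with an asynchronous engine at high iodepth clat dominates.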
Disk latency to the device, with a MySQL slave running to provide background I/O.
The SSDs are sd{5,6,7,8}; sd{0,1,2,3} are hard drives.
Times are in microseconds and the aggregation is per device and per operation type (READ/WRITE).
For reference, the Intel datasheet http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-s3710-spec.html
states that reads should return from the device in 55 microseconds, and writes in 66 microseconds, for 4KB block sizes.
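
The diskiolat.d script itself is not included in the gist; the following is only a rough sketch of the same measurement using the DTrace io provider on illumos (per-device, per-operation latency in microseconds), not the actual script:

# Sketch: aggregate block I/O latency per device and READ/WRITE,
# keyed on the buf pointer (arg0) between io:::start and io:::done.
dtrace -n '
io:::start { start[arg0] = timestamp; }

io:::done /start[arg0]/ {
    @[args[1]->dev_statname,
      args[0]->b_flags & B_READ ? "READ" : "WRITE"] =
        quantize((timestamp - start[arg0]) / 1000);
    start[arg0] = 0;
}'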
[root@phy9-sw1-gc (gc) /opt/debug]# ./diskiolat.d
pci1028,1f49 (pciex1000,5d) [LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader]], instance #0 (driver name: mr_sas)

Job Config

[global]
bs=4k
ioengine=posixaio
iodepth=32
size=8g
runtime=60
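
Only the [global] section is shown above; a sketch of how the full job file might have looked and been run (the [seq-read] section, rw=read, and numjobs=4 are guesses based on the seq-read output above, not the original job file):

cat > bench.fio <<'EOF'
[global]
bs=4k
ioengine=posixaio
iodepth=32
size=8g
runtime=60

[seq-read]
rw=read
numjobs=4
EOF
fio bench.fio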

Benchmark

Using fio from https://github.com/axboe/fio

Tests are run in a zone, and the file system being written to has primarycache and secondarycache set to none

zfs get primarycache,secondarycache /ssd/79b49b18-9111-4020-a0a5-2f96364b01e1
NAME                                      PROPERTY        VALUE           SOURCE
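
For reference, getting a dataset into this state looks something like the following (dataset name taken from the zfs get invocation above):

zfs set primarycache=none ssd/79b49b18-9111-4020-a0a5-2f96364b01e1
zfs set secondarycache=none ssd/79b49b18-9111-4020-a0a5-2f96364b01e1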
Job Config
[global]
bs=4k
ioengine=posixaio
iodepth=4
size=1g
runtime=60
group_reporting=1
numjobs=8
directory=/var/tmp
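
With numjobs=8 and group_reporting=1, fio clones the job eight times and reports a single aggregated summary for the group rather than per-process results.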
Job Configuration:
[global]
bs=4k
ioengine=solarisaio
iodepth=4
size=24g
runtime=60
fallocate=none
clocksource=clock_gettime
group_reporting=1
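
solarisaio is fio's native Solaris/illumos asynchronous I/O engine (built on aioread/aiowrite); fallocate=none is presumably set because ZFS does not support fallocate-style preallocation, so fio skips it when laying out the files.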
# Enable LZ4 on the root delegated dataset, so incoming zfs send | recv
# operations inherit it.
zfs { $delegated_root: compression => 'lz4' }

zfs { $mysql_dataset:
  atime        => 'off',      # disable atime (it probably is for the entire pool, but let's be sure!)
  compression  => 'lz4',      # http://don.blogs.smugmug.com/2008/10/13/zfs-mysqlinnodb-compression-update/
  primarycache => 'metadata', # caching data in the ZFS ARC would just double the InnoDB buffer pool in memory, so let InnoDB do its thing.
  devices      => 'off',
  exec         => 'off',
}
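
A quick way to confirm the Puppet-managed properties took effect (tank/mysql below is a placeholder for whatever $mysql_dataset resolves to):

zfs get atime,compression,primarycache,devices,exec tank/mysql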
[root@test /var/svc/log]# cat system-manifest-import\:default.log
[ Nov 5 17:41:11 Enabled. ]
[ Nov 5 17:41:11 Executing start method ("/lib/svc/method/manifest-import"). ]
[ Nov 5 17:41:11 Timeout override by svc.startd. Using infinite timeout. ]
+ activity=false
+ EMI_SERVICE=svc:/system/early-manifest-import:default
+ PROFILE_DIR_SITE=/etc/svc/profile/site
+ X=''
+ ALT_REPOSITORY=''
+ ALT_MFST_DIR=''
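
The lines prefixed with + are shell trace (set -x style) output from /lib/svc/method/manifest-import as it initialises its variables.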