DigitalOcean's disk performance has gotten an order of magnitude worse. Compare the following test results with the ones I ran last year, when they started to support Virtio:
- DigitalOcean 512 (February 2013): https://gist.github.com/kenn/4741999
- DigitalOcean 512 with Virtio (February 2013): https://gist.github.com/kenn/4742470
DO has probably started to throttle I/O on the cheaper droplets; either way, the results are poor overall.
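For reference, here is a minimal sketch of how runs like these can be repeated. The device path and the five-run loop are my own choices, inferred from the output below; it assumes hdparm and bonnie++ are installed and that you run it as root.

```
#!/bin/sh
# Minimal repro sketch. Run as root.
# hdparm: -t times buffered disk reads, -f syncs and flushes the buffer cache.
DEV=/dev/disk/by-label/DOROOT    # on Linode this was /dev/root; adjust for your system

for i in 1 2 3 4 5; do           # repeat to smooth out run-to-run variance
    hdparm -tf "$DEV"
done

# bonnie++: -b disables write buffering (fsync after every write),
# -u root runs it as root; test files go in the current directory.
bonnie++ -b -u root
```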
```
# hdparm -tf /dev/disk/by-label/DOROOT

/dev/disk/by-label/DOROOT:
 Timing buffered disk reads:  132 MB in 3.00 seconds =  44.00 MB/sec
 Timing buffered disk reads:  634 MB in 3.01 seconds = 210.90 MB/sec
 Timing buffered disk reads:  904 MB in 3.02 seconds = 299.42 MB/sec
 Timing buffered disk reads: 1148 MB in 3.00 seconds = 382.39 MB/sec
 Timing buffered disk reads: 1000 MB in 3.00 seconds = 332.94 MB/sec

# bonnie++ -b -u root

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
eme-staging-test 1G   711  96  5753   1 22303   3  3138  98 726778  33 11446 164
Latency             32335us    136s    4698ms   16416us   45819us     435ms
Version  1.96       ------Sequential Create------ --------Random Create--------
eme-staging-test    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   808   4 +++++ +++  1055   6  1192   6 +++++ +++   811   3
Latency              2100ms     627us     901ms     622ms      31us    2970ms
1.96,1.96,eme-staging-test,1,1400509579,1G,,711,96,5753,1,22303,3,3138,98,726778,33,11446,164,16,,,,,808,4,+++++,+++,1055,6,1192,6,+++++,+++,811,3,32335us,136s,4698ms,16416us,45819us,435ms,2100ms,627us,901ms,622ms,31us,2970ms
```
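The trailing comma-separated line is bonnie++'s machine-readable result for the same run. If you prefer a readable table, bonnie++ ships bon_csv2txt (and bon_csv2html); assuming that helper is on your PATH, quiet mode sends only the CSV to stdout, so it can be piped straight through:

```
# -q: machine-readable CSV goes to stdout (human-readable text goes to stderr)
bonnie++ -q -b -u root | bon_csv2txt
```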
Compare it with Linode, which just introduced SSD-based instances. Linode is a lot faster, even faster than the original DO results.
```
# hdparm -tf /dev/root

/dev/root:
 Timing buffered disk reads: 3042 MB in 3.00 seconds = 1013.74 MB/sec
 Timing buffered disk reads: 2960 MB in 3.00 seconds =  986.54 MB/sec
 Timing buffered disk reads: 2946 MB in 3.00 seconds =  981.88 MB/sec
 Timing buffered disk reads: 3028 MB in 3.00 seconds = 1008.84 MB/sec
 Timing buffered disk reads: 2942 MB in 3.00 seconds =  980.42 MB/sec

# bonnie++ -b -u root

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
li632-240        4G   375  99 709327  98 409634  64  1006  98 1026989  83  9222 109
Latency             25129us    2475us   26406us   34816us    3551us   92975us
Version  1.96       ------Sequential Create------ --------Random Create--------
li632-240           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4184  19 +++++ +++  3494  18  4302  19 +++++ +++  3513  18
Latency              2013us     303us    2027us    1219us      77us    1337us
1.96,1.96,li632-240,1,1400517486,4G,,375,99,709327,98,409634,64,1006,98,1026989,83,9222,109,16,,,,,4184,19,+++++,+++,3494,18,4302,19,+++++,+++,3513,18,25129us,2475us,26406us,34816us,3551us,92975us,2013us,303us,2027us,1219us,77us,1337us
```
Also, here is a test from last year, when Linode still ran on hard drives:
- Linode performance (March 2013): https://gist.github.com/kenn/5191853
DigitalOcean's response:

In a public cloud environment, comparing disk performance is complicated because there are many factors at play. Sometimes it is a noisy-neighbor issue, which requires providers to institute some sort of fair-share policy so that noisy neighbors are weighted down appropriately based on the size of the plan they purchased.

Also, please keep in mind that SSD drives, and clouds that run on them, are relatively new. We were one of the first to go with an all-SSD cloud. Initial performance is always going to be great, simply because each new customer is usually placed on a brand-new server while the cloud is still young. Over time, as customers spread out across the cloud and move from temporary workloads such as testing and development to permanent production workloads, throughput stabilizes at a certain threshold.

Like most providers, we have a fair-share policy in place, but we don't throttle disk performance when throughput is available, so customers can spike in their utilization when the resources are free. If you feel that your disk performance is below where it should be, please open a support ticket so that our support staff can take a look at the hypervisor in question and see whether any changes need to be made.
Thanks,
Moisey
Cofounder, DigitalOcean