Kyle Bader (mmgaggle)
cluster:
  head: "cbt"
  clients: ["client1", "client2", "client3"]
  osds: ["osd1", "osd2", "osd3"]
  mons: ["mon1", "mon2", "mon3"]
  osds_per_node: 1
  fs: xfs
  mkfs_opts: -f -i size=2048 -n size=64k
  mount_opts: -o inode64,noatime,logbsize=256k
  conf_file: /etc/ceph/ceph.conf
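A hedged reading of the fs/mkfs_opts/mount_opts fields: a sketch, assuming a harness like CBT applies them roughly as the commands below. The device path and mountpoint are illustrative assumptions, not taken from the config.

# Sketch: how fs/mkfs_opts/mount_opts above plausibly translate into the
# mkfs/mount invocations run on each OSD node.
import shlex, subprocess

fs = "xfs"
mkfs_opts = "-f -i size=2048 -n size=64k"
mount_opts = "-o inode64,noatime,logbsize=256k"
device, mountpoint = "/dev/sdb", "/var/lib/ceph/osd/ceph-0"  # assumed values

subprocess.check_call(["mkfs.%s" % fs] + shlex.split(mkfs_opts) + [device])
subprocess.check_call(["mount"] + shlex.split(mount_opts) + ["-t", fs, device, mountpoint])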
diff --git a/benchmark/rbdfio.py b/benchmark/rbdfio.py
index 6b6f1e2..c2e5d8a 100644
--- a/benchmark/rbdfio.py
+++ b/benchmark/rbdfio.py
@@ -76,7 +76,7 @@ class RbdFio(Benchmark):
         # populate the fio files
         logger.info('Attempting to populating fio files...')
-        pre_cmd = 'sudo %s --ioengine=%s --rw=write --numjobs=%s --bs=4M --size %dM %s > /dev/null' % (self.cmd_path, self.ioengine, self.numjobs, self.vol_size*0.9, self.names)
+        pre_cmd = 'sudo %s --ioengine=%s --rw=write --numjobs=%s --bs=4M --size %dM %s > /dev/null' % (self.cmd_path, self.ioengine, self.numjobs, self.vol_size*0.9/self.concurrent_procs, self.names)
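The intent of the one-line change above, sketched with assumed example values: each of the concurrent fio processes should pre-fill only its share of 90% of the volume, so together they write the volume once instead of N times over.

# Sketch of the corrected pre-fill size math from the patch; vol_size and
# concurrent_procs are assumed example values, not from the gist.
vol_size = 65536         # MB of RBD volume
concurrent_procs = 4     # concurrent fio processes
per_job = vol_size * 0.9 / concurrent_procs
print("each fio job pre-fills %dM" % per_job)  # -> each fio job pre-fills 14745M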
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#ifndef O_DIRECT
#define O_DIRECT 00040000ULL
#endif
#define O_ATOMIC 040000000ULL /* proposed atomic-write flag, not in glibc headers */
int main()
{
    /* open a scratch file with direct + atomic I/O flags */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT | O_ATOMIC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);
    return 0;
}
#!/usr/local/bin/Rscript
library(zoo)
# merge every CSV time series under mypath into a single zoo object
multmerge <- function(mypath) {
  filenames <- list.files(path = mypath, full.names = TRUE)
  datalist <- lapply(filenames, function(x) { read.zoo(file = x, sep = ",", header = TRUE) })
  Reduce(function(x, y) { merge(x, y) }, datalist)
}
merged <- multmerge("merge/merge")
#!/usr/local/bin/Rscript
# per file: mean of column 49 for each thread (column 3), summed across threads,
# using only rows past the 60s ramp (column 50 >= 60000)
filenames <- list.files("./", pattern = "^output", full.names = TRUE)
files <- lapply(filenames, function(filename) { read.csv(file = filename, sep = ";", stringsAsFactors = FALSE, skip = 4) })
post_ramp_subsets <- lapply(files, function(file) { subset(file, file[, 50] >= 60000) })
results <- lapply(post_ramp_subsets, function(thread_subset) {
  threads <- unique(thread_subset[, 3])
  thread_subsets <- lapply(threads, function(thread) { subset(thread_subset, thread_subset[, 3] == thread) })
  means <- lapply(thread_subsets, function(thread_subset) { mean(thread_subset[, 49]) })
  Reduce("+", means)
})
print(results)
#!/usr/local/bin/Rscript
printf <- function(...) invisible(print(sprintf(...)))
filenames <- list.files("./", pattern = "^output", full.names = TRUE)
files <- lapply(filenames, function(filename) { read.csv(file = filename, sep = ";", stringsAsFactors = FALSE, skip = 4) })
post_ramp_subsets <- lapply(files, function(file) { subset(file, file[, 50] >= 60000) })
# per file: mean of column 49 for each thread (column 3), past the 60s ramp
thread_iops_means <- lapply(post_ramp_subsets, function(thread_subset) {
  threads <- unique(thread_subset[, 3])
  thread_subsets <- lapply(threads, function(thread) { subset(thread_subset, thread_subset[, 3] == thread) })
  sapply(thread_subsets, function(s) { mean(s[, 49]) })
})
printf("per-thread means: %s", paste(unlist(thread_iops_means), collapse = ", "))
-->
--> ==== interactive mode ====
-->
--> follow the prompts to complete the interactive mode
--> if specific actions are required (e.g. just install Calamari)
--> cancel this script with Ctrl-C, and see the help menu for details
--> default values are presented in brackets
--> press Enter to accept a default value, if one is provided
--> this script will setup Calamari, package repo, and ceph-deploy
--> do you want to continue?
[client]
socket = /mnt/mysql/mysql.sock

[server]
table_open_cache = 512
thread_cache_size = 512
QoS for clouds
* The universal scalability law applies to you, too.
* The x value shifts to the left when you have a failure (less capacity).
* The y value shifts up when you have a failure (recovery is effectively a client workload).
* Thus, you need to operate below the point of contention to avoid service degradation upon
  failure (a sketch follows this list).
* To capacity plan for adversarial workloads, provisioned throughput/IOPS needs to be a function
  of the amount of provisioned storage.
* Total throughput per unit of storage should be derived from a value far enough below the point
  of contention to compensate for failures.
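A minimal sketch of the point-of-contention argument, using Gunther's Universal Scalability Law; the sigma/kappa coefficients, unit rate, and the 20% failure headroom are assumptions for illustration, not measured values.

import math

# Gunther's USL: throughput at concurrency n, with contention coefficient
# sigma and coherency coefficient kappa.
def usl_throughput(n, lam, sigma, kappa):
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

sigma, kappa, lam = 0.05, 0.002, 1000.0  # assumed coefficients and unit rate
n_peak = math.sqrt((1 - sigma) / kappa)  # concurrency at the point of contention
n_target = 0.8 * n_peak                  # stay 20% below peak to absorb failures

print("peak concurrency: %.1f" % n_peak)
print("target concurrency: %.1f" % n_target)
print("throughput at target: %.0f" % usl_throughput(n_target, lam, sigma, kappa))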
Extend Cinder Volume Type QoS Functionality
===========================================
AWS EBS Provisioned IOPS volumes deliver a deterministic number of IOPS based
on the capacity of the provisioned volume. Similarly, the newly announced
throughput-optimized volumes provide deterministic throughput based on
provisioned capacity. Cinder should, in addition to the current per-volume
maximums, be able to set QoS limits that scale with the provisioned capacity.
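A minimal sketch of what capacity-derived QoS could look like, assuming EBS-style ratios (e.g. 3 IOPS per GB); the function name, ratios, and bounds are illustrative assumptions, though total_iops_sec and total_bytes_sec are existing Cinder QoS spec keys.

# Hedged sketch: derive per-volume QoS limits from provisioned capacity.
# Ratios and bounds are assumed, EBS-like values, not a Cinder default.
def capacity_scaled_qos(size_gb, iops_per_gb=3, min_iops=100, max_iops=20000,
                        mbps_per_gb=0.25, max_mbps=320):
    iops = min(max(size_gb * iops_per_gb, min_iops), max_iops)
    mbps = min(size_gb * mbps_per_gb, max_mbps)
    return {"total_iops_sec": int(iops), "total_bytes_sec": int(mbps) * 1024 ** 2}

print(capacity_scaled_qos(500))  # {'total_iops_sec': 1500, 'total_bytes_sec': 131072000}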