= Performance testing =
Kubernetes scalability SLOs:
* 99% of API server calls return in under 1 s
* 99% of pods (with pre-pulled images) start within 5 s
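A minimal sketch for spot-checking the API-server half of these SLOs, assuming a working kubeconfig and the official Python <code>kubernetes</code> client; the sample count and the particular list call are arbitrary choices:

<syntaxhighlight lang="python">
# Sample API server list-call latency and report the 99th percentile.
import time
import statistics
from kubernetes import client, config

config.load_kube_config()          # assumes ~/.kube/config points at the cluster
v1 = client.CoreV1Api()

samples = []
for _ in range(200):               # arbitrary sample count
    start = time.perf_counter()
    v1.list_namespace(limit=10)    # a cheap, representative read call
    samples.append(time.perf_counter() - start)

p99 = statistics.quantiles(samples, n=100)[98]   # 99th percentile
print(f"p99 API latency: {p99 * 1000:.1f} ms (SLO: < 1000 ms)")
</syntaxhighlight>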
== Process performance ==
Containers vs. VMs vs. a plain process on the host: a container adds essentially no CPU overhead over a bare process, whereas a KVM guest consumes a few percent of CPU even when idle.
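One way to sanity-check this is to run the same CPU-bound script bare on the host, in a container, and in a VM and compare wall-clock times; a minimal sketch (the loop size is an arbitrary choice):

<syntaxhighlight lang="python">
# Identical CPU-bound workload; run it on the host, in a container, and in a VM
# and compare elapsed times to estimate virtualization overhead.
import time

def burn(n=50_000_000):
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
burn()
print(f"elapsed: {time.perf_counter() - start:.2f} s")
</syntaxhighlight>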
== Storage performance ==
Docker storage (graph) drivers impose different performance penalties.
The default devicemapper setup (loop-lvm) is not suitable for production use; its performance is poor: http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
https://community.hortonworks.com/articles/87949/docker-storage-drivers-overview.html
Production-ready choices: aufs, devicemapper in direct-lvm mode.
https://docs.docker.com/engine/userguide/storagedriver/selectadriver/#future-proofing
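A crude way to compare graphdrivers is to run the same write-heavy workload in a container under each driver (and once against a bind-mounted volume as a baseline); a sketch where the target path, file count, and file size are arbitrary assumptions:

<syntaxhighlight lang="python">
# Write many small files into the container's writable layer and time it;
# copy-on-write overhead differs noticeably between storage drivers.
import os
import time

TARGET = "/benchdata"              # hypothetical path in the writable layer (or a volume)
os.makedirs(TARGET, exist_ok=True)

start = time.perf_counter()
for i in range(2000):                          # arbitrary file count
    with open(f"{TARGET}/f{i}", "wb") as f:
        f.write(os.urandom(64 * 1024))         # 64 KiB per file
        f.flush()
        os.fsync(f.fileno())                   # force the writes down to the driver
print(f"wrote 2000 x 64 KiB files in {time.perf_counter() - start:.2f} s")
</syntaxhighlight>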
== Scheduling performance ==
Kubemark https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubemark.md
CoreOS - Improving K8s scheduler performance https://coreos.com/blog/improving-kubernetes-scheduler-performance.html
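Besides Kubemark, a quick way to eyeball scheduling latency on a real cluster is to compare each pod's creation timestamp with its PodScheduled condition; a rough sketch assuming the official Python client and the default namespace:

<syntaxhighlight lang="python">
# Scheduling latency = time from pod creation to the PodScheduled condition.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

latencies = []
for pod in v1.list_namespaced_pod("default").items:     # namespace is an assumption
    created = pod.metadata.creation_timestamp
    for cond in (pod.status.conditions or []):
        if cond.type == "PodScheduled" and cond.status == "True":
            latencies.append((cond.last_transition_time - created).total_seconds())

if latencies:
    latencies.sort()
    print(f"pods: {len(latencies)}, "
          f"worst scheduling latency: {latencies[-1]:.2f} s")
</syntaxhighlight>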
=== Etcd tuning ===
* Heartbeat interval: roughly the maximum of the average RTTs between members
* Election timeout: 5-10x the heartbeat interval
* Periodic snapshots: etcd appends every key change to its log; by default it takes a snapshot every 10,000 changes
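A small worked example of the rule of thumb above, turning measured per-member RTTs into values for etcd's <code>--heartbeat-interval</code> and <code>--election-timeout</code> flags (both in milliseconds); the RTT figures here are made-up placeholders:

<syntaxhighlight lang="python">
# Derive heartbeat and election timeout from average RTTs to each peer.
rtt_ms = {                      # assumed example values; replace with real measurements
    "member-a": 1.2,
    "member-b": 8.5,
    "member-c": 11.0,
}

heartbeat = max(rtt_ms.values())     # heartbeat ~ max of the average RTTs
election = 10 * heartbeat            # election timeout: 5-10x heartbeat (10x chosen here)

print(f"--heartbeat-interval={round(heartbeat)} --election-timeout={round(election)}")
</syntaxhighlight>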
etcd over SSL?
== Network performance ==
Depends on the network driver / CNI plugin; by default expect on the order of 10 µs of added redirection latency.
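To put a number on that for a given CNI plugin, a simple UDP echo between two pods gives a usable round-trip figure; a sketch where the port and sample count are arbitrary (run <code>server</code> in one pod and <code>client SERVER_POD_IP</code> in another):

<syntaxhighlight lang="python">
# Pod-to-pod UDP echo to compare round-trip latency across CNI plugins.
import socket
import sys
import time

PORT = 5005                        # arbitrary port

def server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    while True:
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)    # echo back immediately

def client(host, count=1000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(count):
        start = time.perf_counter()
        sock.sendto(b"ping", (host, PORT))
        sock.recvfrom(64)
        rtts.append(time.perf_counter() - start)
    rtts.sort()
    print(f"median RTT: {rtts[len(rtts) // 2] * 1e6:.0f} us, "
          f"p99: {rtts[int(len(rtts) * 0.99)] * 1e6:.0f} us")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
</syntaxhighlight>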