Start up a 3-node Kubernetes cluster.
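One way to start one locally is with kind, using a three-node config along these lines (the file name and node layout here are only an example):

# kind-config.yaml -- one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
# create the cluster with: kind create cluster --config kind-config.yaml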
Run this in a shell to see what's happening in the cluster:
watch -t -n1 'kubectl get pods -n rook -o wide && echo && echo && kubectl get pods -o wide && echo && echo && kubectl get pvc'
Title: Building a Storage Cluster with Kubernetes
Modern software storage systems are inherently complex. They are composed of numerous distributed components, require
careful balancing of resources, and have stringent performance requirements. If you're running your applications in a
public cloud you're typically shielded from this complexity and can utilize managed storage services like EBS, S3 and EFS.
If you're running on-premises, however, your choices are quite limited and typically result in using traditional big-iron
storage systems.
In this talk we'll walk through how we've built a production-ready storage cluster using Kubernetes. Storage nodes run as
pods and enumerate the available storage devices within the cluster. We'll explore how to optimize the network through
Taming the complexity of Ceph by running it on Kubernetes
Ceph is awesome! It's open, resilient, scalable, and is powering some of the world's largest OpenStack clusters. Yet despite
its success, Ceph remains complex and requires a team of devops ninjas and storage experts to run it successfully.
In this talk we'll walk through how to simplify Ceph by running it on top of Kubernetes. We'll show how to use
the power of Kubernetes to effectively deploy, manage, scale, and fail over production-ready Ceph clusters. We introduce
a new "operator" for Ceph that deeply integrates with the Kubernetes API and automates the management of Ceph.
Title: Kubernetes-based alternatives to EBS block storage in AWS
If you're running a Kubernetes cluster in AWS it's a no-brainer to use EBS for persistent pod storage, right? If we introduce another option, the answer might not be so obvious.
EBS has a number of issues: it's slow for high-IOPS workloads even with IOPS-optimized volumes, detaching and re-attaching volumes on pod restarts takes forever and does not work across availability zones, and EBS snapshots are slow and interfere with I/O traffic on the volumes.
In this talk we'll walk through how to run Kubernetes on AWS without using EBS. We will bootstrap an independent storage cluster that runs on top of Kubernetes and exposes Persistent Volumes to other pods. The storage cluster will use instance storage on EC2 instances as backing storage. It can take advantage of a new class of instances that are NVMe-based for high-IOPS workloads. Finally, given that you're already paying for EC2 instances there is a sunk-cost argument for leveraging the instance storage.
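To make that concrete, pods would consume the in-cluster storage through a StorageClass and ordinary PersistentVolumeClaims rather than EBS volumes. The provisioner, pool, and class names below are illustrative of Rook's early block storage support and are assumptions, not a fixed API:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block   # provisioner served by the in-cluster storage system
parameters:
  pool: replicapool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi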
# Sum the aggregated log-entry counts for the rook organization on quay.io since 6/1/2017 (XXX is a quay.io API token)
curl -s -G -H 'Authorization: Bearer XXX' "https://quay.io/api/v1/organization/rook/aggregatelogs" --data "starttime=6/1/2017" | jq '.aggregated[].count' | awk '{s+=$1} END {print s}'
## Major Themes
Rook v0.5 is a milestone release that improves reliability, adds support for newer versions of Kubernetes, picks up the latest stable release of Ceph (Luminous), and makes a number of architectural changes that pave the way to getting to Beta and adding support for other storage back-ends beyond Ceph.
## Attention needed
Rook does not yet support upgrading a cluster in place. To upgrade from 0.4 to 0.5, we recommend you tear down your cluster and install Rook 0.5 fresh.
We now publish the Rook containers to quay.io and Docker Hub. Docker Hub supports multi-arch containers, so a simple `docker pull rook/rook` will pull the right image for any of the supported architectures. We will continue to publish to quay.io for continuity.
==== building the cross container (this could take minutes the first time)
=== installing helm
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/repository
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/repository/cache
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/repository/local
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/plugins
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/starters
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/cache/archive
Creating /home/rook/go/src/github.com/rook/rook/.cache/helm/repository/repositories.yaml
# This is the canonical stack definition. I believe this replaces our current `Stack` CRD.
# `StackInstall` and `ClusterStackInstall` remain and are where to fetch the `StackDefinition` itself.
# `StackDefinition` can be created directly by users without the need for a `StackInstall` or `ClusterStackInstall`?
apiVersion: stacks.crossplane.io/v1alpha1
kind: StackDefinition
metadata:
  name: wordpress-stack
spec:
  # this is metadata about the stack.
  # NOTE: this part is similar to https://github.com/kubernetes-sigs/application#application-objects
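Since the notes above say a `StackInstall` (or `ClusterStackInstall`) is where the `StackDefinition` is fetched, a minimal sketch of that companion resource might look like the following; the package reference and field names are assumptions for illustration:

apiVersion: stacks.crossplane.io/v1alpha1
kind: StackInstall
metadata:
  name: wordpress-stack
spec:
  # package (image) that carries the StackDefinition to install -- hypothetical reference
  package: crossplane/stack-wordpress:latest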
# A composite resource definition aggregates one or more child
# resources into a single higher level abstraction. The abstraction
# is defined by a CRD and can be consumed as a standard resource. Child
# resources can themselves be composite, enabling a hierarchy.
apiVersion: core.crossplane.io/v1alpha1
kind: CompositeResourceDefinition
metadata:
  name: private-mysql-server
spec:
  # Each CompositeResourceDefinition references exactly one CRD that
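Once such a definition is installed, the abstraction is consumed like any other resource. A hypothetical claim against the example above (the `PrivateMySQLServer` kind and its fields are assumptions, not part of the snippet):

apiVersion: core.crossplane.io/v1alpha1
kind: PrivateMySQLServer   # hypothetical kind defined by the CRD the definition references
metadata:
  name: wordpress-db
spec:
  engineVersion: "5.7"     # illustrative fields only
  storageGB: 20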