Let's look at an example of how to launch a Kubernetes cluster from scratch on DigitalOcean using kubeadm, with an Nginx Ingress controller and Let's Encrypt certificates.
We'll be creating a four-node cluster (k8s-master, k8s-000...k8s-002), a load balancer, and SSL certificates.
- Install Kubernetes
- Initialize Cluster
- Install CNI
- Create a Simple Service
- Nginx Ingress
- Load Balancer
- Install Helm
- Install Cert-Manager
- Letsencrypt SSL
We're going to install Kubernetes onto four CoreOS servers.
First, create four CoreOS-stable droplets, all in the same region and with your SSH key.
On each of the servers, log in over SSH and install the following software.
CoreOS is set up with core as the primary user, and your SSH key was added to it when the droplet was created, so log in with ssh core@IP_ADDRESS.
Most of these commands require sudo, so start by gaining root privileges with sudo su.
First things first: start up the Docker daemon.
systemctl enable docker && systemctl start docker
Kubernetes requires a container network interface (CNI) add-on to be installed, and most of those add-ons depend on the standard CNI plugins.
CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
Download the official release binaries for kubeadm, kubelet, and kubectl.
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
Download the systemd service files.
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
The kubelet is the primary Kubernetes node agent. Enable and start it.
systemctl enable kubelet && systemctl start kubelet
Kubeadm is a newer tool that initializes a Kubernetes cluster following best practices. Kubeadm is first run on the master, where it prints another command to run on each additional node.
Use kubeadm to initialize a cluster on the private network, passing an address range to use for the pod network (which the CNI add-on will manage).
priv_ip=$(ip -f inet -o addr show eth1 | awk '{print $4}' | cut -d/ -f1 | head -n 1)
/opt/bin/kubeadm init --apiserver-advertise-address=$priv_ip --pod-network-cidr=192.168.0.0/16
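The join command printed by kubeadm init has roughly the following shape; the address, token, and hash below are placeholders, so use the values from your own output.

```shell
# Placeholder values -- copy the real command from your `kubeadm init` output.
kubeadm join 10.132.0.5:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# If the output is lost, the master can print a fresh join command:
/opt/bin/kubeadm token create --print-join-command
```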
There will be a kubeadm join command printed in the output. Copy it and run it on each node you want to join the cluster.
ssh core@IP_ADDRESS
sudo /opt/bin/kubeadm ...
The /etc/kubernetes/admin.conf file on the master contains all of the information needed to access the cluster.
Copy admin.conf to ~/.kube/config (where kubectl expects it to be). As the core user:
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
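As a quick check, kubectl should now be able to reach the API server; note that the nodes will report NotReady until a pod network is installed (next section).

```shell
# The master answers, but nodes stay NotReady until a CNI add-on is installed.
kubectl get nodes
kubectl cluster-info
```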
This file can also be used on other computers to control the cluster. On your laptop, install kubectl and copy the config file over to administer the cluster remotely.
scp core@IP_ADDRESS:.kube/config ~/.kube/config
Kubernetes does not install a pod network by default, so you'll need to add one. There are many options; here's how I'm currently installing Calico.
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
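It can take a minute or two for Calico to come up; once its pods in kube-system are Running, the nodes should flip to Ready.

```shell
# Watch the Calico (and kube-dns) pods start in kube-system...
kubectl get pods -n kube-system -w
# ...then confirm every node reports Ready.
kubectl get nodes
```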
Next we'll create a simple HTTP service.
The example-com-controller Deployment will create and manage the example-com pods.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-com-controller
  labels:
    app: example-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-com
  template:
    metadata:
      name: example-com-pod
      labels:
        app: example-com
    spec:
      containers:
      - name: example-com-nginx
        image: nginx
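Save the manifest to a file (the filename here is arbitrary) and apply it, then confirm the pod starts.

```shell
kubectl apply -f example-com-deployment.yaml
# One pod labeled app=example-com should reach the Running state.
kubectl get pods -l app=example-com
```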
The example-com-service will expose the example-com-pod's port 80.
kind: Service
apiVersion: v1
metadata:
  name: example-com-service
  labels:
    app: example-com
spec:
  selector:
    app: example-com
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
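Apply the Service the same way. One quick way to verify it, assuming cluster DNS is working, is to curl the service name from a throwaway pod (tutum/curl is just an arbitrary image that ships curl).

```shell
kubectl apply -f example-com-service.yaml
kubectl get service example-com-service
# Hit the service's cluster DNS name from a temporary pod; expect the
# nginx welcome page in the response.
kubectl run curl-test --rm -it --restart=Never --image=tutum/curl -- \
    curl -s http://example-com-service
```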
The Nginx Ingress Controller provides a way to implement Ingress rules on a bare-metal Kubernetes cluster. These are the steps to install it (including RBAC roles) from the Kubernetes repo.
Install the namespace, default backend, and configmaps. The default backend is where all traffic without a matching host will be directed.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml
Install the controller with RBAC roles.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml
Install the service.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
Patch the controller Deployment so that it uses the host network.
kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch '{"spec": {"template": {"spec": {"hostNetwork": true}}}}'
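With the controller running, a minimal Ingress can route a hostname to the service created earlier. This is a sketch assuming the example.com hostname points at your droplets; extensions/v1beta1 is the Ingress API group of this Kubernetes era.

```yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example-com-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80
```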
Add a tag to each worker node (k8s-000...k8s-002), for example 'k8s-node'. Next, create a Load Balancer on DigitalOcean, pointed to the 'k8s-node' tag. It will automatically attach to all of the worker droplets, including new nodes as they're added.
Helm is a package manager used to install applications onto a Kubernetes cluster.
Helm can be installed with a script from the repo. If you've used kubeadm to set up the cluster, then you'll also need to add a service account for Tiller and grant it the permissions it needs.
To install Helm, run the get script from the repo.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
Initialize Helm, then create the Tiller service account and switch the Tiller deployment over to it.
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy --patch '{"spec": {"template": {"spec": {"serviceAccount": "tiller"} } } }'
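Tiller redeploys after the patch; a quick way to confirm both the client and the in-cluster server are healthy:

```shell
# Both a Client and a Server version line indicate Tiller is reachable.
helm version
kubectl get pods -n kube-system -l app=helm
```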
Cert-manager can be installed with Helm using the chart in the repo.
git clone https://github.com/jetstack/cert-manager
cd cert-manager
git checkout v0.2.3 #latest version as of 2018-02-19
helm install \
--name cert-manager \
--namespace kube-system \
contrib/charts/cert-manager
An Issuer is a definition of a source for certificates. We'll create an Issuer for letsencrypt-staging (which should always be used while testing, to avoid hitting Let's Encrypt's rate limits).
Letsencrypt Staging Issuer
kind: Issuer
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: YOUR_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
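Save the Issuer to a file (the filename is arbitrary) and apply it in the namespace your application runs in, since Issuers are namespaced; cert-manager should then register an ACME account for it.

```shell
kubectl apply -f letsencrypt-staging-issuer.yaml
# The Status section should eventually show the ACME account is registered.
kubectl describe issuer letsencrypt-staging
```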
To configure an Ingress to automatically create and use a certificate, add the following annotations and tls properties.
Add annotations to the metadata.
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'
Add the tls hosts and secret to the spec.
spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
    - api.example.com
    - www.example.com
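Putting it all together, a complete Ingress using the staging Issuer might look like this sketch (names and hostnames carried over from the examples above, with a single host rule for brevity):

```yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example-com-ingress
  annotations:
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'
spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80
```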