Ramp up your Kubernetes development, CI-tooling or testing workflow by running multiple Kubernetes clusters on Ubuntu Linux with KVM and minikube.
In this tutorial we will combine the popular minikube tool with Linux's Kernel-based Virtual Machine (KVM) support. It is a great way to re-purpose an old machine that you found on eBay or have gathering dust under your desk. An Intel NUC would also make a great host for this tutorial if you want to buy some new hardware. Another popular angle is to use a bare-metal host in the cloud and I've provided some details on that below.
We'll set up all the tooling so that you can build one or many single-node Kubernetes clusters and then deploy applications to them such as OpenFaaS using familiar tooling like helm. I'll then show you how to access the Kubernetes clusters from a remote machine such as your laptop.
- This tutorial uses Ubuntu 16.04 as a base installation, but other distributions are also supported by KVM. You'll need to find out how to install KVM with your package manager. If you're using Fedora you can follow these instructions to install KVM.
- You'll need nested virtualization available on a cloud host, a spare machine under your desk, or a bare-metal machine in the cloud (see the check below if you're unsure). You can find affordable bare metal at Scaleway or high-spec/performance bare metal over at Packet.net.
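If you're not sure whether nesting is enabled on the host, here's a quick check you can run, assuming an Intel CPU (use kvm_amd instead of kvm_intel on AMD hosts); a value of Y or 1 means nested virtualization is available:
# Quick check for nested virtualization support (Intel hosts)
cat /sys/module/kvm_intel/parameters/nested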
Run all of these commands on your Linux host unless otherwise specified.
KVM enables virtualization on Linux, but minikube also supports other drivers such as virtualbox, vmwarefusion, xhyve and hyperv.
Install the required packages from apt:
sudo apt-get install -qy \
qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
sudo kvm-ok
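Depending on your distribution you may also need to add your user to the libvirt group so that minikube can manage VMs without sudo; the group is named libvirtd on Ubuntu 16.04 (libvirt on newer releases), so adjust the name to suit your system:
# Allow your user to manage VMs without sudo (log out/in or use newgrp afterwards)
sudo usermod -a -G libvirtd $(whoami)
newgrp libvirtd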
Install kubectl by following the official instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl, or run:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
curl -SLO https://github.com/kubernetes/minikube/releases/download/v0.28.2/docker-machine-driver-kvm2
curl -SLO https://github.com/kubernetes/minikube/releases/download/v0.28.2/minikube-linux-amd64
chmod +x docker-machine-driver-kvm2
chmod +x minikube-linux-amd64
sudo mv docker-machine-driver-kvm2 /usr/local/bin
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
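At this point it's worth checking that both binaries are installed and on your PATH:
kubectl version --client
minikube version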
Using the kubeadm bootstrapper will enable RBAC.
Create your first cluster VM:
minikube start --bootstrapper=kubeadm --vm-driver=kvm2 --memory 4096 --cpus 4 --profile cluster1
You can set up additional, separate VMs using the --profile flag; see the example after the start-up output below.
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
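To build a second, separate cluster just repeat the start command with a different profile name, adjusting the memory and CPU count to suit your host:
minikube start --bootstrapper=kubeadm --vm-driver=kvm2 --memory 4096 --cpus 2 --profile cluster2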
You can manage each cluster using kubectl by switching between the contexts saved in ~/.kube/config. The kubectx tool is also popular in the community for switching between these quickly.
kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1
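To point kubectl at a different cluster, switch context by name:
kubectl config use-context cluster1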
Pass the --profile flag to minikube commands so that you target the right cluster, for example to find its IP address for accessing its ports:
minikube ip --profile cluster1
192.168.39.10
You may want to use SSH port forwarding with ssh -L port:port user@host or kubectl port-forward to give access to services and NodePorts available on the minikube VM. If you're comfortable with iptables you could also set up some NAT rules, but I would recommend against it since the IP addresses of your minikube environments may change when they are restarted.
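As a minimal sketch of the kubectl port-forward approach, assuming the OpenFaaS gateway Service we install later in this tutorial, you could forward its port 8080 to your Linux host like this:
kubectl --context cluster1 port-forward -n openfaas svc/gateway 8080:8080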
You can now install something to test the cluster.
- Configure helm and tiller:
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
kubectl -n kube-system create sa tiller \
&& kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
helm init --skip-refresh --upgrade --service-account tiller
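helm init normally creates a Deployment named tiller-deploy in kube-system, so you can wait for it to become ready before installing any charts:
kubectl -n kube-system rollout status deploy/tiller-deploy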
- Setup OpenFaaS via helm:
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update && helm upgrade openfaas \
--install openfaas/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn
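Check that the OpenFaaS pods have started before trying the gateway:
kubectl get pods -n openfaas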
- Now access the OpenFaaS gateway via the NodePort:
curl http://$(minikube ip --profile cluster1):31112/system/info
{"provider":{"provider":"faas-netes","version":{"sha":"5539cf43c15a28e9af998cdc25b5da06252b62e1","release":"0.6.0"},"orchestration":"kubernetes"},"version":{"commit_message":"Attach X-Call-Id to asynchronous calls","sha":"c86de503c7a20a46645239b9b081e029b15bf69b","release":"0.8.11"}}
You can also gain access into the cluster(s) from a remote machine.
Port forward from your laptop to the Linux machine:
Find the IP of the minikube VM with echo $(minikube ip --profile cluster1), i.e. 192.168.39.10 in this example, then run:
ssh -N -L 31112:192.168.39.10:31112 user@linux-host
You can now access the OpenFaaS installation from your laptop via http://127.0.0.1:31112 or even using faas-cli with --gateway http://127.0.0.1:31112.
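For example, if you have the faas-cli installed on your laptop you could list the deployed functions through the tunnel:
faas-cli list --gateway http://127.0.0.1:31112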
You can also open up your OpenFaaS installation to your friends or for testing public webhooks via ngrok. Run the tool on your Linux host:
./ngrok http $(minikube ip --profile cluster1):31112
We've now built one or many Kubernetes clusters using minikube and KVM, using the --profile flag to separate them and assign each its own name. Port-forwarding, ngrok or SSH provided us with temporary access into the clusters for testing purposes. Where could you take this next?
After you have gained some muscle-memory with creating and accessing these development clusters, you could go on to bake them into projects. A single-use cluster would be great to use in a CI/CD pipeline for end-to-end testing or any other tasks that require multiple configurations such as ensuring backwards compatibility with different Kubernetes versions or with RBAC enabled or disabled.
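For a single-use cluster, tear-down is as quick as creation; once the pipeline has finished you can remove the VM entirely:
# Remove a throw-away cluster and its VM when the job is done
minikube delete --profile cluster1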
If you have comments, questions or suggestions, feel free to reach out over Twitter @alexellisuk.