Install Docker using the installation instructions at https://docs.docker.com/engine/install/ubuntu/:
$ sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg
$ sudo mkdir -m 0755 -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
$ sudo usermod -a -G docker $USER
Log out and log in again for the group membership to take effect.
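After logging back in, a quick way to verify that Docker works without sudo is to run the standard hello-world test image:
$ docker run --rm hello-world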
Download the latest version of kind:
$ curl -LO https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-amd64
$ chmod +x kind-linux-amd64
$ sudo mv kind-linux-amd64 /usr/local/bin/kind
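Check that the binary is found on the PATH and print its version:
$ kind version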
Install kubectl:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
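Verify the kubectl installation by printing the client version:
$ kubectl version --client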
The simplest way to get up and running quickly is to create a default single-node cluster. This requires no configuration and is ready to use in seconds. On the first run it will download the Kubernetes Docker images to the host machine, so it takes a bit longer. Subsequent runs are faster, to the point that you can treat clusters as "throw-away" resources: spin them up at will, run many in parallel, experiment, break them, and finally destroy them when done.
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.3)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
Check that your single-node cluster is up and running:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 17m v1.27.3
Next, launch a pod in the cluster and connect to the pod:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shell
  labels:
    app: shell
spec:
  containers:
  - name: shell
    image: alpine:latest
    command: ["/bin/sh"]
    args:
    - "-c"
    - "apk add --update-cache curl && /bin/sleep 99999999"
EOF
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
shell 1/1 Running 0 20s
$ kubectl exec -it shell -- ash
/ # ps -ef
PID USER TIME COMMAND
1 root 0:00 /bin/sleep 99999999
19 root 0:00 ash
25 root 0:00 ps -ef
/ #
Kind supports preloading images into the cluster nodes. This is useful if you want to use images that are not available in a registry. For example, you can build a custom image on your machine and load it into the cluster:
$ cat >Dockerfile <<EOF
FROM alpine:latest
RUN apk add --update-cache python3
CMD ["python3", "-m", "http.server", "80"]
EOF
$ docker build -t mywebserver:latest .
$ kind load docker-image mywebserver:latest
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mywebserver
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mywebserver
    spec:
      containers:
      - name: mywebserver
        image: mywebserver:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mywebserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: mywebserver
EOF
$ kubectl exec -it shell -- curl http://mywebserver/etc/passwd
Note that imagePullPolicy: Never was used so that the locally loaded image is used and the image is not pulled from Docker Hub.
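To verify that the image really was loaded onto the node, you can list the images known to the container runtime; crictl is included in the kind node image:
$ docker exec -it kind-control-plane crictl images | grep mywebserver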
After finishing, you may delete the cluster:
$ kind delete cluster
Deleting cluster "kind" ...
If you reboot your machine, the clusters that were left running will be automatically restarted.
Here is a brief list of commands that you can use to manage your clusters:
Command | Description |
---|---|
kind create cluster --config [CONFIG] --name [CLUSTER] | Create a new cluster |
kind get clusters | List currently running clusters |
kind delete cluster --name [CLUSTER] | Delete a cluster |
kind export kubeconfig --name [CLUSTER] | Set kubectl context |
kind load docker-image [IMAGE] --name [CLUSTER] | Upload an image to a cluster |
For more information, see
- https://kind.sigs.k8s.io/
- https://kind.sigs.k8s.io/docs/user/quick-start/
- https://github.com/kubernetes-sigs/kind The GitHub project home for kind.
Kind is "Kubernetes in Docker": the cluster nodes are nothing more than docker containers running on your machine.
We can list the emulated Kubernetes nodes by using docker ps
.
By default, there will be just one node: the control plane
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c020740ba487 kindest/node:v1.27.3 "/usr/local/bin/entrβ¦" 23 minutes ago Up 23 minutes 127.0.0.1:44747->6443/tcp kind-control-plane
You can connect to a node and, for example, list the logs from the kubelet service to troubleshoot the node:
$ docker exec -it kind-control-plane bash
root@kind-control-plane:/# journalctl -u kubelet
Kind supports custom cluster configuration. This allows you to create clusters with multiple nodes (as many as you like), or clusters using older Kubernetes versions.
$ cat >kind-cluster-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
EOF
$ kind create cluster --name mycluster --config kind-cluster-config.yaml
Creating cluster "mycluster" ...
β Ensuring node image (kindest/node:v1.27.3) πΌ
β Preparing nodes π¦ π¦
β Writing configuration π
β Starting control-plane πΉοΈ
β Installing CNI π
β Installing StorageClass πΎ
β Joining worker nodes π
Set kubectl context to "kind-mycluster"
You can now use your cluster with:
kubectl cluster-info --context kind-mycluster
Not sure what to do next? π
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b8b8cf8effa0 kindest/node:v1.27.3 "/usr/local/bin/entrβ¦" 49 seconds ago Up 47 seconds mycluster-worker
11744f1f381a kindest/node:v1.27.3 "/usr/local/bin/entrβ¦" 49 seconds ago Up 47 seconds 127.0.0.1:45511->6443/tcp mycluster-control-plane
Note that there are two nodes: the control plane and a worker node.
The names of the Docker containers are derived from the cluster name, which is why they are prefixed with mycluster-.
This allows you to run multiple clusters in parallel without name conflicts.
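You can list the clusters that are currently running; assuming the default cluster from the previous section was already deleted, only mycluster is left:
$ kind get clusters
mycluster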
The following configuration file creates a cluster with an older Kubernetes version:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093
- role: worker
  image: kindest/node:v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093
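Assuming the configuration above is saved as kind-cluster-config.yaml (the file and cluster names here are arbitrary), create the cluster and confirm the node version:
$ kind create cluster --name oldcluster --config kind-cluster-config.yaml
$ kubectl get nodes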
For the list of available images and their tags, see the release notes of the kind version you are using: https://github.com/kubernetes-sigs/kind/releases.
See also:
- https://kind.sigs.k8s.io/docs/user/configuration/ Kind configuration.
In this chapter, we launch a slightly more advanced cluster with multiple nodes and port mappings that allow accessing services running in the cluster from the host machine. We also install the Contour ingress controller into the cluster to handle HTTP and HTTPS requests, and an example backend service to respond to those requests. We also configure TLS both for Contour and for the backend service.
First, create a configuration file for kind:
$ cat >kind-cluster-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "127.0.0.101"
  - containerPort: 443
    hostPort: 443
    listenAddress: "127.0.0.101"
EOF
Two nodes will be created: a control plane and a worker.
The worker is configured to listen for inbound traffic on 127.0.0.101, which is within your machine's loopback address range.
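On Linux the whole 127.0.0.0/8 block is routed to the loopback interface, so this address is usable without any extra setup; a quick sanity check:
$ ping -c 1 127.0.0.101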
Create a new cluster and name it contour:
$ kind create cluster --config kind-cluster-config.yaml --name contour
Creating cluster "contour" ...
β Ensuring node image (kindest/node:v1.27.3) πΌ
β Preparing nodes π¦ π¦
β Writing configuration π
β Starting control-plane πΉοΈ
β Installing CNI π
β Installing StorageClass πΎ
β Joining worker nodes π
Set kubectl context to "kind-contour"
You can now use your cluster with:
kubectl cluster-info --context kind-contour
Have a nice day! π
Install Contour ingress controller:
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
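The quickstart manifest installs Contour and Envoy into the projectcontour namespace; you can watch the pods there until they are running:
$ kubectl get pods -n projectcontour -w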
To generate test certificates, first download the certyaml tool:
$ curl -LO https://github.com/tsaarni/certyaml/releases/latest/download/certyaml-linux-amd64.tar.gz
$ tar zxvf certyaml-linux-amd64.tar.gz
$ chmod +x certyaml
$ sudo mv certyaml /usr/local/bin/
Create a configuration file for a PKI setup with two CAs (cluster-external and cluster-internal) and server certificates for the Contour ingress controller and for the echoserver backend service.
$ cat >certs.yaml <<EOF
subject: cn=external-root-ca
---
subject: cn=ingress
issuer: cn=external-root-ca
sans:
- DNS:echoserver.127-0-0-101.nip.io
---
subject: cn=internal-root-ca
---
subject: cn=echoserver
issuer: cn=internal-root-ca
sans:
- DNS:echoserver
EOF
Run certyaml with that configuration file to generate the certificates:
$ mkdir certs
$ certyaml --destination certs certs.yaml
Loading manifest file: certs.yaml
Reading certificate state file: certs/certs.state
Writing: certs/external-root-ca.pem certs/external-root-ca-key.pem
Writing: certs/ingress.pem certs/ingress-key.pem
Writing: certs/internal-root-ca.pem certs/internal-root-ca-key.pem
Writing: certs/echoserver.pem certs/echoserver-key.pem
Writing state: certs/certs.state
Upload the certificates and private keys as Kubernetes secrets:
$ kubectl create secret tls ingress --cert=certs/ingress.pem --key=certs/ingress-key.pem --dry-run=client -o yaml | kubectl apply -f -
$ kubectl create secret tls echoserver --cert=certs/echoserver.pem --key=certs/echoserver-key.pem --dry-run=client -o yaml | kubectl apply -f -
$ kubectl create secret generic internal-root-ca --from-file=ca.crt=certs/internal-root-ca.pem --dry-run=client -o yaml | kubectl apply -f -
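If you want to double-check, list the newly created secrets; the server certificate secrets are of type kubernetes.io/tls and the CA bundle is a generic (Opaque) secret:
$ kubectl get secrets ingress echoserver internal-root-ca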
Deploy the backend service:
$ cat >echoserver.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: echoserver
  template:
    metadata:
      labels:
        app.kubernetes.io/name: echoserver
    spec:
      containers:
      - name: echoserver
        image: quay.io/tsaarni/echoserver:demo
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: TLS_SERVER_CERT
          value: /run/secrets/certs/tls.crt
        - name: TLS_SERVER_PRIVKEY
          value: /run/secrets/certs/tls.key
        ports:
        - name: https-api
          containerPort: 8443
        volumeMounts:
        - mountPath: /run/secrets/certs/
          name: echoserver-cert
          readOnly: true
      volumes:
      - name: echoserver-cert
        secret:
          secretName: echoserver
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - name: https
    port: 443
    targetPort: https-api
  selector:
    app.kubernetes.io/name: echoserver
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: protected
spec:
  virtualhost:
    fqdn: echoserver.127-0-0-101.nip.io
    tls:
      secretName: ingress
  routes:
  - services:
    - name: echoserver
      port: 443
      protocol: tls
      validation:
        subjectName: echoserver
        caSecret: internal-root-ca
EOF
$ kubectl apply -f echoserver.yaml
deployment.apps/echoserver created
service/echoserver created
httpproxy.projectcontour.io/protected created
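Rather than guessing how long to wait for the backend pod, you can let kubectl block until the deployment reports ready:
$ kubectl wait --for=condition=available --timeout=120s deployment/echoserver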
Once the backend pod is running, make an HTTPS request:
$ curl --cacert certs/external-root-ca.pem https://echoserver.127-0-0-101.nip.io/
For more information, see
- https://projectcontour.io/ See the HTTPProxy fundamentals section in the documentation for how to create your own request routing.
- https://github.com/tsaarni/certyaml Create certificate hierarchies easily.
- https://nip.io/ A DNS service that provides FQDNs for any IP address.
Kind supports persistent volumes out of the box.
TODO
- examples (a minimal sketch is shown below)
- map PV to host mount to make it available on all nodes
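As a starting point, here is a minimal sketch: a claim served by kind's default StorageClass (standard, backed by the local-path provisioner) and a pod writing to it. The names example-pvc and pvc-tester are made up for illustration:
$ cat >example-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-tester
spec:
  containers:
  - name: pvc-tester
    image: alpine:latest
    command: ["/bin/sh", "-c", "echo hello > /data/hello.txt && /bin/sleep 99999999"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc
EOF
$ kubectl apply -f example-pvc.yaml
$ kubectl get pvc example-pvc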
To learn more, see
- https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
- https://github.com/rancher/local-path-provisioner The provisioner that Kind uses.
Note that this example requires that the contour cluster and the echoserver deployment from the previous example are running.
Since the processes running in pods inside a kind cluster are just local processes on your machine, sharing the same kernel, you can access them directly in many ways.
This can be especially useful for debugging.
The following examples show how to access the filesystem and the network namespace of pod processes from the host.
First, note that the echoserver image does not have a shell, so we cannot use kubectl exec to access the filesystem of the pod:
$ kubectl exec -it $(kubectl get pod -l app.kubernetes.io/name=echoserver -o jsonpath='{.items[0].metadata.name}') -- /bin/sh
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "0987894e0fd44763f0028a21c03be1b8bcdf2c09f8448ad6e9e3e0b159606e97": OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
To access the filesystem of a pod directly from the host machine, you can use /proc/<pid>/root/.
It shows the filesystem from the viewpoint of the process.
$ sudo ls -l /proc/$(pgrep -f echoserver)/root/
total 4900
drwxr-xr-x 2 root root 4096 Apr 2 11:55 bin
drwxr-xr-x 2 root root 4096 Apr 2 11:55 boot
drwxr-xr-x 5 root root 360 Aug 10 17:12 dev
-rwxr-xr-x 1 root root 4960256 Aug 8 11:43 echoserver
drwxr-xr-x 1 root root 4096 Aug 10 17:12 etc
drwxr-xr-x 1 65532 65532 4096 Jan 1 1970 home
drwxr-xr-x 2 root root 4096 Apr 2 11:55 lib
dr-xr-xr-x 232 root root 0 Aug 10 17:12 proc
-rw-r--r-- 1 root root 5 Aug 10 17:12 product_name
-rw-r--r-- 1 root root 37 Aug 10 17:12 product_uuid
drwx------ 1 root root 4096 Jan 1 1970 root
drwxr-xr-x 1 root root 4096 Aug 10 17:12 run
drwxr-xr-x 2 root root 4096 Apr 2 11:55 sbin
dr-xr-xr-x 13 root root 0 Aug 10 17:12 sys
drwxrwxrwt 1 root root 4096 Aug 10 17:12 tmp
drwxr-xr-x 1 root root 4096 Jan 1 2000 usr
drwxr-xr-x 1 root root 4096 Jan 1 1970 var
Commands that exist only on the host can be executed inside the pod's namespaces using nsenter.
For example, run ip to list the network interfaces inside the pod:
$ sudo nsenter --target $(pgrep -f echoserver) --net ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 72:40:68:ec:38:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.1.10/24 brd 10.244.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::7040:68ff:feec:38e2/64 scope link
valid_lft forever preferred_lft forever
The echoserver service is a Go application that supports Wireshark TLS decryption by writing per-session keys to the file /tmp/wireshark-keys.log inside the pod filesystem.
We can run wireshark on the host, but let it access the pod's network namespace to capture traffic as seen inside the pod, and let it read the TLS session key file inside the pod to decrypt the TLS traffic:
$ sudo nsenter -t $(pgrep -f "echoserver") --net wireshark -f "port 8443" -k -o tls.keylog_file:/proc/$(pgrep -f "echoserver")/root/tmp/wireshark-keys.log
$ curl --cacert certs/external-root-ca.pem https://echoserver.127-0-0-101.nip.io/
For more information, see
- https://wiki.wireshark.org/TLS#tls-decryption Wireshark TLS decryption.
- https://pkg.go.dev/crypto/tls#example-Config-KeyLogWriter How to write TLS master secrets in Go into a Wireshark-compatible file.