These are the instructions for the Docker and Kubernetes track of the developer day workshop on the Friday of W40.
Follow these and everything will be fine.
For this workshop, you'll need the following tools installed:
To use docker, we need the daemon running!
Check that it's working with:
$ docker --version
Docker version 19.03.2, build 6a30dfca03664a0b6bf0646a7d389ee7d0318e6e
We'll use docker-compose to define a simple static network of Docker containers.
Check that it's working with:
$ docker-compose --version
docker-compose version 1.24.1, build 4667896
We'll run a kubernetes cluster inside a virtual host, so some virtualisation backend is necessary.
On Linux, run:
$ virsh version
Compiled against library: libvirt 5.4.0
Using library: libvirt 5.4.0
Using API: QEMU 5.4.0
Running hypervisor: QEMU 4.1.0
or, for a nice GUI, run virt-manager (for KVM). For VirtualBox, run the VirtualBox GUI.
On Mac, if you have docker, you most probably have a virtualisation backend installed already.
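If you want to double-check your backend on Mac, one quick sanity check (assuming VirtualBox or hyperkit is the backend you use; skip this if you already know your setup works):
$ VBoxManage --version
# or, if hyperkit is on your PATH:
$ hyperkit -v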
This is the tool that will set up a small Kubernetes cluster for you inside a virtual host.
Check that it's working with:
$ minikube version
minikube version: v1.2.0
This is the tool that will allow us to query the Kubernetes API.
Check that it's working with:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"archive", BuildDate:"1970-01-01T00:00:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version may fail; that's OK because we haven't set up Minikube yet.
Tip: run source <(kubectl completion bash) to get autocompletion (it works with zsh too!).
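For zsh users, the equivalent (using kubectl's built-in zsh completion) is:
$ source <(kubectl completion zsh)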
We'll work with two images: zookeeper (the backend) and a small Haskell client for ZooKeeper.
$ docker pull zookeeper:3.5.5
3.5.5: Pulling from library/zookeeper
b8f262c62ec6: Pull complete
377e264464dd: Pull complete
3198ebe94151: Pull complete
722dfeae3f41: Pull complete
11526812f813: Pull complete
a5e75cba2a6f: Pull complete
85c8f1f12a54: Pull complete
2cf0859dd924: Pull complete
Digest: sha256:4879178a575d76d5720602f81107be4d165107ab03de0b26669605a5d39d656d
Status: Downloaded newer image for zookeeper:3.5.5
docker.io/library/zookeeper:3.5.5
Get the ZApp (for ZooKeeper App) from our public Docker Hub repository too:
$ docker pull relexdevday/zapp:0.6.0.0
f5231496d157: Loading layer [==================================================>] 28.73MB/28.73MB
c011a58f9278: Loading layer [==================================================>] 5.755MB/5.755MB
51d4fd769508: Loading layer [==================================================>] 706.6kB/706.6kB
80eb0ed22b01: Loading layer [==================================================>] 61.44kB/61.44kB
75fe64aa61da: Loading layer [==================================================>] 143.4kB/143.4kB
4dd115f09d19: Loading layer [==================================================>] 481.3kB/481.3kB
ed11be1f501c: Loading layer [==================================================>] 2.847MB/2.847MB
5d210c56ba34: Loading layer [==================================================>] 1.341MB/1.341MB
3adf6fc4c64e: Loading layer [==================================================>] 1.219MB/1.219MB
dd4647e0a835: Loading layer [==================================================>] 225.3kB/225.3kB
5d5dfc26ffec: Loading layer [==================================================>] 10.24kB/10.24kB
6a7bc3b8ab28: Loading layer [==================================================>] 10.24kB/10.24kB
9baca0253b54: Loading layer [==================================================>] 225.3kB/225.3kB
Loaded image: zookeeper-app:latest
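To double-check that both images are now present locally, you can list them (the exact names depend on how the image was loaded):
$ docker images | grep -E 'zookeeper|zapp'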
Here we do things manually.
Start one zk server:
$ docker run --name zk1-server --restart always -d zookeeper:3.5.5
Get that server's IP address:
$ docker inspect zk1-server | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
Start one client, using the IP address above as an argument:
$ docker run --name zk1-client -p 8080:8080 -d relexdevday/zapp:0.6.0.0 --zkHost 172.17.0.2 --zkPort 2181 --port 8080
$ curl localhost:8080
<!DOCTYPE HTML><html><head><link href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.5/css/bulma.min.css" type="text/css" rel="stylesheet"></head><body><div><div class="container header"><section class="hero"><div class="container"><p class="title">Zookeeper</p></div></section></div><section class="section"><div class="container">Page has been requested <strong>1</strong> times</div></section><div class="footer"></div></div></body></html>
Good!
Running another client is fine, see:
$ docker run --name zk1-client1 -p 8081:8080 -d relexdevday/zapp:0.6.0.0 --zkHost 172.17.0.2 --zkPort 2181 --port 8080
$ curl localhost:8081
<!DOCTYPE HTML><html><head><link href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.5/css/bulma.min.css" type="text/css" rel="stylesheet"></head><body><div><div class="container header"><section class="hero"><div class="container"><p class="title">Zookeeper</p></div></section></div><section class="section"><div class="container">Page has been requested <strong>3</strong> times</div></section><div class="footer"></div></div></body></html>
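Before moving on to docker-compose, it's a good idea to remove these hand-made containers; in particular, zk1-client1 already publishes port 8081, which the compose file below wants to use too:
$ docker rm -f zk1-server zk1-client zk1-client1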
Taking the example from https://hub.docker.com/_/zookeeper:
Create the docker-compose.yaml file with the following content:
version: '3.1'
services:
  zk1-server:
    image: zookeeper
    restart: always
    hostname: zk1-server
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zk2-server:2888:3888;2181 server.3=zk3-server:2888:3888;2181
  zk2-server:
    image: zookeeper
    restart: always
    hostname: zk2-server
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1-server:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zk3-server:2888:3888;2181
  zk3-server:
    image: zookeeper
    restart: always
    hostname: zk3-server
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1-server:2888:3888;2181 server.2=zk2-server:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
  zk1-client:
    image: relexdevday/zapp
    hostname: zk1-client
    ports:
      - 8081:8080
    command: --zkHost zk1-server --zkPort 2181 --port 8080
  zk2-client:
    image: relexdevday/zapp
    hostname: zk2-client
    ports:
      - 8082:8080
    command: --zkHost zk2-server --zkPort 2181 --port 8080
  zk3-client:
    image: relexdevday/zapp
    hostname: zk3-client
    ports:
      - 8083:8080
    command: --zkHost zk3-server --zkPort 2181 --port 8080
Make sure it's fine by printing the names of the services:
$ docker-compose config --services
zk1-server
zk2-server
zk3-server
zk1-client
zk2-client
zk3-client
Bring it up!
$ docker-compose up
... lotsa log lines
Creating network "kubeworkshop_default" with the default driver
Pulling zk1-server (zookeeper:)...
latest: Pulling from library/zookeeper
Digest: sha256:4879178a575d76d5720602f81107be4d165107ab03de0b26669605a5d39d656d
Status: Downloaded newer image for zookeeper:latest
Creating kubeworkshop_zk3-server_1 ... done
Creating kubeworkshop_zk1-server_1 ... done
Creating kubeworkshop_zk2-server_1 ... done
Attaching to kubeworkshop_zk1-server_1, kubeworkshop_zk3-server_1, kubeworkshop_zk2-server_1
zk1-server_1 | ZooKeeper JMX enabled by default
zk1-server_1 | Using config: /conf/zoo.cfg
zk3-server_1 | ZooKeeper JMX enabled by default
zk3-server_1 | Using config: /conf/zoo.cfg
zk2-server_1 | ZooKeeper JMX enabled by default
zk2-server_1 | Using config: /conf/zoo.cfg
zk1-server_1 | 2019-09-30 11:46:38,837 [myid:] - INFO [main:QuorumPeerConfig@133] - Reading configuration from: /conf/zoo.cfg
zk1-server_1 | 2019-09-30 11:46:38,842 [myid:] - INFO [main:QuorumPeerConfig@375] - clientPort is not set
...
After a few seconds, the ZooKeeper servers elect a leader.
Try curling each server's dedicated client:
$ curl localhost:8081
$ curl localhost:8082
$ curl localhost:8083
The number should keep incrementing as if you were connecting to a single instance, hurray!
Note: if one of them fails for no apparent reason, I found that destroying and recreating the whole lot worked (look at the stop and rm subcommands of docker-compose).
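For reference, a recovery sequence along those lines might look like this (a sketch; adapt as needed):
$ docker-compose stop
$ docker-compose rm -f
$ docker-compose up -d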
Building a static “stack” of services is easy with docker-compose. Unfortunately, this doesn't cover more complex scenarios, like replication sets, simulating network failure, declaring services and load-balancers, etc.
Here's Kubernetes to the rescue.
This was tested with minikube 1.2.0.
Let's set up a local cluster with minikube (on Linux):
$ minikube start --vm-driver kvm2
# With KVM / Libvirt, or just
$ minikube start --vm-driver none
# If you are on linux and have docker running (untested), or just
$ minikube start
⚠ There is a newer version of minikube available (v1.4.0). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v1.4.0
To disable this notification, run the following:
minikube config set WantUpdateNotification false
😄 minikube v1.2.0 on linux (amd64)
💿 Downloading Minikube ISO ...
129.33 MB / 129.33 MB [============================================] 100.00% 0s
🔥 Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳 Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
💾 Downloading kubeadm v1.15.0
💾 Downloading kubelet v1.15.0
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Verifying: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
Here my minikube is older than the latest release; it doesn't matter for this workshop though.
After setup, you should have something like:
$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   9m22s   v1.15.0
Note that minikube will set the default context to "minikube" for you:
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube
If it is not set properly, you can set it yourself by calling kubectl config use-context minikube.
Let's rebuild the simple setup we had previously:
$ kubectl create deployment single-zk --image=zookeeper
deployment.apps/single-zk created
More details? Use describe!
$ kubectl describe deployment single-zk
Name: single-zk
Namespace: default
CreationTimestamp: Tue, 01 Oct 2019 08:15:10 +0300
Labels: app=single-zk
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=single-zk
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=single-zk
  Containers:
   zookeeper:
    Image:        zookeeper
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   single-zk-59d7d6f9b7 (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  1m   deployment-controller  Scaled up replica set single-zk-59d7d6f9b7 to 1
Is the pod running?
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
single-zk-59d7d6f9b7-8wzhp   1/1     Running   0          51s
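If you're curious what the container is doing, kubectl logs works on the pod (use your own pod name from the listing above):
$ kubectl logs single-zk-59d7d6f9b7-8wzhp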
More information about your pod:
$ kubectl get pod single-zk-59d7d6f9b7-8wzhp -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-10-01T05:15:10Z"
  generateName: single-zk-59d7d6f9b7-
  labels:
    app: single-zk
    pod-template-hash: 59d7d6f9b7
  name: single-zk-59d7d6f9b7-8wzhp
...
Even more information about your pod:
$ kubectl describe pod single-zk-59d7d6f9b7-8wzhp
Name: single-zk-59d7d6f9b7-8wzhp
Namespace: default
Priority: 0
Node: minikube/192.168.122.124
Start Time: Tue, 01 Oct 2019 08:15:10 +0300
Labels: app=single-zk
pod-template-hash=59d7d6f9b7
Annotations: <none>
Status: Running
IP: 172.17.0.4
Controlled By: ReplicaSet/single-zk-59d7d6f9b7
Containers:
zookeeper:
Container ID: docker://3981b1b6ed783f56c380ca45b89969b1abb90ab14aaec0ba5517f31723978c8c
...
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1m    default-scheduler  Successfully assigned default/single-zk-59d7d6f9b7-8wzhp to minikube
  Normal  Pulling    1m    kubelet, minikube  Pulling image "zookeeper"
  Normal  Pulled     1m    kubelet, minikube  Successfully pulled image "zookeeper"
  Normal  Created    1m    kubelet, minikube  Created container zookeeper
  Normal  Started    1m    kubelet, minikube  Started container zookeeper
So yes, it's up. Can we connect to it? Not really, as we haven't published any ports.
$ kubectl expose deployment single-zk --type=NodePort --port=2181
service/single-zk exposed
Hmm, this says service, what's that?
$ kubectl describe service single-zk
Name: single-zk
Namespace: default
Labels: app=single-zk
Annotations: <none>
Selector: app=single-zk
Type: NodePort
IP: 10.110.34.118
Port: <unset> 2181/TCP
TargetPort: 2181/TCP
NodePort: <unset> 31169/TCP
Endpoints: 172.17.0.4:2181
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
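If you want to confirm that ZooKeeper actually answers on that NodePort, one way (assuming nc is installed and the srvr four-letter command is whitelisted, which it is by default in ZooKeeper 3.5) is:
# 31169 is the NodePort from the describe output above; yours will differ.
$ echo srvr | nc $(minikube ip) 31169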
Ok! Time to create a client. This time there is more configuration needed, so
instead of running kubectl create deployment with just an image, we'll write
some proper configuration. Save the following as deployments/zkclient.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zkclient
spec:
  selector:
    matchLabels:
      app: zkclient
  replicas: 1
  template:
    metadata:
      labels:
        app: zkclient
    spec:
      containers:
      - name: zkclient
        image: relexdevday/zapp:0.6.0.0
        args: ["--zkHost", "single-zk", "--zkPort", "2181", "--port", "8080"]
Let's try it out:
$ kubectl apply -f deployments/zkclient.yaml
deployment.apps/zkclient created
Let's also expose port 8080 to our host, so that we can use it in a browser:
$ kubectl expose deployment zkclient --type=NodePort --port=8080
service/zkclient exposed
$ minikube service zkclient --url
http://192.168.39.207:32284
$ curl http://192.168.39.207:32284
<!DOCTYPE HTML><html><head><link href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.5/css/bulma.min.css" type="text/css" rel="stylesheet"></head><body><div><div class="container header"><section class="hero"><div class="container"><p class="title">Zookeeper</p></div></section></div><section class="section"><div class="container">Page has been requested <strong>13</strong> times</div></section><div class="footer"></div></div></body></html>
Now we have one client (our zapp) talking to one Zookeeper Server.
What if we wanted more clients? Let's just change the replicas: 1 to replicas: 3 and run apply again.
$ kubectl apply -f deployments/zkclient.yaml
deployment.apps/zkclient configured
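As an aside, the same scaling can also be done imperatively without touching the file, though in this workshop we stick to the declarative apply:
$ kubectl scale deployment zkclient --replicas=3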
Let's see the pods:
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
single-zk-59d7d6f9b7-8wzhp   1/1     Running   0          87m
zkclient-86cd76bd79-dwd26    1/1     Running   0          7m40s
zkclient-86cd76bd79-q77bb    1/1     Running   0          40s
zkclient-86cd76bd79-rdj2d    1/1     Running   0          40s
Since our service is tied to the deployment and not to a single pod, our URL is still working, and this time it acts as a load balancer between our different clients.
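To convince yourself, you can hit the service a few times in a row and watch the counter; a rough check (assuming a POSIX shell) is:
$ URL=$(minikube service zkclient --url)
$ for i in 1 2 3 4 5; do curl -s "$URL" | grep -o 'requested <strong>[0-9]*</strong>'; done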
Let's ramp it up!
NB: We're not going to be at the same level as https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/
Let's define a deployment for a 3-server ZooKeeper cluster like we did before with docker-compose. Here's a fragment for the first server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper1
  template:
    metadata:
      labels:
        app: zookeeper1
        group: zk-cluster
    spec:
      containers:
      - name: zookeeper1
        image: zookeeper
        ports:
        - name: zkp1
          containerPort: 2181
        - name: zkp2
          containerPort: 2888
        - name: zkp3
          containerPort: 3888
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: "server.1=0.0.0.0:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181"
Note that we define two labels: one (app) for this deployment, the other (group) for the whole cluster. This will be useful for defining services later on.
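For example, once the server deployments exist, the labels let you select pods at either granularity (just an illustration of the label selector flag):
$ kubectl get pods -l app=zookeeper1    # only the first server's pod
$ kubectl get pods -l group=zk-cluster  # every server in the cluster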
Speaking of services, here's a fragment for a service targeting this first server:
apiVersion: v1
kind: Service
metadata:
  name: zk1
spec:
  selector:
    app: zookeeper1
  ports:
  - protocol: TCP
    port: 2181
    name: zkp1
  - protocol: TCP
    port: 2888
    name: zkp2
  - protocol: TCP
    port: 3888
    name: zkp3
We'll leave the definitions for the two other servers as an exercise.
Once you're done, there's another service we can define: one that will talk to any zookeeper server in the cluster:
apiVersion: v1
kind: Service
metadata:
  name: zk-all
spec:
  selector:
    group: zk-cluster
  ports:
  - protocol: TCP
    port: 2181
    name: zkp1
  - protocol: TCP
    port: 2888
    name: zkp2
  - protocol: TCP
    port: 3888
    name: zkp3
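Once you've written the remaining server deployments and services, apply them all; the filenames below are just placeholders for whatever you used (applying a whole directory with kubectl apply -f <dir> also works):
$ kubectl apply -f zk1.yaml -f zk2.yaml -f zk3.yaml -f zk-all-service.yaml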
We can do the same dance for the clients:
Create a deployment for the client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zkclient
spec:
  selector:
    matchLabels:
      app: zkclient
  replicas: 1
  template:
    metadata:
      labels:
        app: zkclient
    spec:
      containers:
      - name: zkclient
        image: relexdevday/zapp:0.6.0.0
        ports:
        - name: zappp
          containerPort: 8080
        args: ["--zkHost", "zk-all", "--zkPort", "2181", "--port", "8080"]
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
Note that in this example, we are targeting the zk-all service, which refers to any of the 3 ZooKeeper pods. That is fine, since they should provide the same information anyway.
Next we can create a service to point to the zapp:
apiVersion: v1
kind: Service
metadata:
  name: zkclient
spec:
  selector:
    app: zkclient
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: zappp
Note: we are using the type NodePort in order to advertise this service by opening a port on the node. Minikube will be able to route this for us from outside of the VM (in which the master node is running).
We can submit these to the K8S cluster using apply:
$ kubectl apply -f zk-client-deployment.yaml -f zk-client-service.yaml
Let's ask minikube for the URL of this service:
$ minikube service zkclient --url
http://192.168.39.207:31920
If we curl this, we should have access to the application!
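For example, with the URL printed above (your NodePort will almost certainly differ):
$ curl http://192.168.39.207:31920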
We can add the following liveness and readiness probes to our zapp deployment, as the small app supports them (nest these under the container entry, at the same level as args):
livenessProbe:
  httpGet:
    path: /health
    port: zappp
readinessProbe:
  httpGet:
    path: /health/ready
    port: zappp
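After re-applying the deployment, the probes should show up in the pod template; a quick way to check (a sketch, the exact output formatting may differ) is:
$ kubectl apply -f zk-client-deployment.yaml
$ kubectl describe deployment zkclient | grep -i -E 'liveness|readiness'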