- Docker Engine 19.03+ (tested successfully with Docker4Mac or Docker Linux)
- `k3d` binary installed on your PATH (https://github.com/rancher/k3d)
- `kubectl` binary installed on your PATH (https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- A bash prompt
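You can quickly verify that the prerequisites are available before starting (a sanity check, not part of the original walkthrough):

```bash
# Verify each prerequisite is installed and on the PATH
docker version --format '{{.Server.Version}}'   # should print 19.03 or later
k3d --version
kubectl version --client
```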
- Using the `k3d` command, create a Kubernetes cluster with the following properties:
  - 1 master node and 3 worker nodes,
  - ports 80 and 443 published on the `localhost` interface of your host machine.

```bash
k3d create --publish="80:80" --publish="443:443" --workers="3"
```
- 4 Docker containers are now running: 1 for the master node and 3 for the worker nodes:

```bash
docker ps
```
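If other containers are running on your machine, you can narrow the listing to the ones managed by k3d (k3d v1 prefixes its container names with `k3d-`):

```bash
# List only the k3d containers (1 server + 3 workers)
docker ps --filter "name=k3d" --format "table {{.Names}}\t{{.Status}}"
```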
- Use the configuration generated by `k3d` for `kubectl`, by following the instructions from the `k3d` output:

```bash
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info
```
You should see the following output:
```
Kubernetes master is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
- Check the cluster topology:

```bash
kubectl get nodes
# You should see 1 master and 3 workers
```
- Check the pods running, cluster-wide. The `traefik` pod is the cluster's Ingress Controller, and the four `svclb-traefik` pods are the k3s service load balancer (a DaemonSet, hence one pod per node):

```bash
$ kubectl get pod --all-namespaces
NAMESPACE     NAME                         READY   STATUS      RESTARTS   AGE
kube-system   coredns-b7464766c-mtj2k      1/1     Running     0          3m45s
kube-system   helm-install-traefik-bw2tg   0/1     Completed   0          3m46s
kube-system   svclb-traefik-9fknp          2/2     Running     0          3m27s
kube-system   svclb-traefik-ppffn          2/2     Running     0          3m27s
kube-system   svclb-traefik-vxdlk          2/2     Running     0          3m27s
kube-system   svclb-traefik-z7wfh          2/2     Running     0          3m27s
kube-system   traefik-56688c4464-r8ctk     1/1     Running     0          3m27s
```
- Open the URL http://localhost: you should see a `404 page not found` page. This means an HTTP server (Traefik, with no route configured yet) is answering in the cluster on port 80.
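You can run the same check from a terminal; the `-i` flag prints the status line so you can confirm the 404:

```bash
curl -i http://localhost
# Expect an "HTTP/1.1 404 Not Found" status line
```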
- Create the namespace named `whoami` (a sketch of the manifest follows the session below):
```bash
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   5m56s
kube-node-lease   Active   5m56s
kube-public       Active   5m56s
kube-system       Active   5m56s
# No "whoami" namespace yet

$ kubectl apply -f ./whoami-namespace.yml
namespace/whoami created

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   6m46s
kube-node-lease   Active   6m46s
kube-public       Active   6m46s
kube-system       Active   6m46s
whoami            Active   14s
# New "whoami" namespace
```
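The manifest file itself is not shown in this walkthrough; a minimal sketch of what `whoami-namespace.yml` could contain (the actual file may differ) is:

```yaml
# Hypothetical sketch of whoami-namespace.yml: the simplest Namespace object
apiVersion: v1
kind: Namespace
metadata:
  name: whoami
```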
- Create the "deployment", which should create 2 pods (because 2 replicas).
The "whoami" container is a web application listening on its port
80
which respond to HTTPGET
requests by responding (body) the headers of the request:
```bash
$ kubectl get pod --namespace=whoami
No resources found in whoami namespace.

$ kubectl apply -f ./whoami-deployment.yml
deployment.extensions/whoami created

$ kubectl get pod --namespace=whoami
NAME                      READY   STATUS    RESTARTS   AGE
whoami-756586b9ff-h4vlq   1/1     Running   0          8s
whoami-756586b9ff-j6sqg   1/1     Running   0          8s
```
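For reference, here is a minimal sketch of what `whoami-deployment.yml` could look like. The image name is an assumption (`containous/whoami`, a common HTTP echo server), and the actual file may differ; in particular, the `deployment.extensions/whoami` output above suggests it used the older `extensions/v1beta1` API rather than `apps/v1`:

```yaml
# Hypothetical sketch of whoami-deployment.yml (actual file may differ)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: whoami
spec:
  replicas: 2                       # the 2 pods observed above
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami  # assumed image: echoes request headers
          ports:
            - containerPort: 80     # the port the web application listens on
```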
- Expose and load-balance the 2 replicated pods of the deployment by creating a new "service" (a sketch of the manifest follows the session below):
```bash
$ kubectl get service --namespace=whoami
No resources found in whoami namespace.

$ kubectl apply -f ./whoami-service.yml
service/whoami created

$ kubectl get service --namespace=whoami
NAME     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
whoami   ClusterIP   <IP>         <none>        80/TCP    13s
```
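A minimal sketch of what `whoami-service.yml` could contain, assuming the `app: whoami` pod label from the deployment sketch above (the actual file may differ):

```yaml
# Hypothetical sketch of whoami-service.yml (actual file may differ)
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
spec:
  type: ClusterIP      # the TYPE shown by kubectl above
  selector:
    app: whoami        # routes to the 2 deployment pods
  ports:
    - port: 80
      targetPort: 80
```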
- Spawn a new pod in the same namespace, with an interactive shell, to request the "service" and see the load balancing happen (look at the `IP:` field in the response body):
```bash
$ kubectl run --tty -i --rm --image=alpine --namespace=whoami -- sh
(...)
/ # apk add --no-cache curl   # Install curl
(...)
/ # curl http://<IP>:80       # Where <IP> is the "cluster IP" of the service
(...)
/ # curl http://<IP>:80       # Run it again: the "IP:" field should change
(...)
/ # exit
(...)
deployment.apps "sh" deleted
```
- Publish the "service" to outside the cluster by creating a new "Ingress", that will be implemented by Traefik, the Ingress Controller of the cluster:
```bash
$ kubectl get ingress --namespace=whoami
No resources found in whoami namespace.

$ kubectl apply -f whoami-ingress.yml
ingress.extensions/whoami created

$ kubectl get ingress --namespace=whoami
NAME     HOSTS       ADDRESS      PORTS   AGE
whoami   localhost   172.30.0.2   80      7s
```
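A minimal sketch of what `whoami-ingress.yml` could contain; the `extensions/v1beta1` API matches the `ingress.extensions/whoami` output above, while current clusters would use `networking.k8s.io/v1` with a slightly different schema (the actual file may differ):

```yaml
# Hypothetical sketch of whoami-ingress.yml (actual file may differ)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami
  namespace: whoami
spec:
  rules:
    - host: localhost           # the HOSTS value shown by kubectl above
      http:
        paths:
          - path: /
            backend:
              serviceName: whoami   # the service created earlier
              servicePort: 80
```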
- You can now open the URL http://localhost:80 and access the web service. If you reload the page, you'll see the same "load balancing" behavior as earlier with `curl`.
- Destroy the cluster with `k3d delete` (then check with `docker ps` that the k3d containers are gone).
Good job on this!!