
Certified Kubernetes Application Developer

Kubernetes Playground

Create a namespace called 'mynamespace' and a pod with image nginx called nginx on this namespace

show

kubectl create namespace mynamespace
kubectl run nginx --image=nginx --restart=Never -n mynamespace
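
A quick sanity check that both resources exist:

kubectl get ns mynamespace
kubectl get po nginx -n mynamespace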

Create the pod that was just described using YAML

show

Easily generate YAML with:

kubectl run nginx --image=nginx --restart=Never -n mynamespace --dry-run -o yaml > pod.yaml
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml

Create a busybox pod (using YAML) that runs the command "env". Run it and see the output

show

kubectl run busybox --image=busybox --command --restart=Never -it -- env # -it will help in seeing the output
# or, just run it without -it
kubectl run busybox --image=busybox --command --restart=Never -- env
# and then, check its logs
kubectl logs busybox

Get the YAML for a new namespace called 'myns' without creating it

show

kubectl create namespace myns -o yaml --dry-run

Get the YAML for a new ResourceQuota called 'myrq' without creating it

show

kubectl create resourcequota myrq -o yaml --dry-run

Get pods on all namespaces

show

kubectl get po --all-namespaces

Create a pod with image nginx called nginx and allow traffic on port 80

show

kubectl run nginx --image=nginx --restart=Never --port=80

Change pod's image to nginx:1.7.1. Observe that the pod will be killed and recreated as soon as the image gets pulled

show

# kubectl set image TYPE NAME CONTAINER_NAME=IMAGE_NAME:TAG
kubectl set image po nginx nginx=nginx:1.7.1
kubectl describe po nginx # you will see an event 'Container will be killed and recreated'
kubectl get po nginx -w # watch it

Get the pod's ip, use a temp busybox image to wget its '/'

show

kubectl get po -o wide # get the IP, will be something like '10.1.1.131'
# create a temp busybox pod
kubectl run busybox --image=busybox --rm -it --restart=Never -- sh
# run wget on specified IP:Port
wget -O- 10.1.1.131:80
exit
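
Alternatively, do it in one shot, without an interactive shell (a sketch, reusing the example IP from above):

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.1.1.131:80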

Get this pod's YAML without cluster specific information

show

kubectl get po nginx -o yaml --export

Get information about the pod, including details about potential issues (e.g. pod hasn't started)

show

kubectl describe po nginx

Get pod logs

show

kubectl logs nginx

If pod crashed and restarted, get logs about the previous instance

show

kubectl logs nginx -p

Connect to the nginx pod

show

kubectl exec -it nginx -- /bin/sh

Create a busybox pod that echoes 'hello world' and then exits

show

kubectl run busybox --image=busybox -it --restart=Never -- echo 'hello world'
# or
kubectl run busybox --image=busybox -it --restart=Never -- /bin/sh -c 'echo hello world'

Do the same, but have the pod deleted automatically when it's completed

show

kubectl run busybox --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo hello world'
kubectl get po # nowhere to be found :)

Create an nginx pod and set an env value as 'var1=val1'. Check the env value existence within the pod

show

kubectl run nginx --image=nginx --restart=Never --env=var1=val1
# then
kubectl exec -it nginx -- env
# or
kubectl describe po nginx | grep val1

# or

kubectl run busybox --restart=Never --image=busybox --env=var1=val1 -it --rm -- env

Multi-container Pods (10%)

Create a Pod with two containers, both with image busybox and command "echo hello; sleep 3600". Connect to the second container and run 'ls'

show

The easiest way is to create a pod with a single container and save its definition in a YAML file:

kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run -- /bin/sh -c 'echo hello;sleep 3600' > pod.yaml
vi pod.yaml

Copy/paste the container-related values, so your final YAML contains the following two containers (make sure the containers have different names):

containers:
  - args:
    - /bin/sh
    - -c
    - echo hello;sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
  - args:
    - /bin/sh
    - -c
    - echo hello;sleep 3600
    image: busybox
    name: busybox2
kubectl create -f pod.yaml
# Connect to the busybox2 container within the pod
kubectl exec -it busybox -c busybox2 -- /bin/sh
ls
exit
# you can do some cleanup
kubectl delete po busybox

Pod design (20%)

Labels and annotations

Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1

show

kubectl run nginx1 --image=nginx --restart=Never --labels=app=v1
kubectl run nginx2 --image=nginx --restart=Never --labels=app=v1
kubectl run nginx3 --image=nginx --restart=Never --labels=app=v1

Show all labels of the pods

show

kubectl get po --show-labels

Change the labels of pod 'nginx2' to be app=v2

show

kubectl label po nginx2 app=v2 --overwrite

Get the label 'app' for the pods

show

kubectl get po -L app

Get only the 'app=v2' pods

show

kubectl get po -l app=v2
# or
kubectl get po -l 'app in (v2)'

Remove the 'app' label from the pods we created before

show

kubectl label po nginx1 nginx2 nginx3 app-
# or
kubectl label po nginx{1..3} app-

Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'

show

We can use the 'nodeSelector' property on the Pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
  nodeSelector: # add this
    accelerator: nvidia-tesla-p100 # the selection label

You can easily find out where in the YAML it should be placed by:

kubectl explain po.spec
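
On a plain test cluster the node probably won't have that label yet; you can add it yourself (NODE_NAME is a placeholder for one of your node names):

kubectl get nodes --show-labels
kubectl label nodes NODE_NAME accelerator=nvidia-tesla-p100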

Annotate pods nginx1, nginx2, nginx3 with the value "description='my description'"

show

kubectl annotate po nginx1 nginx2 nginx3 description='my description'

Check the annotations for pod nginx1

show

kubectl describe po nginx1 | grep -i 'annotations'

Remove the annotations for these three pods

show

kubectl annotate po nginx{1..3} description-

Remove these pods to have a clean state in your cluster

show

kubectl delete po nginx{1..3}

Deployments

Create a deployment with image nginx:1.7.8, 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment)

show

kubectl run nginx --image=nginx:1.7.8 --replicas=2 --port=80

View the YAML of this deployment

show

kubectl get deploy nginx --export -o yaml

View the YAML of the replica set that was created by this deployment

show

kubectl describe deploy nginx # you'll see the name of the replica set on the Events section and in the 'NewReplicaSet' property
# you could also just do kubectl get rs
kubectl get rs nginx-7bf7478b77 --export -o yaml

Get the YAML for one of the pods

show

kubectl get po # get all the pods
kubectl get po nginx-7bf7478b77-gjzp8 -o yaml --export

Check how the deployment rollout is going

show

kubectl rollout status deploy nginx

Update the nginx image to nginx:1.7.9

show

kubectl set image deploy nginx nginx=nginx:1.7.9
# alternatively...
kubectl edit deploy nginx # change the .spec.template.spec.containers[0].image

The syntax of the 'kubectl set image' command is kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N [options]

Check the rollout history and confirm that the replicas are OK

show

kubectl rollout history deploy nginx
kubectl get deploy nginx
kubectl get rs # check that a new replica set has been created
kubectl get po

Undo the latest rollout and verify that new pods have the old image (nginx:1.7.8)

show

kubectl rollout undo deploy nginx
# wait a bit
kubectl get po # select one 'Running' Pod
kubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.7.8

Deliberately update the deployment with a wrong image, nginx:1.91

show

kubectl set image deploy nginx nginx=nginx:1.91
# or
kubectl edit deploy nginx
# change the image to nginx:1.91
# vim tip: type (without quotes) '/image' and Enter, to navigate quickly

Verify that something's wrong with the rollout

show

kubectl rollout status deploy nginx
# or
kubectl get po # you'll see 'ErrImagePull'

Return the deployment to the second revision (number 2) and verify the image is nginx:1.7.9

show

kubectl rollout undo deploy nginx --to-revision=2
kubectl describe deploy nginx | grep Image:
kubectl rollout status deploy nginx # Everything should be OK

Check the details of the third revision (number 3)

show

kubectl rollout history deploy nginx --revision=3 # You'll also see the wrong image displayed here

Scale the deployment to 5 replicas

show

kubectl scale deploy nginx --replicas=5
kubectl get po
kubectl describe deploy nginx

Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%

show

kubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80
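
Verify that the horizontal pod autoscaler was created:

kubectl get hpa nginx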

Pause the rollout of the deployment

show

kubectl rollout pause deploy nginx

Update the image to nginx:1.9.1 and check that there's nothing going on, since we paused the rollout

show

kubectl set image deploy nginx nginx=nginx:1.9.1
# or
kubectl edit deploy nginx
# change the image to nginx:1.9.1
kubectl rollout history deploy nginx # no new revision

Resume the rollout and check that the nginx:1.9.1 image has been applied

show

kubectl rollout resume deploy nginx
kubectl rollout history deploy nginx
kubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision

Delete the deployment and the horizontal pod autoscaler you created

show

kubectl delete deploy nginx
kubectl delete hpa nginx

Jobs

Create a job with image perl that runs default command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"

show

kubectl run pi --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(2000)'

Wait till it's done, get the output

show

kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big)
kubectl get po # get the pod name
kubectl logs pi-**** # get the digits of pi
kubectl delete job pi

Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'

show

kubectl run busybox --image=busybox --restart=OnFailure -- /bin/sh -c 'echo hello;sleep 30;echo world'

Follow the logs for the pod (you'll wait for 30 seconds)

show

kubectl get po # find the job pod
kubectl logs busybox-ptx58 -f # follow the logs

See the status of the job, describe it and see the logs

show

kubectl get jobs
kubectl describe jobs busybox
kubectl logs job/busybox

Delete the job

show

kubectl delete job busybox

Create the same job, make it run 5 times, one after the other. Verify its status and delete it

show

kubectl run busybox --image=busybox --restart=OnFailure --dry-run -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
vi job.yaml

Add job.spec.completions=5

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  completions: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}
kubectl create -f job.yaml

Verify that it has been completed:

kubectl get job busybox -w # will take two and a half minutes
kubectl delete jobs busybox

Create the same job, but make it run 5 parallel times

show

vi job.yaml

Add job.spec.parallelism=5

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  parallelism: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}
kubectl create -f job.yaml
kubectl get jobs

It will take some time for the parallel jobs to finish (>= 30 seconds)

kubectl delete job busybox

Cron jobs

Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output

show

kubectl run busybox --image=busybox --restart=OnFailure --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster'
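
For reference, the declarative equivalent looks roughly like this (a sketch; depending on your cluster version the apiVersion is batch/v1beta1, batch/v2alpha1 or, on recent clusters, batch/v1):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: busybox
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: busybox
            name: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure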

See its logs and delete it

show

kubectl get cj
kubectl get jobs --watch
kubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job
kubectl logs busybox-1529745840-m867r
# Bear in mind that Kubernetes will create a new job (and pod) for each scheduled run of the cron job
kubectl delete cj busybox

Configuration (18%)

ConfigMaps

Create a configmap named config with values foo=lala,foo2=lolo

show

kubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo

Display its values

show

kubectl get cm config -o yaml --export
# or
kubectl describe cm config

Create and display a configmap from a file

Create the file with

echo -e "foo3=lili\nfoo4=lele" > config.txt
show

kubectl create cm configmap2 --from-file=config.txt
kubectl get cm configmap2 -o yaml --export

Create and display a configmap from a .env file

Create the file with the command

echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
show

kubectl create cm configmap3 --from-env-file=config.env
kubectl get cm configmap3 -o yaml --export

Create and display a configmap from a file, giving the key 'special'

Create the file with

echo -e "var3=val3\nvar4=val4" > config3.txt
show

kubectl create cm configmap4 --from-file=special=config3.txt
kubectl describe cm configmap4
kubectl get cm configmap4 -o yaml --export

Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'

show

kubectl create cm options --from-literal=var5=val5
kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env:
    - name: option # name of the env variable
      valueFrom:
        configMapKeyRef:
          name: options # name of config map
          key: var5 # name of the entity in config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl exec -it nginx -- env | grep option # will show 'option=val5'

Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. Load this configMap as env variables into a new nginx pod

show

kubectl create configmap anotherone --from-literal=var6=val6 --from-literal=var7=val7
kubectl run --restart=Never nginx --image=nginx -o yaml --dry-run > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    envFrom: # different than previous one, that was 'env'
    - configMapRef: # different from the previous one, was 'configMapKeyRef'
        name: anotherone # the name of the config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl exec -it nginx -- env 

Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory.

show

kubectl create configmap cmvolume --from-literal=var8=val8 --from-literal=var9=val9
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # add a volumes list
  - name: myvolume # just a name, you'll reference this in the pods
    configMap:
      name: cmvolume # name of your configmap
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # your volume mounts are listed here
    - name: myvolume # the name that you specified in pod.spec.volumes.name
      mountPath: /etc/lala # the path inside your container
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl exec -it nginx -- /bin/sh
cd /etc/lala
ls # will show var8 var9
cat var8 # will show val8

SecurityContext

Create the YAML for an nginx pod that runs with the UID 101. No need to create the pod

show

kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  securityContext: # insert this line
    runAsUser: 101 # UID for the user
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added on its single container

show

kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    securityContext: # insert this line
      capabilities: # and this
        add: ["NET_ADMIN", "SYS_TIME"] # this as well
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Requests and limits

Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi

show

kubectl run nginx --image=nginx --restart=Never --requests='cpu=100m,memory=256Mi' --limits='cpu=200m,memory=512Mi'
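
These flags simply populate the container's 'resources' field; the equivalent fragment of pod.spec.containers[0] (a sketch) is:

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 200m
    memory: 512Mi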

Secrets

Create a secret called mysecret with the values password=mypass

show

kubectl create secret generic mysecret --from-literal=password=mypass
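
Note that the value is only base64-encoded, not encrypted:

kubectl get secret mysecret -o yaml # the data section shows password: bXlwYXNz
echo -n mypass | base64 # prints bXlwYXNz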

Create a secret called mysecret2 that gets key/value from a file

Create a file called username with the value admin:

echo admin > username
show

kubectl create secret generic mysecret2 --from-file=username

Get the value of mysecret2

show

kubectl get secret mysecret2 -o yaml --export
echo YWRtaW4K | base64 -d # shows 'admin'

Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo

show

kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # specify the volumes
  - name: foo # this name will be used for reference inside the container
    secret: # we want a secret
      secretName: mysecret2 # name of the secret - this must already exist on pod creation
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # our volume mounts
    - name: foo # name on pod.spec.volumes
      mountPath: /etc/foo #our mount path
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl exec -it nginx -- /bin/bash
ls /etc/foo  # shows username
cat /etc/foo/username # shows admin

Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME'

show

kubectl delete po nginx
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env: # our env variables
    - name: USERNAME # asked name
      valueFrom:
        secretKeyRef: # secret reference
          name: mysecret2 # our secret's name
          key: username # the key of the data in the secret
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl exec -it nginx -- env | grep USERNAME | cut -d '=' -f 2 # will show 'admin'

ServiceAccounts

See all the service accounts of the cluster in all namespaces

show

kubectl get sa --all-namespaces

Create a new serviceaccount called 'myuser'

show

kubectl create sa myuser

Alternatively:

# let's get a template easily
kubectl get sa default -o yaml --export > sa.yaml
vim sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myuser
kubectl create -f sa.yaml

Create an nginx pod that uses 'myuser' as a service account

show

kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: myuser # we use pod.spec.serviceAccountName
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl describe pod nginx # will see that a new secret called myuser-token-***** has been mounted

Or you can set the service account directly with the kubectl run command:

kubectl run nginx --image=nginx --restart=Never --serviceaccount=myuser -o yaml --dry-run > pod.yaml
kubectl apply -f pod.yaml

Observability (18%)

Liveness and readiness probes

Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.

show

kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe: # our probe
      exec: # add this line
        command: # command definition
        - ls # ls command
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl describe pod nginx | grep -i liveness # run this to see that liveness probe works
kubectl delete -f pod.yaml

Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the period of probing would be 10 seconds. Run it, check the probe, delete it.

show

kubectl explain pod.spec.containers.livenessProbe # get the exact names
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe: 
      initialDelaySeconds: 5 # add this line
      periodSeconds: 10 # add this line as well
      exec:
        command:
        - ls
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl describe po nginx | grep -i liveness
kubectl delete -f pod.yaml

Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it.

show

kubectl run nginx --image=nginx --dry-run -o yaml --restart=Never --port=80 > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    ports:
      - containerPort: 80
    readinessProbe: # declare the readiness probe
      httpGet: # add this line
        path: / #
        port: 80 #
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
kubectl create -f pod.yaml
kubectl describe pod nginx | grep -i readiness # to see the pod readiness details
kubectl delete -f pod.yaml

Logging

Create a busybox pod that runs 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'. Check its logs

show

kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
kubectl logs busybox -f # follow the logs

Debugging

Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod

show

kubectl run busybox --restart=Never --image=busybox -- /bin/sh -c 'ls /notexist'
# show that there's an error
kubectl logs busybox
kubectl describe po busybox
kubectl delete po busybox

Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period

show

kubectl run busybox --restart=Never --image=busybox -- notexist
kubectl logs busybox # will bring nothing! container never started
kubectl describe po busybox # in the events section, you'll see the error
# also...
kubectl get events | grep -i error # you'll see the error here as well
kubectl delete po busybox --force --grace-period=0

Get CPU/memory utilization for nodes (heapster must be running)

show

kubectl top nodes
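
The same works for pods (again, this needs heapster or metrics-server running):

kubectl top pods
kubectl top pods --all-namespaces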

Services and Networking (13%)

Create a pod with image nginx called nginx and expose its port 80

show

kubectl run nginx --image=nginx --restart=Never --port=80 --expose
# observe that both a pod and a service are created

Confirm that ClusterIP has been created. Also check endpoints

show

kubectl get svc nginx # services
kubectl get ep # endpoints

Get the service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget

show

kubectl get svc nginx # get the IP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never -- sh
wget -O- IP:80
exit

Convert the ClusterIP to NodePort and find the NodePort port. Hit it using Node's IP. Delete the service and the pod

show

kubectl edit svc nginx
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-06-25T07:55:16Z
  name: nginx
  namespace: default
  resourceVersion: "93442"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 191e3dac-784d-11e8-86b1-00155d9f663c
spec:
  clusterIP: 10.97.242.220
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort # change cluster IP to nodeport
status:
  loadBalancer: {}
kubectl get svc
# result:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        1d
nginx        NodePort    10.107.253.138   <none>        80:31931/TCP   3m
wget -O- NODE_IP:31931 # if you're using Kubernetes with Docker for Windows/Mac, try 127.0.0.1

Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)

show

kubectl run foo --image=dgkanatsios/simpleapp --labels=app=foo --port=8080 --replicas=3

Get the pod IPs. Create a temp busybox pod and try hitting them on port 8080

show

kubectl get pods -l app=foo -o wide # 'wide' will show pod IPs
kubectl run busybox --image=busybox --restart=Never -it --rm -- sh
wget -O- POD_IP:8080 # do not try with pod name, will not work
# try hitting all IPs to confirm that hostname is different
exit

Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints

show

kubectl expose deploy foo --port=6262 --target-port=8080
kubectl get service foo # you will see ClusterIP as well as port 6262
kubectl get ep foo # you will see the IPs of the three replica pods, listening on port 8080

Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster

show

kubectl get svc # get the foo service ClusterIP
kubectl run busybox --image=busybox -it --rm --restart=Never -- sh
wget -O- foo:6262 # DNS works! run it many times, you'll see different pods responding
wget -O- SERVICE_CLUSTER_IP:6262 # ClusterIP works as well
# you can also kubectl logs on deployment pods to see the container logs
kubectl delete svc foo
kubectl delete deploy foo

Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: true' can access the deployment and apply it

show

kubectl run nginx --image=nginx --replicas=2 --port=80 --expose
kubectl describe svc nginx # see the 'run=nginx' selector for the pods
# or
kubectl get svc nginx -o yaml --export

vi policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx # pick a name
spec:
  podSelector:
    matchLabels:
      run: nginx # selector for the pods
  ingress: # allow ingress traffic
  - from:
    - podSelector: # from pods
        matchLabels: # with this label
          access: 'true' # 'true' needs quotes in YAML, otherwise it would be interpreted as a boolean

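Apply and test it (a sketch; it assumes your cluster's network plugin actually enforces NetworkPolicies, e.g. Calico or Weave Net):

kubectl create -f policy.yaml
# this should time out, since the pod doesn't have the 'access: true' label
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- -T 2 nginx
# this should return the nginx welcome page
kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=true -- wget -O- -T 2 nginx
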
State Persistence (8%)

Define volumes

Create a busybox pod with two containers, each one with the image busybox and running the 'sleep 3600' command. Make both containers mount an emptyDir at '/etc/foo'. Connect to the second busybox container, write the first column of the '/etc/passwd' file to '/etc/foo/passwd'. Connect to the first busybox container and write the '/etc/foo/passwd' file to standard output. Delete the pod.

show

This question is probably a better fit for the 'Multi-container-pods' section but I'm keeping it here as it will help you get acquainted with state

Easiest way to do this is to create a template pod with:

kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run -- /bin/sh -c 'sleep 3600' > pod.yaml
vi pod.yaml

Copy/paste the container definition and add the lines that end with a comment:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    name: busybox2 # don't forget to change the name during copy paste, must be different from the first container's name!
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  volumes: #
  - name: myvolume #
    emptyDir: {} #
status: {}

Connect to the second container:

kubectl exec -it busybox -c busybox2 -- /bin/sh
cat /etc/passwd | cut -f 1 -d ':' > /etc/foo/passwd 
cat /etc/foo/passwd # confirm that stuff has been written successfully
exit

Connect to the first container:

kubectl exec -it busybox -c busybox -- /bin/sh
mount | grep foo # confirm the mounting
cat /etc/foo/passwd
exit
kubectl delete po busybox

Create a PersistentVolume of 10GB, called 'myvolume'. Make it have accessMode of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. Save it on pv.yaml, add it to the cluster. Show the PersistentVolumes that exist on the cluster

show

vi pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: myvolume
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  hostPath:
    path: /etc/foo

Show the PersistentVolumes:

kubectl create -f pv.yaml
# will have status 'Available'
kubectl get pv

Create a PersistentVolumeClaim for this storage class, called mypvc, a request of 4GB and save it on pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster

show

vi pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  storageClassName: normal # must match the PV's storageClassName for the claim to bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

Create it on the cluster:

kubectl create -f pvc.yaml

Show the PersistentVolumeClaims and PersistentVolumes:

kubectl get pvc # will show as 'Bound'
kubectl get pv # will show as 'Bound' as well

Create a busybox pod with command 'sleep 3600', save it on pod.yaml. Mount the PersistentVolumeClaim to '/etc/foo'. Connect to the 'busybox' pod, and copy the '/etc/passwd' file to '/etc/foo'

show

Create a skeleton pod:

kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run -- /bin/sh -c 'sleep 3600' > pod.yaml
vi pod.yaml

Add the lines that finish with a comment:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    volumeMounts: #
    - name: myvolume #
      mountPath: /etc/foo #
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes: #
  - name: myvolume #
    persistentVolumeClaim: #
      claimName: mypvc #
status: {}

Create the pod:

kubectl create -f pod.yaml

Connect to the pod and copy '/etc/passwd' to '/etc/foo/passwd':

kubectl exec busybox -it -- cp /etc/passwd /etc/foo/passwd

Create a second pod which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete pods to cleanup

show

Create the second pod, called busybox2:

vim pod.yaml
# change 'name: busybox' to 'name: busybox2'
kubectl create -f pod.yaml
kubectl exec busybox2 -- ls /etc/foo # will show 'passwd'
# cleanup
kubectl delete po busybox busybox2

Create a busybox pod with 'sleep 3600' as arguments. Copy '/etc/passwd' from the pod to your local folder

show

kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl cp busybox:/etc/passwd . # kubectl cp command
cat passwd
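
The reverse direction works too, copying a local file into the pod (the target path /tmp/passwd is just an example):

kubectl cp ./passwd busybox:/tmp/passwd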
