Single Pod of image httpd:2.4.41-alpine in Namespace default. Pod name is pod1 and container is pod1-container. Pod scheduled on the master node, no new labels to be added.
alias ksn='k config set-context --current --namespace'
ksn default #changed to default ns
alias kr='k run --dry-run=client -o yaml --image'
# get the master node name via k get nodes, then set it as nodeName in the Pod spec and rename the container to pod1-container
kr httpd:2.4.41-alpine pod1 | sed 's/^    name: pod1$/    name: pod1-container/' | sed "s/spec:/spec:\n  nodeName: $(k get nodes | grep master | awk '{print $1}')/" > 2.yaml
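The generated 2.yaml should end up looking roughly like this (a sketch; the node name below is an assumed example, take the real one from k get nodes):
# 2.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  nodeName: cluster1-master1 # assumed master node name
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container # renamed container
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}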
alias kaf='k apply -f'
kaf 2.yaml
alias kgp='k get pod'
kgp
kgp -o wide
Use kubectl config use-context k8s-c1-H. There are 2 o3db-* Pods in Namespace project-c13, scale them down to 1 replica.
k config use-context k8s-c1-H
ksn project-c13 # the o3db Pods live in this namespace
k get sts # sts is short for statefulset, replicas currently not set to 1
k scale sts o3db --replicas 1
k get sts # now shows 1
k config use-context k8s-c1-H
ksn default
kr nginx:1.16.1-alpine ready-if-service-ready > 4.yaml
# add the following liveness and readiness probes to the container in 4.yaml
    livenessProbe:
      exec:
        command:
        - echo
        - "true"
    readinessProbe:
      exec:
        command:
        - /bin/sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80' # wget is available in the alpine image
k get svc # to get all services
k describe svc service-am-i-ready
alias kt='k run -it --rm tmp --image=nginx:alpine --restart=Never '
alias > a
kt -- wget -T2 -O- http://service-am-i-ready:80
kr nginx:1.16.1-alpine cross-server-ready --labels=id=cross-server-ready > 4.1.yaml
kaf 4.1.yaml
If you want to do it in 1 line:
alias ka='k apply -f'
kr nginx:1.16.1-alpine cross-server-ready --labels=id=cross-server-ready | ka -
# ↑ this is the one-liner
kubectl config use-context k8s-c1-H
alias kgpsbts='k get pods -A --sort-by=.metadata.creationTimestamp'
echo 'kubectl get pods -A --sort-by=.metadata.creationTimestamp' > /opt/course/5/find_pods.sh
echo 'kubectl get pods -A --sort-by=.metadata.uid' > /opt/course/5/find_pods_uid.sh
sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh
kubectl config use-context k8s-c1-H
k get nodes
echo "k top get nodes" > /opt/course/7/node.sh
echo "k top get nodes --containers=true" > /opt/course/7/pod.sh
sh /opt/course/7/node.sh
sh /opt/course/7/pod.sh
kubectl config use-context k8s-c1-H
ksn project-tiger
alias kcd='k create deployment -n project-tiger deploy-important --dry-run=client -o yaml --image' # note: create deployment has no --labels flag, the id label is added in the yaml below
kcd nginx:1.17.6-alpine > 12.yaml
k get nodes
k describe node ####MASTER
following changes are needed in 12.yaml
1. replicas: 3
2. add the label id: very-important under metadata.labels and template.metadata.labels
3. add a second container:
   - image: kubernetes/pause
     name: container2
4. add pod anti-affinity (so the Pods avoid sharing a node) under spec.template.spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: id
                operator: In
                values:
                - very-important
            topologyKey: kubernetes.io/hostname
kaf 12.yaml
k get deployment deploy-important
kubectl config use-context k8s-c3-CCC
kr nginx:1.16-alpine my-static-pod -n default > 21.yaml
add resource requests to the container:
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
# a static Pod is created by the kubelet from /etc/kubernetes/manifests on its node, so place the manifest there on cluster3-master1 instead of applying it with kubectl (that's why the Pod name below carries the node suffix)
ssh cluster3-master1
vim /etc/kubernetes/manifests/my-static-pod.yaml # paste the content of 21.yaml
THEN RUN THE FOLLOWING
k expose pod my-static-pod-cluster3-master1 --port=80 -n default --name=static-pod-service --type=NodePort
k get svc
k describe svc static-pod-service
k get nodes -o wide
use the internal ip to connect
curl 192.168.100.31:31716
export now="--force --grace-period 0" # k delete pod x $now
set tabstop=2
set expandtab
set shiftwidth=2
k get ns > /opt/course/1/namespaces
# /opt/course/1/namespaces
NAME STATUS AGE
default Active 150m
earth Active 76m
jupiter Active 76m
kube-public Active 150m
kube-system Active 150m
mars Active 76m
mercury Active 76m
moon Active 76m
neptune Active 76m
pluto Active 76m
saturn Active 76m
shell-intern Active 76m
sun Active 76m
venus Active 76m
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does
this into /opt/course/2/pod1-status-command.sh. The command should use kubectl.
Answer:
Change the container name in 2.yaml to pod1-container:
Then run:
Next create the requested command:
The content of the command file could look like:
Another solution would be using jsonpath:
To test the command:
Question 3 | Job
Task weight: 2%
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2
&& echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-
job and the container neb-new-job-container.
k run # help
# check the export on the very top of this document so we can use $do
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
vim 2.yaml
# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
# /opt/course/2/pod1-status-command.sh
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"
➜ sh /opt/course/2/pod1-status-command.sh
Running
Answer:
Make the required changes in the yaml:
Then to create it:
Check Job and Pods , you should see two running parallel at most but three in total:
Check history:
k -n neptune create job -h
# check the export on the very top of this document so we can use $do
k -n neptune create job neb-new-job --image=busybox:1.31.0 $do > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"
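After generating the file, edit it so it matches the task. A sketch of how /opt/course/3/job.yaml could end up (completions/parallelism, the pod label and the container name are the manual additions):
# /opt/course/3/job.yaml (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: neb-new-job
  namespace: neptune
spec:
  completions: 3 # run 3 times in total
  parallelism: 2 # 2 runs in parallel
  template:
    metadata:
      labels:
        id: awesome-job # label for each created pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 2 && echo done
        image: busybox:1.31.0
        name: neb-new-job-container # container name as requested
        resources: {}
      restartPolicy: Never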
➜ k -n neptune describe job neb-new-job
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m52s job-controller Created pod: neb-new-job-jhq2g
Normal SuccessfulCreate 2m52s job-controller Created pod: neb-new-job-vf6ts
Normal SuccessfulCreate 2m42s job-controller Created pod: neb-new-job-gm8sz
At the AGE column we can see that two Pods ran in parallel and the third one after that, just as required in the task.
Question 4 | Helm Management
Task weight: 5%
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
1. Delete release internal-issue-report-apiv1
2. Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
3. Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via
Helm-values during install
4. There seems to be a broken release, stuck in pending-install state. Find it and delete it
Answer:
Helm Chart : Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release : Installed instance of a Chart
Helm Values : Allow customising the YAML template-files in a Chart when creating a Release
1.
First we should delete the required release:
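A minimal sketch of that step (release name taken from the task):
helm -n mercury uninstall internal-issue-report-apiv1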
2.
Next we need to upgrade a release, for this we could first list the charts of the repo:
Here we see that a newer chart version 9.5.2 is available. But the task only requires us to upgrade to any newer chart version available, so
we can simply run:
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv1 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-apiv2 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv2 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
➜ helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
➜ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
➜ helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 9.5.2 1.21.1 Chart for the nginx server ...
➜ helm -n mercury upgrade internal-issue-report-apiv2 bitnami/nginx
Release "internal-issue-report-apiv2" has been upgraded. Happy Helming!
NAME: internal-issue-report-apiv2
LAST DEPLOYED: Tue Aug 31 17:40:42 2021
NAMESPACE: mercury
STATUS: deployed
REVISION: 2
TEST SUITE: None
...
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv2 mercury deployed nginx-9.5.2 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
Looking good!
INFO: Also check out helm rollback for undoing a helm rollout/upgrade
3.
Now we're asked to install a new release, with a customised values setting. For this we first list all possible value settings for the chart, we can
do this via:
Huge list, if we search in it we should find the setting replicaCount: 1 on top level. This means we can run:
If we would also need to set a value on a deeper level, for example image.debug, we could run:
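For example (a sketch, the image.debug value itself is not required by the task):
helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2 --set image.debug=true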
Install done, let's verify what we did:
We see a healthy deployment with two replicas!
4.
By default releases in pending state aren't listed, but we can show all releases to find and delete the broken one (stuck in pending-install):
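A sketch of how that could look (the name of the broken release depends on the environment):
helm -n mercury ls -a # --all also lists pending releases
helm -n mercury uninstall <broken-release-name> # delete the one stuck in pending-install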
Thank you Helm for making our lives easier! (Till something breaks)
Question 5 | ServiceAccount, Secret
Task weight: 3%
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that
belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token.
Answer:
Since K8s 1.24, Secrets won't be created automatically for ServiceAccounts any longer. But it's still possible to create a Secret manually and
attach it to a ServiceAccount by setting the correct annotation on the Secret. This was done for this task.
helm show values bitnami/apache # will show a long list of all possible value-settings
helm show values bitnami/apache | yq e # parse yaml and show with colors
➜ helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2
NAME: internal-issue-report-apache
LAST DEPLOYED: Tue Aug 31 17:57:23 2021
NAMESPACE: mercury
STATUS: deployed
REVISION: 1
TEST SUITE: None
...
If a Secret belongs to a ServiceAccount , it'll have the annotation kubernetes.io/service-account.name. Here the Secret we're looking for is
neptune-secret-1.
This shows the base64 encoded token. To get the decoded one we could pipe it manually through base64 -d or we simply do:
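A sketch using jsonpath plus base64 (Secret name as identified above):
k -n neptune get secret neptune-secret-1 -o jsonpath='{.data.token}' | base64 -d > /opt/course/5/token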
Copy the token (part under token:) and paste it using vim.
File /opt/course/5/token should contain the token:
Question 6 | ReadinessProbe
Task weight: 7%
Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat
/tmp/ready. It should initially wait 5 and periodically wait 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which will create the necessary file to be ready and then idles. Create the
Pod and confirm it starts.
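A sketch of how pod6 could be defined, assuming the yaml was generated with k run pod6 --image=busybox:1.31.0 $do --command -- sh -c "touch /tmp/ready && sleep 1d" and then extended with the probe:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    readinessProbe: # add
      exec:
        command:
        - sh
        - -c
        - cat /tmp/ready
      initialDelaySeconds: 5
      periodSeconds: 10
  restartPolicy: Always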
k -n neptune get sa # get overview
k -n neptune get secrets # shows all secrets of namespace
k -n neptune get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation
➜ k -n neptune get secret neptune-secret-1 -o yaml
apiVersion: v1
data:
...
token:
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltNWFaRmRxWkRKMmFHTnZRM0JxV0haT1IxZzFiM3BJY201SlowaEhOV3hUWmt3elFuRmFhVEZhZDJNaWZ
RLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOX
VZVzFsYzNCaFkyVWlPaUp1WlhCMGRXNWxJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVsY
0hSMWJtVXRjMkV0ZGpJdGRHOXJaVzR0Wm5FNU1tb2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIz
VnVkQzV1WVcxbElqb2libVZ3ZEhWdVpTMXpZUzEyTWlJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk
2ZFc1MExuVnBaQ0k2SWpZMlltUmpOak0yTFRKbFl6TXROREpoWkMwNE9HRTFMV0ZoWXpGbFpqWmxPVFpsTlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMm
FXTmxZV05qYjNWdWREcHVaWEIwZFc1bE9tNWxjSFIxYm1VdGMyRXRkaklpZlEuVllnYm9NNENUZDBwZENKNzh3alV3bXRhbGgtMnZzS2pBTnlQc2gtNmd1R
XdPdFdFcTVGYnc1WkhQdHZBZHJMbFB6cE9IRWJBZTRlVU05NUJSR1diWUlkd2p1Tjk1SjBENFJORmtWVXQ0OHR3b2FrUlY3aC1hUHV3c1FYSGhaWnp5NHlp
bUZIRzlVZm1zazVZcjRSVmNHNm4xMzd5LUZIMDhLOHpaaklQQXNLRHFOQlF0eGctbFp2d1ZNaTZ2aUlocnJ6QVFzME1CT1Y4Mk9KWUd5Mm8tV1FWYzBVVWF
uQ2Y5NFkzZ1QwWVRpcVF2Y3pZTXM2bno5dXQtWGd3aXRyQlk2VGo5QmdQcHJBOWtfajVxRXhfTFVVWlVwUEFpRU43T3pka0pzSThjdHRoMTBseXBJMUFlRn
I0M3Q2QUx5clFvQk0zOWFiRGZxM0Zrc1Itb2NfV
kind: Secret
...
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 0/1 ContainerCreating 0 2s
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 0/1 Running 0 7s
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 1/1 Running 0 15s
The Pod names don't reveal any information. We assume the Pod we are searching has a label or annotation with the name my-happy-shop,
so we search for it:
We see the webserver we're looking for is webserver-sat-003.
Change the Namespace to neptune, also remove the status: section, the token volume, the token volumeMount and the nodeName, else
the new Pod won't start. The final file could look as clean as this:
Then we execute:
It seems the server is running in Namespace neptune, so we can do:
Let's confirm only one is running:
This should list only one pod called webserver-sat-003 in Namespace neptune, status running.
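A sketch of those steps (pod and file names as used above):
k -n neptune create -f 7_webserver-sat-003.yaml # create the pod in its new namespace
k -n saturn delete pod webserver-sat-003 --force --grace-period 0 # remove the original
k -n neptune get pod | grep webserver-sat # confirm only one is running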
Question 8 | Deployment, Rollouts
Task weight: 4%
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer did make an update to the Deployment but the
updated version never came online. Check the Deployment history and find a revision that works, then rollback to it. Could you tell Team
Neptune what the error was so it doesn't happen again?
Answer:
➜ k -n saturn get pod
NAME READY STATUS RESTARTS AGE
webserver-sat-001 1/1 Running 0 111m
webserver-sat-002 1/1 Running 0 111m
webserver-sat-003 1/1 Running 0 111m
webserver-sat-004 1/1 Running 0 111m
webserver-sat-005 1/1 Running 0 111m
webserver-sat-006 1/1 Running 0 111m
k -n saturn describe pod # describe all pods, then manually look for it
# or do some filtering like this
k -n saturn get pod -o yaml | grep my-happy-shop -A10
k -n saturn get pod webserver-sat-003 -o yaml > 7_webserver-sat-003.yaml # export
vim 7_webserver-sat-003.yaml
# 7_webserver-sat-003.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune # new namespace here
spec:
  containers:
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i error
... Error: ImagePullBackOff
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i image
Image: ngnix:1.16.
Image ID:
Reason: ImagePullBackOff
Warning Failed 4m28s (x616 over 144m) kubelet, gke-s3ef67020-28c5-45f7--default-pool-248abd4f-s010 Error:
ImagePullBackOff
k -n neptune rollout undo deploy api-new-c32
➜ k -n neptune get deploy api-new-c32
NAME READY UP-TO-DATE AVAILABLE AGE
api-new-c32 3/3 3 3 146m
k -n neptune get rs -o wide | grep api-new-c
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml # make a copy!
vim /opt/course/9/holy-api-deployment.yaml
# /opt/course/9/holy-api-deployment.yaml
To indent multiple lines using vim you should set the shiftwidth using :set shiftwidth=2. Then mark multiple lines using Shift v and the
up/down keys.
To then indent the marked lines press > or < and to repeat the action press the . (dot) key.
Next create the new Deployment :
and confirm it's running:
Finally delete the single Pod :
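A sketch of those three steps (paths as used above):
k create -f /opt/course/9/holy-api-deployment.yaml # create the new Deployment
k -n pluto get deploy,pod | grep holy # confirm the replicas are running
k -n pluto delete pod holy-api --force --grace-period 0 # delete the single original Pod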
Question 10 | Service, Logs
Task weight: 4%
Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service
should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be
identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holy-api # name stays the same
  namespace: pluto # important
spec:
  replicas: 3 # 3 replicas
  selector:
    matchLabels:
      id: holy-api # set the correct selector
  template:
    # => from here down it's the same as the Pod's metadata: and spec: sections
    metadata:
      labels:
        id: holy-api
      name: holy-api
    spec:
      containers:
k -n pluto expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-svc # good
  namespace: pluto # great
spec:
  ports:
k -n pluto create service -h # help
k -n pluto create service clusterip -h #help
k -n pluto create service clusterip project-plt-6cc-svc --tcp 3333:80 $do
# now we would need to set the correct selector labels
➜ k -n pluto get pod,svc | grep 6cc
pod/project-plt-6cc-api 1/1 Running 0 9m42s
Yes, endpoint there! Finally we check the connection using a temporary Pod :
Great! Notice that we use the Kubernetes Namespace dns resolving (project-plt-6cc-svc.pluto) here. We could only use the Service name if
we would also spin up the temporary Pod in Namespace pluto.
And now really finally copy or pipe the html content into /opt/course/10/service_test.html.
Also the requested logs:
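A sketch of those last steps (Service port 3333 from the task; the output file for the logs isn't shown in this excerpt, so the plain logs command is listed):
k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 http://project-plt-6cc-svc.pluto:3333 > /opt/course/10/service_test.html
k -n pluto logs project-plt-6cc-api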
Question 11 | Working with Containers
Task weight: 7%
During the last monthly meeting you mentioned your strong expertise in container technology. Now the Build&Release team of department
Sun is in need of your insight knowledge. There are files to build a container image located at /opt/course/11/image. The container will run
a Golang application which outputs information to stdout. You're asked to perform the following tasks:
NOTE: Make sure to run all commands as user k8s, for docker use sudo docker
1. Change the Dockerfile. The value of the environment variable SUN_CIPHER_ID should be set to the hardcoded value 5b9c1065-e39d-
4a43-a04a-e59bcea3e03f
2. Build the image using Docker, named registry.killer.sh:5000/sun-cipher, tagged as latest and v1-docker, push these to the
registry
3. Build the image using Podman, named registry.killer.sh:5000/sun-cipher, tagged as v1-podman, push it to the registry
4. Run a container using Podman, which keeps running in the background, named sun-cipher using image
registry.killer.sh:5000/sun-cipher:v1-podman. Run the container from k8s@terminal and not root@terminal
5. Write the logs your container sun-cipher produced into /opt/course/11/logs. Then write a list of all running Podman containers into
/opt/course/11/containers
➜ k -n pluto get ep
NAME ENDPOINTS AGE
project-plt-6cc-svc 10.28.2.32:80 84m
➜ k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 32210 0 --:--:-- --:--:-- --:--:-- 32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
Dockerfile : list of commands from which an Image can be built
Image : binary file which includes all data/requirements to be run as a Container
Container : running instance of an Image
Registry : place where we can push/pull Images to/from
1.
First we need to change the Dockerfile to:
2.
Then we build the image using Docker:
There we go, built and pushed.
3.
Next we build the image using Podman. Here it's only required to create one tag. The usage of Podman is very similar (for most cases even
identical) to Docker:
# build container stage 1
FROM docker.io/library/golang:1.15.15-alpine3.
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/app .
# app container stage 2
FROM docker.io/library/alpine:3.12.
COPY --from=0 /src/bin/app app
# CHANGE NEXT LINE
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f
CMD ["./app"]
➜ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.killer.sh:5000/sun-cipher latest 409fde3c5bf9 24 seconds ago 7.76MB
registry.killer.sh:5000/sun-cipher v1-docker 409fde3c5bf9 24 seconds ago 7.76MB
...
➜ sudo docker push registry.killer.sh:5000/sun-cipher:latest
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Pushed
33e8713114f8: Pushed
latest: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739
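A sketch of the Podman steps for parts 3-5 (names and paths from the task):
podman build -t registry.killer.sh:5000/sun-cipher:v1-podman .
podman push registry.killer.sh:5000/sun-cipher:v1-podman
podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman # detached, keeps running in the background
podman logs sun-cipher > /opt/course/11/logs
podman ps > /opt/course/11/containers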
➜ podman logs sun-cipher
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 7887
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1847
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4059
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1318
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4425
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2540
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 456
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 3300
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 694
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8511
2077/03/13 06:50:44 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8162
2077/03/13 06:50:54 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 5089
Now the same for the PersistentVolumeClaim , head to the docs, copy an example and transform it into:
Next we check the status of the PVC :
        volumeMounts: # add
        - name: data # add
          mountPath: /tmp/project-data # add
k -f 12_dep.yaml create
➜ k -n earth describe pod project-earthflower-d6887f7c5-pn5wv | grep -A2 Mounts:
Mounts:
/tmp/project-data from data (rw) # there it is
/var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
# 13_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126 # name as requested
  namespace: moon # important
spec:
  accessModes:
  - ReadWriteOnce # RWO
  resources:
    requests:
      storage: 3Gi # size
  storageClassName: moon-retain # uses our new storage class
k -f 13_pvc.yaml create
➜ k -n moon get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
moon-pvc-126 Pending moon-retain 2m57s
This confirms that the PVC waits for the provisioner moon-retainer to be created. Finally we copy or write the event message into the
requested location:
Question 14 | Secret, Secret-Volume, Secret-Env
Task weight: 4%
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains
user=test and pass=pwd. The Secret 's content should be available in Pod secret-handler as environment variables SECRET1_USER and
SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at
/tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml. Both Secrets should only be available in
Namespace moon.
Answer
The last command would generate this yaml:
Next we create the second Secret from the given location, making sure it'll be created in Namespace moon:
We will now edit the Pod yaml:
Add the following to the yaml:
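The additions could look like this (a sketch; only the new parts are shown, the existing container fields and any existing volumes stay as they are, and secret2-volume is just a chosen volume name):
spec:
  containers:
  - ... # existing container definition
    env: # add: variables from secret1
    - name: SECRET1_USER
      valueFrom:
        secretKeyRef:
          name: secret1
          key: user
    - name: SECRET1_PASS
      valueFrom:
        secretKeyRef:
          name: secret1
          key: pass
    volumeMounts: # add: mount secret2
    - name: secret2-volume
      mountPath: /tmp/secret2
  volumes: # add
  - name: secret2-volume
    secret:
      secretName: secret2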
➜ k -n moon describe pvc moon-pvc-126
Name: moon-pvc-126
...
Status: Pending
...
Events:
...
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system
administrator
# /opt/course/13/pvc-126-reason
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system
administrator
k -n moon get pod # show pods
k -n moon create secret -h # help
k -n moon create secret generic -h # help
k -n moon create secret generic secret1 --from-literal user=test --from-literal pass=pwd
➜ k -n moon exec secret-handler -- cat /tmp/secret2/key
12345678
Team Moonpie has an nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never
completed. To complete it, please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-
moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example
using curl from a temporary nginx:alpine Pod.
Answer
Let's check the existing Pods :
Good so far, now let's create the missing ConfigMap :
This should create a ConfigMap with yaml like:
After waiting a bit or deleting/recreating (k -n moon rollout restart deploy web-moon) the Pods we should see:
Looking much better. Finally we check if the nginx returns the correct content:
Then use one IP to test the configuration:
➜ k -n moon get pod
NAME READY STATUS RESTARTS AGE
secret-handler 1/1 Running 0 55m
web-moon-847496c686-2rzj4 0/1 ContainerCreating 0 33s
web-moon-847496c686-9nwwj 0/1 ContainerCreating 0 33s
web-moon-847496c686-cxdbx 0/1 ContainerCreating 0 33s
web-moon-847496c686-hvqlw 0/1 ContainerCreating 0 33s
web-moon-847496c686-tj7ct 0/1 ContainerCreating 0 33s
➜ k -n moon describe pod web-moon-847496c686-2rzj4
...
Warning FailedMount 31s (x7 over 63s) kubelet, gke-test-default-pool-ce83a51a-p6s4 MountVolume.SetUp failed for
volume "html-volume" : configmaps "configmap-web-moon-html" not found
k -n moon create configmap -h # help
k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html # important to set the index.html key
apiVersion: v1
data:
  index.html: | # notice the key index.html, this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
  namespace: moon
➜ k -n moon get pod
NAME READY STATUS RESTARTS AGE
secret-handler 1/1 Running 0 59m
web-moon-847496c686-2rzj4 1/1 Running 0 4m28s
web-moon-847496c686-9nwwj 1/1 Running 0 4m28s
web-moon-847496c686-cxdbx 1/1 Running 0 4m28s
web-moon-847496c686-hvqlw 1/1 Running 0 4m28s
web-moon-847496c686-tj7ct 1/1 Running 0 4m28s
k -n moon get pod -o wide # get pod cluster IPs
For debugging or further checks we could find out more about the Pods volume mounts:
And check the mounted folder content:
Here it was important that the file gets the name index.html and not the original web-moon.html, which is controlled through the
ConfigMap data key.
Question 16 | Logging sidecar
Task weight: 6%
The Tech Lead of Mercury2D decided it's time for more logging, to finally fight all these missing data incidents. There is an existing container
named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called
cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-
new.yaml but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0 , which mounts the same volume and writes the content of
cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
Answer
Add a sidecar container which outputs the log file to stdout:
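A sketch of that sidecar (the volume name and mount path must match what cleaner-con already uses, which isn't shown in this excerpt, so the ones below are assumed placeholders):
      containers:
      - name: logger-con # add: sidecar container
        image: busybox:1.31.0
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"]
        volumeMounts:
        - name: logs # assumed: reuse the existing volume name
          mountPath: /var/log/cleaner # assumed: reuse the existing mount path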
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.44.0.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 80500 0 --:--:-- --:--:-- --:--:-- 157k
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Web Moon Webpage</title>
</head>
<body>
This is some great content.
</body>
➜ k -n moon describe pod web-moon-c77655cc-dc8v4 | grep -A2 Mounts:
Mounts:
/usr/share/nginx/html from html-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rvzcf (ro)
k -n mercury rollout history deploy cleaner
k -n mercury rollout history deploy cleaner --revision 1
k -n mercury rollout history deploy cleaner --revision 2
➜ k -n mercury get pod
NAME READY STATUS RESTARTS AGE
cleaner-86b7758668-9pw6t 2/2 Running 0 6s
cleaner-86b7758668-qgh4v 0/2 Init:0/1 0 1s
➜ k -n mercury get pod
NAME READY STATUS RESTARTS AGE
cleaner-86b7758668-9pw6t 2/2 Running 0 14s
cleaner-86b7758668-qgh4v 2/2 Running 0 9s
➜ k -n mercury logs cleaner-576967576c-cqtgx -c logger-con
init
Wed Sep 11 10:45:44 UTC 2099: remove random file
Wed Sep 11 10:45:45 UTC 2099: remove random file
...
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
check this out!
➜ k -n mars get all
NAME READY STATUS RESTARTS AGE
pod/manager-api-deployment-dbcc6657d-bg2hh 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-f5fv4 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-httjv 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-k98xn 1/1 Running 0 98m
pod/test-init-container-5db7c99857-htx6b 1/1 Running 0 2m19s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/manager-api-deployment 4/4 4 4 98m
deployment.apps/test-init-container 1/1 1 1 2m19s
...
Ok, let's try to connect to one pod directly:
The Pods itself seem to work. Let's investigate the Service a bit:
Endpoint inspection is also possible using:
No endpoints - No good. We check the Service yaml:
Though Pods are usually never created without a Deployment or ReplicaSet , Services always select for Pods directly. This gives great flexibility
because Pods could be created through various customized ways. After saving the new selector we check the Service again for endpoints:
Endpoints - Good! Now we try connecting again:
And we fixed it. Good to know is how to be able to use Kubernetes DNS resolution from a different Namespace. Not necessary, but we could
spin up the temporary Pod in default Namespace :
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
If you don't see a command prompt, try pressing enter.
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
curl: (28) Connection timed out after 1000 milliseconds
pod "tmp" deleted
pod mars/tmp terminated (Error)
k -n mars get pod -o wide # get cluster IP
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.0.1.14
% Total % Received % Xferd Average Speed Time Time Time Current
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k -n mars describe service manager-api-svc
Name: manager-api-svc
Namespace: mars
Labels: app=manager-api-svc
...
Endpoints: <none>
...
k -n mars get ep
k -n mars edit service manager-api-svc
# k -n mars edit service manager-api-svc
apiVersion: v1
kind: Service
metadata:
  ...
  labels:
    app: manager-api-svc
  name: manager-api-svc
  namespace: mars
  ...
spec:
  clusterIP: 10.3.244.121
  ports:
  - name: 4444-80
    port: 4444
    protocol: TCP
    targetPort: 80
  selector:
    #id: manager-api-deployment # wrong selector, needs to point to pod!
    id: manager-api-pod
  sessionAffinity: None
  type: ClusterIP
➜ k -n mars get ep
NAME ENDPOINTS AGE
manager-api-svc 10.0.0.30:80,10.0.1.30:80,10.0.1.31:80 + 1 more... 41m
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 99k 0 --:--:-- --:--:-- --:--:-- 99k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Short manager-api-svc.mars or long manager-api-svc.mars.svc.cluster.local work.
Question 19 | Service ClusterIP->NodePort
Task weight: 3%
In Namespace jupiter you'll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called
jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs
directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
Answer
First we get an overview:
(Optional) Next we check if the ClusterIP Service actually works:
The Service is working great. Next we change the Service type to NodePort and set the port:
We check if the Service type was updated:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host:
manager-api-svc
pod "tmp" deleted
pod default/tmp terminated (Error)
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc.mars:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 68000 0 --:--:-- --:--:-- --:--:-- 68000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
➜ k -n jupiter get all
NAME READY STATUS RESTARTS AGE
pod/jupiter-crew-deploy-8cdf99bc9-klwqt 1/1 Running 0 34m
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45 100 45 0 0 5000 0 --:--:-- --:--:-- --:--:-- 5000
<html><body><h1>It works!</h1></body></html>
k -n jupiter edit service jupiter-crew-svc
# k -n jupiter edit service jupiter-crew-svc
apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
  ...
spec:
  clusterIP: 10.3.245.70
  ports:
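The relevant part of the spec could end up like this (a sketch; the nodePort and type lines are the additions, port 8080 is the Service's existing port, the selector stays unchanged):
  ports:
  - nodePort: 30100 # add: the requested port on every node
    port: 8080
    protocol: TCP
    targetPort: 80
  ...
  type: NodePort # change from ClusterIP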
(Optional) And we confirm that the service is still reachable internally:
Nice. A NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external).
Next we get the internal IPs of all nodes to check the connectivity:
On which nodes is the Service reachable?
On both, even the controlplane. On which node is the Pod running?
In our case on cluster1-node1, but it could be any other worker node if more are available. Here we hopefully gained some insight into how a NodePort
Service works. Although the Pod is just running on one specific node, the Service makes it available through port 30100 on the internal and
external IP addresses of all nodes. This is at least the common/default behaviour but can depend on cluster configuration.
Question 20 | NetworkPolicy
Task weight: 9%
In Namespace venus you'll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services.
Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to
Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget http://www.google.com and wget api:2222 from a Pod of Deployment frontend.
Answer
INFO: For learning NetworkPolicies check out https://editor.cilium.io. But you're not allowed to use it during the exam.
First we get an overview:
(Optional) This is not necessary but we could check if the Services are working inside the cluster:
➜ k -n jupiter get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jupiter-crew-svc NodePort 10.3.245.70 <none> 8080:30100/TCP 3m52s
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<html><body><h1>It works!</h1></body></html>
➜ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP ...
cluster1-controlplane1 Ready control-plane 18h v1.26.0 192.168.100.11 ...
cluster1-node1 Ready <none> 18h v1.26.0 192.168.100.12 ...
➜ k -n jupiter get pod jupiter-crew-deploy-8cdf99bc9-klwqt -o yaml | grep nodeName
nodeName: cluster1-node1
➜ k -n jupiter get pod -o wide # or even shorter
➜ k -n venus get all
NAME READY STATUS RESTARTS AGE
pod/api-5979b95578-gktxp 1/1 Running 0 57s
pod/api-5979b95578-lhcl5 1/1 Running 0 57s
pod/frontend-789cbdc677-c9v8h 1/1 Running 0 57s
pod/frontend-789cbdc677-npk2m 1/1 Running 0 57s
pod/frontend-789cbdc677-pl67g 1/1 Running 0 57s
pod/frontend-789cbdc677-rjt5r 1/1 Running 0 57s
pod/frontend-789cbdc677-xgf5n 1/1 Running 0 57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/api ClusterIP 10.3.255.137 <none> 2222/TCP 37s
service/frontend ClusterIP 10.3.255.135 <none> 80/TCP 57s
...
Then we use any frontend Pod and check if it can reach external names and the api Service :
We see Pods of frontend can reach the api and external names.
Now we head to https://kubernetes.io/docs, search for NetworkPolicy , copy the example code and adjust it to:
Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in
the example above we do:
Let's have a look at example code which wouldn't work in our case:
In the yaml above we only specify one egress rule with two selectors. It can be translated into:
Apply the correct policy:
➜ k -n venus run tmp --restart=Never --rm -i --image=busybox -i -- wget -O- frontend:80
Connecting to frontend:80 (10.3.245.9:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- api:2222
Connecting to api:2222 (10.3.250.233:2222)
<html><body><h1>It works!</h1></body></html>
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- http://www.google.com
Connecting to http://www.google.com (216.58.205.227:80)
100% |********************************| 12955 0:00:00 ETA
<!doctype html>
...
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
100% |********************************| 45 0:00:00 ETA
...
vim 20_np1.yaml
# 20_np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend # label of the pods this policy should be applied on
  policyTypes:
  - Egress # we only want to control egress
  egress:
  - to: # 1st egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
  - ports: # 2nd egress rule
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
allow outgoing traffic if
(destination pod has label id:api) OR ((port is 53 UDP) OR (port is 53 TCP))
# this example does not work in our case
...
  egress:
  - to: # 1st AND ONLY egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
    ports: # STILL THE SAME RULE but just an additional selector
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
allow outgoing traffic if
(destination pod has label id:api) AND ((port is 53 UDP) OR (port is 53 TCP))
And try again, external is not working any longer:
Internal connection to api work as before:
Question 21 | Requests and Limits, ServiceAccount
Task weight: 4%
Team Neptune needs 3 Pods of image httpd:2.4-alpine, create a Deployment named neptune-10ab for this. The containers should be
named neptune-pod-10ab. Each container should have a memory request of 20Mi and a memory limit of 50Mi.
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace
neptune.
Answer:
Now make the required changes using vim:
Then create the yaml:
k -f 20_np1.yaml create
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- http://www.google.de
Connecting to http://www.google.de:2222 (216.58.207.67:80)
^C
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- -T 5 http://www.google.de:80
Connecting to http://www.google.com (172.217.203.104:80)
wget: download timed out
command terminated with exit code 1
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
100% |********************************| 45 0:00:00 ETA
k -n neptune create deployment -h # help
k -n neptune create deploy -h # deploy is short for deployment
# check the export on the very top of this document so we can use $do
k -n neptune create deploy neptune-10ab --image=httpd:2.4-alpine $do > 21.yaml
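After generating the file, the edited Deployment could look like this (a sketch; replicas, container name, resources and serviceAccountName are the manual edits):
# 21.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: neptune-10ab
  name: neptune-10ab
  namespace: neptune
spec:
  replicas: 3 # 3 Pods
  selector:
    matchLabels:
      app: neptune-10ab
  template:
    metadata:
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2 # use the existing ServiceAccount
      containers:
      - image: httpd:2.4-alpine
        name: neptune-pod-10ab # container name as requested
        resources:
          requests:
            memory: 20Mi
          limits:
            memory: 50Mi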
k -n sun get pod -l type=runner # only pods with label runner
k label -h # help
k -n sun label pod -l type=runner protected=true # run for label runner
k -n sun label pod -l type=worker protected=true # run for label worker
k -n sun label pod -l "type in (worker,runner)" protected=true
This is a preview of the full CKAD Simulator course content.
The full course contains 22 questions and scenarios which cover all the CKAD areas. The course also provides a browser terminal which is a
very close replica of the original one. This is great for getting used to it and comfortable before the real exam. After the test session (120 minutes), or if
you stop it early, you'll get access to all questions and their detailed solutions. You'll have 36 hours cluster access in total which means even
after the session, once you have the solutions, you can still play around.
The following preview will give you an idea of what the full course will provide. These preview questions are not part of the 22 in the full
course but in addition to it. But the preview questions are part of the same CKAD simulation environment which we setup for you, so with
access to the full course you can solve these too.
The answers provided here assume that you did run the initial terminal setup suggestions as provided in the tips section, but especially:
These questions can be solved in the test environment provided through the CKAD Simulator
Preview Question 1
In Namespace pluto there is a Deployment named project-23-api. It has been working okay for a while but Team Pluto needs it to be more
reliable. Implement a liveness-probe which checks the container to be reachable on port 80. Initially the probe should wait 10 , periodically 15
seconds.
The original Deployment yaml is available at /opt/course/p1/project-23-api.yaml. Save your changes at /opt/course/p1/project-23-
api-new.yaml and apply the changes.
Answer
First we get an overview:
To note: we see another Pod here called holy-api which is part of another section. This is often the case in the provided scenarios, so be
careful to only manipulate the resources you need to. Just like in the real world and in the exam.
Next we use nginx:alpine and curl to check if one Pod is accessible on port 80:
We could also use busybox and wget for this:
Now that we're sure the Deployment works we can continue with altering the provided yaml:
k -n sun get pod -l protected=true -o yaml | grep -A 8 metadata:
alias k=kubectl
export do="--dry-run=client -o yaml"
➜ k -n pluto get all -o wide
NAME READY STATUS ... IP ...
pod/holy-api 1/1 Running ... 10.12.0.26 ...
pod/project-23-api-784857f54c-dx6h6 1/1 Running ... 10.12.2.15 ...
pod/project-23-api-784857f54c-sj8df 1/1 Running ... 10.12.1.18 ...
pod/project-23-api-784857f54c-t4xmh 1/1 Running ... 10.12.0.23 ...
NAME READY UP-TO-DATE AVAILABLE ...
deployment.apps/project-23-api 3/3 3 3 ...
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.12.2.15
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
Connecting to 10.12.2.15 (10.12.2.15:80)
writing to stdout
100% |********************************| 612 0:00:00 ETA
written to stdout
<title>Welcome to nginx!</title>
Add the liveness-probe to the yaml:
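A sketch of the probe (port and timings from the task; tcpSocket is one way to check reachability on port 80):
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 15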
Then let's apply the changes:
Next we wait 10 seconds and confirm the Pods are still running:
We can also check the configured liveness-probe settings on a Pod or the Deployment :
Preview Question 2
Team Sun needs a new Deployment named sunny with 4 replicas of image nginx:1.17.3-alpine in Namespace sun. The Deployment and its
Pods should use the existing ServiceAccount sa-sun-deploy.
Expose the Deployment internally using a ClusterIP Service named sun-srv on port 9999. The nginx containers should run as default on port
80. The management of Team Sun would like to execute a command to check that all Pods are running on occasion. Write that command into
file /opt/course/p2/sunny_status_command.sh. The command should use kubectl.
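A sketch of how this could be solved (p2_sunny.yaml is just a chosen working filename; app=sunny is the default label kubectl create deployment adds):
k -n sun create deployment sunny --image=nginx:1.17.3-alpine --replicas=4 $do > p2_sunny.yaml
# edit p2_sunny.yaml: add serviceAccountName: sa-sun-deploy under spec.template.spec
k create -f p2_sunny.yaml
k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
echo 'kubectl -n sun get pod -l app=sunny' > /opt/course/p2/sunny_status_command.sh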
cp /opt/course/p1/project-23-api.yaml /opt/course/p1/project-23-api-new.yaml
vim /opt/course/p1/project-23-api-new.yaml
➜ k -n earth get ep
NAME ENDPOINTS AGE
earth-2x3-api-svc 10.0.0.10:80,10.0.1.5:80,10.0.2.4:80 116m
earth-2x3-web-svc 10.0.0.11:80,10.0.0.12:80,10.0.1.6:80 + 3 more... 116m
earth-3cc-web
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-api-svc.earth:4546
...
<html><body><h1>It works!</h1></body></html>
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-web-svc.earth:4545
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45 100 45 0 0 5000 0 --:--:-- --:--:-- --:--:-- 5000
<html><body><h1>It works!</h1></body></html>
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
If you don't see a command prompt, try pressing enter.
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
curl: (28) Connection timed out after 5000 milliseconds
pod "tmp" deleted
pod default/tmp terminated (Error)
➜ k -n earth get deploy earth-3cc-web
NAME READY UP-TO-DATE AVAILABLE AGE
earth-3cc-web 0/4 4 0 7m18s
k -n earth edit deploy earth-3cc-web
# k -n earth edit deploy earth-3cc-web
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
  generation: 3 # there have been rollouts
  name: earth-3cc-web
  namespace: earth
  ...
spec:
  ...
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: earth-3cc-web
    spec:
      containers:
      - image: nginx:1.16.1-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 82 # this port doesn't seem to be right, should be 80
          timeoutSeconds: 1
...
Running, but still not in ready state. Wait 10 seconds (initialDelaySeconds of readinessProbe) and check again:
Let's check the service again:
We did it! Finally we write the reason into the requested location:
CKAD Tips Kubernetes 1.26
In this section we'll provide some tips on how to handle the CKAD exam and browser terminal.
Knowledge
Study all topics as proposed in the curriculum till you feel comfortable with all