Docker Desktop replaces the Docker Engine installed above. Don't run any Docker container before installing Docker Desktop (or simply don't install Docker Desktop; it is not mandatory for running Docker containers).
Download the .deb package, move it to a place where the root user has permissions, like /tmp, and then install the .deb from there.
sudo apt install ./docker-desktop-xxxxx.deb
add your user to docker group and refresh group settings
sudo usermod -aG docker $USER
newgrp docker
or reboot
7. get an image and extract its contents
docker run hello-world
docker container ls --all
cd /tmp
docker export abc123 > hello-world.tar
mkdir /tmp/hello
tar -xf hello-world.tar --directory /tmp/hello
UFW
Docker inserts its own rules into iptables (the Linux firewall) with a higher priority than the user settings. ufw only manages the "user firewall"; it does not manage the other rules in iptables. Even if we deny some ports with ufw, the Docker rules come first and will allow the traffic anyway.
The Docker documentation says that we need to use DOCKER-USER to apply rules. https://docs.docker.com/network/packet-filtering-firewalls/
List all rules with sudo iptables -L. iptables is organized into chains: a chain contains rules and can jump to other chains.
The FORWARD chain calls these other chains: DOCKER-USER, DOCKER and ufw-user-forward. ufw-user-forward calls ufw-user-input. DOCKER allows everything from the local networks 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. DOCKER-USER is initially empty.
The ufw command only adds rules to ufw-user-input.
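As a minimal illustration of that advice, a rule can also be inserted directly at the top of the DOCKER-USER chain (a hedged alternative to the file-editing workflow below; port 5000 reuses the example from these notes):
sudo iptables -I DOCKER-USER -p tcp --dport 5000 -j DROP
The steps below achieve the same result by editing an exported copy of the full ruleset.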
Export the current config sudo iptables-save > $HOME/iptables-save_yyyymmdd.txt.
Make a copy, $HOME/iptables-save_edited.txt.
Copy the lines that contain your user rules -A ufw-user-input xxxxxxx, paste them after line -A DOCKER-USER -j ufw-user-forward and change the beginning to -A DOCKER-USER xxxxxxx.
-A DOCKER-USER -j ufw-user-forward
# these are the copied lines
-A DOCKER-USER -p tcp -m tcp --dport 5000 -j DROP
-A DOCKER-USER -p udp -m udp --dport 5000 -j DROP
# these lines were created with `sudo ufw deny to any port 5000`
-A ufw-user-input -p tcp -m tcp --dport 5000 -j DROP
-A ufw-user-input -p udp -m udp --dport 5000 -j DROP
Apply the changes with sudo iptables-apply $HOME/iptables-save_edited.txt or sudo iptables-restore < $HOME/iptables-save_edited.txt.
# start minikube services
minikube start
minikube start --driver=docker # use this in case of GUEST_STATUS error; make sure your $USER is in the docker group
# add kubectl as an alias to make the next commands easier
alias kubectl="minikube kubectl --"
# find the URL where minikube serves HTTP
kubectl cluster-info
# get nodes
kubectl get nodes
# get namespaces (each group of pods). Pods in a namespace share some resources, but different namespaces are isolated
kubectl get namespaces
kubectl get ns
# list all items with -A
kubectl get pods -A
kubectl get services -A
kubectl get deployments -A
# -A will list items from all namespaces. For a specific namespace, let's say "development"
kubectl get deployments -n development
kubectl get pods -n development
# delete a pod
kubectl delete pod <NAME> -n <NAMESPACE>
# create/deploy pods together with new namespaces using a .yaml file (a sketch of namespace.yaml follows below)
kubectl apply -f namespace.yaml
# delete what has been created from the .yaml file
kubectl delete -f namespace.yaml
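For reference, a minimal sketch of what namespace.yaml could contain (the namespace name and the nginx pod are only illustrative, reusing the "development" namespace mentioned above):
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: development
spec:
  containers:
  - name: nginx
    image: nginx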
# get information about a pod (IP, port, namespace, start date, Events)
kubectl describe pod <NAME> -n <NAMESPACE>
# find the pod IP
kubectl get pods -n <NAMESPACE> -o wide
# enter the pod in vanilla shell
kubectl exec -it <NAME> -- /bin/sh
# get pod's logs
kubectl logs <NAME> -n <NAMESPACE>
# open external connections (create a LoadBalancer service using another .yaml file)
minikube tunnel
# delete minikube itself
minikube delete
Metrics Server
Monitors CPU/memory and other metrics. Data is kept in memory only, so no history is saved.
minikube addons enable metrics-server
If you are not using Minikube, check the metrics-server instructions on GitHub.
More info on Metrics Server from Kubernetes instructions below.
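Once the metrics-server is running, the collected metrics are read with kubectl top (the namespace below is a placeholder):
kubectl top nodes
kubectl top pods -n <NAMESPACE>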
kubectl run nginx --image nginx
kubectl get all
kubectl get pods
kubectl get pods -o wide
kubectl describe pod nginx
kubectl apply -f pod.yaml
kubectl delete pod mypod
kubectl get pods --watch
# filter the list based on labels
# labelTag=labelValue
kubectl get all --selector env=prod,bu=finance,tier=frontend
# create pod with label "tier=db"
kubectl run redis -l tier=db --image=redis:alpine
# create pod and expose port in the container, not as service
kubectl run custom-nginx --image=nginx --port=8080
# create pod and expose port as service
kubectl run httpd --image=httpd:alpine --port=80 --expose
# get different objects
kubectl get pods,svc
# pods can be in different namespaces
kubectl get pods --all-namespaces
kubectl get pods -A
When editing a Pod, changing the container details (image version, ...) is accepted, but adding a new container, changing secrets, etc. is not.
kubectl saves the unapplied changes in /tmp/somefile.yaml; we can use that file to --force the change.
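The interactive edit itself is usually done with kubectl edit (an assumption; the note above doesn't name the command):
kubectl edit pod <NAME> -n <NAMESPACE>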
kubectl replace --force -f /tmp/somefile.yaml
Nodes
kubectl get nodes
kubectl get nodes -o wide
kubectl get node node01 --show-labels
kubectl rollout status deployment/myapp-deployment
kubectl rollout history deployment/myapp-deployment
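A new rollout is triggered by changing the pod template, for example with kubectl set image (the container and image names are placeholders):
kubectl set image deployment/myapp-deployment <CONTAINER>=<IMAGE>:<TAG>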
Rollback
kubectl rollout undo deployment/myapp-deployment
Service
kubectl create -f service.yaml
kubectl get svc
kubectl get services
minikube service myapp-service --url
# get different objects
kubectl get pods,svc
# Expose existing deployment (and all pods created by it)
kubectl expose deployment simple-api-deployment --type=LoadBalancer --port=3000
kubectl expose deployment myapp-deploy --type=NodePort --port=8080
# one redis pod must exist
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
# change "selector" after the yaml is created
kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml
kubectl port-forward --address 0.0.0.0 -n kubernetes-dashboard service/kubernetes-dashboard 8080:80 &
Working with namespaces/contexts
kubectl get all --namespace=kube-system
kubectl get pods --all-namespaces
kubectl get namespace
kubectl get ns
kubectl create namespace dev-ns
kubectl create ns dev-ns
kubectl get all --namespace=mynamespace
kubectl get all -n=mynamespace
kubectl run redis --image=redis -n=mynamespace
# to connect to another namespace
# myelement.mynamespace.svc.cluster.local
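For example, combining names used elsewhere in these notes (purely illustrative):
# redis-service.dev-ns.svc.cluster.local:6379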
kubectl config current-context
kubectl config get-contexts
kubectl config current-context
# change context to avoid typing --namespace=mycontext all the time
kubectl config set-context --current --namespace=mycontext
Taints/Tolerations - a tainted node only accepts pods that tolerate its taint, but a pod with the toleration can still land on another node (with no taint).
Nodes have Taints, Pods have Tolerations.
Affinity - a pod with affinity only goes to nodes with the matching label, but a labeled node still accepts pods that have no affinity.
Nodes have Labels, Pods have Affinities.
# Taint and Tolerance
kubectl taint node myNode metaKey=metaValue:taintEffect
# NoSchedule | PreferNoSchedule | NoExecute
kubectl taint node node1 app=blue:NoSchedule
# remove the taint by adding - at the end
kubectl taint node myNode metaKey=metaValue:taintEffect-
# Add labels to nodes
kubectl label nodes myNode metaKey=metaValue
# in the pod creation, add a section "nodeSelector" to allow pods to run only on specific nodes
To make sure your configuration covers both directions, define Taints/Tolerations and Affinity on the nodes and in the pods (a sketch follows below).
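A hedged sketch of the pod side, assuming the app=blue taint from the example above and a node that was also labeled app=blue with kubectl label:
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
  nodeSelector:
    app: blue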
These are containers that are executed before the normal containers; they are expected to perform a few tasks and exit. A Pod definition may contain more than one initContainer.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
Clusters, Updates, Backups
Empty the node of all applications and mark it unschedulable. Existing pods will be deleted, and if they are part of a ReplicaSet, they are recreated on another node.
kubectl drain node01 --ignore-daemonsets
To make them schedulable again, use uncordon.
kubectl uncordon node01
To mark node as unschedulable but without deleting the pods, use cordon.
kubectl cordon node01
Kubernetes works with many different components doing different tasks. They can run different versions, but the versions need to stay close to each other so that calls between components don't break when one is newer.
The kube-apiserver is the reference X; controller-manager and kube-scheduler can be X-1, while kubelet and kube-proxy can be X-2. kubectl is special: it can be one version higher than the apiserver (X+1) or one lower (X-1). For example, with kube-apiserver at 1.29, controller-manager and kube-scheduler can be 1.28 or 1.29, kubelet and kube-proxy 1.27 to 1.29, and kubectl 1.28 to 1.30.
This restricts us from jumping versions when upgrading a cluster; we need to upgrade one minor version at a time.
When upgrading a cluster of nodes, upgrade the Master node first, then the Worker nodes. While the Master node is down, the applications keep running; only the admin functions are unavailable (e.g. health status, recovery).
Once the Master node is done, different strategies can be used to upgrade the worker nodes.
One node at a time: move the pods to a different node, upgrade the current node, then move all the pods from the second node to the first, upgrade the second node, and rearrange the pods as they were before the upgrade.
Or: create a new node with the new version, add it to the cluster, move the pods from the current node to the new node, and decommission the current node.
# update the keyring (read the docs)
# update the apt source list (read the docs)
apt update
apt-cache madison kubeadm # verify the versions
In the upgrade commands, set the version so it won't skip intermediate ones.
apt-mark unhold kubeadm
apt update
apt-cache madison kubeadm
apt upgrade kubeadm=1.29.0-1.1
apt-mark hold kubeadm
On the Master node, upgrade kubeadm. Check the plan to see what is available.
kubeadm upgrade plan
kubeadm upgrade apply v1.29.0
After that, running kubectl get nodes shows the kubelet version of each node, not the kubeadm version. Sometimes there is no kubelet on the Master node; if there is, upgrade the kubelet on the Master node first.
kubectl drain node01 --ignore-daemonsets # Move the pods to a different node.
apt upgrade kubeadm=1.29.0-1.1 # upgrade kubeadm
kubeadm upgrade node # upgrade the node (instead of `apply`)
# the course shows this command, but the documentation does not:
# kubeadm upgrade node config --kubelet-version v1.29.0
apt upgrade kubelet=1.29.0-1.1 # upgrade kubelet
apt upgrade kubectl=1.29.0-1.1 # upgrade kubectl
systemctl daemon-reload # restart the daemons
systemctl restart kubelet # restart the services
kubectl uncordon node01 # mark node as schedulable, it won't move the pods back
Repeat for other nodes.
Manual backups
Save Imperative commands kubectl create xxx.
Save Declarative files, mypod.yml.
Extract all definitions
kubectl get all --all-namespaces -o yaml > all-deployed-services.yaml
Use tools like Velero (formerly ARK)
Backup from ETCD
Grab data from Etcd Cluster. Check the ExecStart command from etcd.service, find option --data-dir=/var/lib/etcd.
Save etcd snapshot
ETCDCTL_API=3 etcdctl snapshot save /path/to/snapshot.db
ETCDCTL_API=3 etcdctl snapshot status /path/to/snapshot.db
Note: when the etcd cluster is protected by signed certificates, we need to pass the flags with the correct current certificate locations.
# get the current config
kubectl describe pod etcd-controlplane -n kube-system
ETCDCTL_API=3 etcdctl \
snapshot save /path/to/snapshot.db \
--endpoints=https://[127.0.0.1]:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
To restore, stop the services, then restore to a new location for the new etcd data, so that the old services don't start up in the new environment.
service kube-apiserver stop
ETCDCTL_API=3 etcdctl snapshot restore /path/to/snapshot.db --data-dir /var/lib/etcd-from-backup
Set the etcd.service to start from the new location too.
# edit etcd.service
ExecStart=/usr/local/bin/etcd ... --data-dir=/var/lib/etcd-from-backup
# or edit manifest
vi /etc/kubernetes/manifests/etcd.yaml
# volumes:
# - hostPath:
#     path: /var/lib/etcd-from-backup
After editing the yaml file, etcd-controlplane will restart automatically because it is a static pod, so kubectl commands may not work for a few minutes.
Follow the state with watch
kubectl get pod --all-namespaces --watch
If it hangs in Pending state, try to restart the services.
# restart services
systemctl daemon-reload
service etcd restart
service kube-apiserver start
Check the Liveness and Startup probes with kubectl describe pod etcd-controlplane -n kube-system.
If it still doesn't work, try to delete the etcd-controlplane pod (will be auto recreated).
kubectl delete pod etcd-controlplane -n kube-system
Networking in Linux
ip link
ip addr
ip a
ip a add 192.168.1.20/24 dev eth0
route
ip route
ip route add 192.168.1.0/24 via 192.168.2.1
ip route add default via 192.168.2.1
# ip_forward
cat /etc/sysctl.conf
cat /proc/sys/net/ipv4/ip_forward
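The file above only shows the current value; enabling forwarding is normally done with sysctl (runtime) or by editing /etc/sysctl.conf (persistent):
sudo sysctl -w net.ipv4.ip_forward=1 # enable at runtime
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p # persist across reboots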
Net namespaces
ip netns add red
ip netns
ip link
ip netns exec red ip link # runs `ip link` inside `red` namespace
ip -n red link
arp
arp -n
Connecting two namespaces
Create a virtual ethernet (veth) pair that connects the two namespaces, attach one end to each namespace, add an IP to each virtual port, and bring them up.
ip link add veth-red type veth peer name veth-blue # veth = virtual ethernet (virtual port)
ip link set veth-red netns red
ip link set veth-blue netns blue
ip -n red addr add 192.168.15.1 dev veth-red
ip -n blue addr add 192.168.15.2 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
ip netns exec red ping 192.168.15.2
ip netns exec red arp
ip netns exec blue route
ip -n red link del veth-red # deletes both veth as they are connected
Bridge
Create a bridge, connect all namespaces to the bridge instead of each other individually
# create bridge
ip link add v-net-0 type bridge
ip link set dev v-net-0 up
# create virtual ports connecting namespace to bridge
ip link add veth-red type veth peer name veth-red-br
ip link add veth-blue type veth peer name veth-blue-br
# connect red to bridge
ip link set veth-red netns red
ip link set veth-red-br master v-net-0
# connect blue to bridge
ip link set veth-blue netns blue
ip link set veth-blue-br master v-net-0
# set IPs
ip -n red addr add 192.168.15.1 dev veth-red
ip -n blue addr add 192.168.15.2 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
# make host see the bridge
ip addr add 192.168.15.5/24 dev v-net-0
# allow `blue` to send data to the outside
ip netns exec blue ip route add 192.168.1.0/24 via 192.168.15.5
ip netns exec blue ip route add default via 192.168.15.5
# make the host work as NAT, to send packets from v-net-0 to other networks
iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE
ip netns exec blue ping 8.8.8.8
# forward port 80 from outside into `blue`
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80
# view rules
sudo iptables -nvL -t nat # DNAT
The bridge command does many of these steps (create the bridge, create the virtual port, attach it to the namespace, attach the ends to each other, assign the IP, bring the interface up, enable NAT).
# create the namespace before `bridge`, get the ${ns_id}
bridge add ${ns_id} /var/run/netns/${ns_id}
CNI - Container Networking Interface
CNI is a set of standards for container networking, but Docker doesn't follow these rules; it uses CNM - the Container Network Model.
Kubernetes creates the Docker container on the none network, and then uses the CNI rules to create the connections.
Download binary from https://podman-desktop.io/ . The Flatpak won't have root access.
Extract the compressed file.
Change owner and permissions of chrome-sandbox.
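A hedged sketch of the usual Electron sandbox fix, assuming the archive was extracted to /opt/podman-desktop:
sudo chown root:root /opt/podman-desktop/chrome-sandbox
sudo chmod 4755 /opt/podman-desktop/chrome-sandbox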
Create a symlink for the socket. The original socket only exists when podman-desktop is running, but leave the symlink in place to be ready for use when needed.
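One possible sketch, assuming the rootless Podman socket path and that client tools expect the Docker socket location:
sudo ln -s /run/user/$(id -u)/podman/podman.sock /var/run/docker.sock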