Docker, Minikube, Kubernetes, Podman

Docker, Minikube, Kubernetes

Docker Ubuntu

  1. Install dependencies
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  2. Add keyrings (secure installation)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  3. Add to apt sources.list
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  4. Install docker engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

IMPORTANT

Docker Desktop replaces the Docker Engine installed above. Don't run any Docker container before installing "Docker Desktop" (or skip Docker Desktop entirely; it is not required to run Docker containers).

  5. Download the .deb package, move it to a place where the root user has permissions, like /tmp, and then install the .deb from there
sudo apt install ./docker-desktop-xxxxx.deb
  6. Add your user to the docker group and refresh the group settings
sudo usermod -aG docker $USER
newgrp docker

or reboot
  7. Get an image and extract its contents

docker run hello-world
docker container ls --all
cd /tmp
docker export abc123 > hello-world.tar # abc123 = container ID from the previous command
mkdir /tmp/hello
tar -xf hello-world.tar --directory /tmp/hello

UFW

Docker inserts its own rules into iptables (the Linux firewall) at a higher priority than user settings; ufw only manages the "user firewall" chains and does not touch other iptables rules. Even if we deny some ports with ufw, the Docker rules are evaluated first and will allow the traffic anyway.
The Docker documentation says we need to add rules to the DOCKER-USER chain. https://docs.docker.com/network/packet-filtering-firewalls/
List all rules with sudo iptables -L. iptables is organized as chains of rules; a chain contains rules and may call other chains.
Chain FORWARD calls these other chains DOCKER-USER, DOCKER and ufw-user-forward.
ufw-user-forward calls ufw-user-input.
DOCKER allows all from local networks, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
DOCKER-USER is initially empty.
Command ufw only adds rules to ufw-user-input.
Export the current config sudo iptables-save > $HOME/iptables-save_yyyymmdd.txt.
Make a copy, $HOME/iptables-save_edited.txt.
Copy the lines that contain your user rules -A ufw-user-input xxxxxxx, paste them after line -A DOCKER-USER -j ufw-user-forward and change the beginning to -A DOCKER-USER xxxxxxx.

-A DOCKER-USER -j ufw-user-forward
# these are the copied lines
-A DOCKER-USER -p tcp -m tcp --dport 5000 -j DROP
-A DOCKER-USER -p udp -m udp --dport 5000 -j DROP
# these lines were created with `sudo ufw deny to any port 5000`
-A ufw-user-input -p tcp -m tcp --dport 5000 -j DROP
-A ufw-user-input -p udp -m udp --dport 5000 -j DROP

Apply the changes with sudo iptables-apply $HOME/iptables-save_edited.txt or sudo iptables-restore < $HOME/iptables-save_edited.txt.

Commands

  • List all IPs for each container

    • docker ps -q | xargs -n 1 docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}} {{ .Name }}' | sed 's/ \// /'
  • Copy from container, copy to container

    sudo docker cp containerName:/path/to/file.py $HOME/file.py
    sudo docker cp $HOME/file.py containerName:/path/to/file.py

Minikube

https://minikube.sigs.k8s.io/

  1. Install minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

Cheat sheet

# start minikube services
minikube start
minikube start --driver=docker # use this in case of GUEST_STATUS error, make sure your $USER is in docker group

# add kubectl as alias to make the next commands easier
alias kubectl="minikube kubectl --"

# find the minikube serving HTTP URL
kubectl cluster-info

# get nodes
kubectl get nodes

# get namespaces (each group of pods). Pods in a namespace share some resources, but different namespaces are isolated
kubectl get namespaces
kubectl get ns

# list all items with -A
kubectl get pods -A
kubectl get services -A
kubectl get deployments -A

# -A will list items from all namespaces. For a specific namespace, let's say "development"
kubectl get deployments -n development
kubectl get pods -n development

# delete a pod
kubectl delete pod <NAME> -n <NAMESPACE>

# create/deploy pods together with new namespaces using a .yaml file (see example namespace.yaml below, after this cheat sheet)
kubectl apply -f namespace.yaml

# delete what has been created from the .yaml file
kubectl delete -f namespace.yaml

# get information about a pod (IP, port, namespace, start date, Events)
kubectl describe pod <NAME> -n <NAMESPACE>

# find the pod IP 
kubectl get pods -n <NAMESPACE> -o wide

# enter the pod in vanilla shell
kubectl exec -it <NAME> -- /bin/sh

# get pod's logs
kubectl logs <NAME> -n <NAMESPACE>

# open external connections (create a LoadBalancer service using another .yaml file)
minikube tunnel

# delete minikube itself
minikube delete
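
The cheat sheet above applies a namespace.yaml; a minimal sketch of such a file (the namespace name and the pod are illustrative, not from the original notes):

# namespace.yaml (illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: development
spec:
  containers:
  - name: nginx
    image: nginx

Apply it with kubectl apply -f namespace.yaml and check with kubectl get pods -n development.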

Metrics Server

Monitor CPU/Memory and other metrics. Data is kept in memory only, so no history is saved.

minikube addons enable metrics-server

If you are not using Minikube, check the metrics-server instructions on GitHub.

More info on Metrics Server from Kubernetes instructions below.

Kubernetes

Useful commands

Tips

Create a yaml file from kubectl run command

# POD
kubectl run redis --image=redis --dry-run=client -o yaml > redis-definition.yaml
# Deployment
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml

Pods

kubectl run nginx --image nginx
kubectl get all

kubectl get pods
kubectl get pods -o wide
kubectl describe pod nginx
kubectl apply -f pod.yaml
kubectl delete pod mypod
kubectl get pods --watch

# filter list based on labels
# labelTag=labelValue
kubectl get all --selector env=prod,bu=finance,tier=frontend

# create pod with label "tier=db"
kubectl run redis -l tier=db --image=redis:alpine

# create pod and expose port in the container, not as service
kubectl run custom-nginx --image=nginx --port=8080
# create pod and expose port as service
kubectl run httpd --image=httpd:alpine --port=80 --expose

# get different objects
kubectl get pods,svc

# pods can be in different namespaces
kubectl get pods --all-namespaces
kubectl get pods -A

When editing a Pod, changing container details (image version, ...) is accepted, but adding a new container, changing secrets, etc. is not.
kubectl saves the rejected changes in a file under /tmp (e.g. /tmp/somefile.yaml); we can use it to --force the change.

kubectl replace --force -f /tmp/somefile.yaml

Nodes

kubectl get nodes
kubectl get nodes -o wide
kubectl get node node01 --show-labels

ReplicationController (old) / ReplicaSet

kubectl get replicationcontroller
kubectl create -f replication-controller.yaml

kubectl create -f replica-set.yaml
kubectl get replicaset
kubectl get rs
kubectl replace -f replica-set.yaml
kubectl scale --replicas=6 -f replica-set.yaml
kubectl scale --replicas=6 replicaset myapp-replicaset
kubectl delete replicaset myreplicaset
kubectl describe replicaset myreplicaset
kubectl edit replicaset myreplicaset
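
A minimal sketch of the replica-set.yaml used above (names and image are illustrative); a replication-controller.yaml looks similar but with apiVersion: v1, kind: ReplicationController and a plain selector map instead of matchLabels:

# replica-set.yaml (illustrative)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx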

Deployment, Rollouts

  • Create
kubectl create -f deployment-definition.yml # example manifest below, after this list
kubectl create deployment mydeployment --image=httpd:2.4-alpine --replicas=3
# deprecated: kubectl create -f deployment.yaml --record
  • Get
kubectl get deploy
kubectl get deployment
kubectl get deployments
  • Update
kubectl apply -f deployment-definition.yml
kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
# deprecated: kubectl set image deployment/mydeployment nginx=nginx:1.9.1 --record
kubectl edit deployment mydeployment
# deprecated: kubectl edit deployment mydeployment --record
  • Status
kubectl rollout status deployment/myapp-deployment
kubectl rollout history deployment/myapp-deployment
  • Rollback
kubectl rollout undo deployment/myapp-deployment
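
A minimal sketch of the deployment-definition.yml referenced in the Create/Update items above (names and image are illustrative; the container is named nginx so the kubectl set image example matches):

# deployment-definition.yml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1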

Service

kubectl create -f service.yaml
kubectl get svc
kubectl get services
minikube service myapp-service --url

# get different objects
kubectl get pods,svc

# Expose existing deployment (and all pods created by it)
kubectl expose deployment simple-api-deployment --type=LoadBalancer --port=3000
kubectl expose deployment myapp-deploy --type=NodePort --port=8080
# one redis pod must exist
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
# change "selector" after the yaml is created
kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml

kubectl port-forward --address 0.0.0.0 -n kubernetes-dashboard service/kubernetes-dashboard 8080:80 &
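
A minimal sketch of the service.yaml used above (name, labels and ports are illustrative); with type: LoadBalancer instead of NodePort it is the kind of service exposed by minikube tunnel:

# service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080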

Working with namespaces/contexts

kubectl get all --namespace=kube-system
kubectl get pods --all-namespaces

kubectl get namespace
kubectl get ns
kubectl create namespace dev-ns
kubectl create ns dev-ns

kubectl get all --namespace=mynamespace
kubectl get all -n=mynamespace


kubectl run redis --image=redis -n=mynamespace

# to connect to other namespace
# myelement.mynamespace.svc.cluster.local

kubectl config current-context
kubectl config get-contexts
kubectl config current-context
# change context to avoid typing --namespace=mycontext all the time
kubectl config set-context --current --namespace=mycontext

DaemonSets

# DaemonSets
kubectl get daemonsets --all-namespaces
kubectl describe daemonset kube-flannel-ds --namespace=kube-flannel

Events

# Events
kubectl get events -o wide
kubectl logs my-custom-scheduler --namespace=kube-system

Taints, Tolerations and Affinity

Taints/Tolerations - a tainted node only accepts pods that tolerate the taint, but a tolerating pod can still end up on some other node (with no taint)
Nodes have Taints, Pods have Tolerations

Affinity - a pod with an affinity rule only goes to nodes with the matching label, but pods with no rule can still land on those nodes
Nodes have Labels, Pods have Affinities

# Taint and Toleration
kubectl taint node myNode metaKey=metaValue:taintEffect
# NoSchedule | PreferNoSchedule | NoExecute
kubectl taint node node1 app=blue:NoSchedule
# remove the taint by adding - at the end
kubectl taint node myNode metaKey=metaValue:taintEffect-

# Add labels to nodes
kubectl label nodes myNode metaKey=metaValue
# in the pod creation, add a section "nodeSelector" to allow pods to run only in specific nodes

To fully control placement, combine Taints/Tolerations and Node Affinity on both nodes and pods.

---
apiVersion: v1
kind: Pod
metadata:
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
  tolerations:
  - key: spray
    value: mortein
    effect: NoSchedule
    operator: Equal
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue

                
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists

Static PODs

Static Pods are pods created from manifest files stored on the host, not via kubectl.

To set the path where Static Pod manifests are stored, the kubelet.service start command can set --config, pointing to a kubelet config yaml file:

ExecStart=/usr/local/bin/kubelet \\
    --config=kubeconfig.yaml \\
    --kubeconfig=/var/lib/kubelet/kubeconfig

kubeconfig.yaml

staticPodPath: /etc/kubernetes/manifests

Or pass the path directly via the --pod-manifest-path option in the kubelet.service start command:

ExecStart=/usr/local/bin/kubelet \\
    --pod-manifest-path=/etc/kubernetes/manifests

Creating a static pod: use dry-run to export the yaml to a file in the static path.

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml

Scheduler

kubectl get pods kube-scheduler-controlplane --namespace=kube-system
kubectl get pods kube-scheduler-controlplane -n kube-system
kubectl get pods kube-scheduler-controlplane --namespace=kube-system -o yaml
kubectl get pods kube-scheduler-controlplane -n kube-system -o yaml

kubectl get serviceaccount -n kube-system

kubectl get clusterrolebinding 

# ConfigMap
kubectl create configmap my-scheduler-config --from-file=/root/my-scheduler-config.yaml -n kube-system
kubectl get configmap my-scheduler-config -n kube-system

Create a custom scheduler as a pod, pointing to a custom config.yaml
my-custom-scheduler.yaml

---
apiVersion: v1
kind: Pod
metadata:
  name: my-custom-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --config=/etc/kubernetes/my-scheduler-config.yaml
    image: k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    name: kube-scheduler

Create the custom config.yaml
my-scheduler-config.yaml

---
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: my-scheduler
leaderElection:
    leaderElect: true
    resourceNamespace: kube-system
    resourceName: lock-object-my-scheduler

Create a pod that uses the custom scheduler.
pod-definition.yaml

---
apiVersion: v1
kind: Pod
metadata:
    name: nginx
spec:
    containers:
    - image: nginx
      name: nginx
    schedulerName: my-custom-scheduler

The ConfigMap embeds the config.yaml we created as a yaml data entry.
my-scheduler-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: my-scheduler-config
  namespace: kube-system
data:
  my-scheduler-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1
    kind: KubeSchedulerConfiguration
    profiles:
      - schedulerName: my-scheduler
    leaderElection:
      leaderElect: false

Schedulers have plugins and extension points.

Plugins

Scheduling Queue    Filtering            Scoring             Binding
PrioritySort        NodeResourcesFit     NodeResourcesFit    DefaultBinder
                    NodeName             ImageLocality
                    NodeUnschedulable

Extension points

Scheduling Queue    Filtering            Scoring             Binding
queueSort           preFilter            preScore            permit
                    filter               score               preBind
                    postFilter           reserve             bind
                                                             postBind

Creating custom schedulers.

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: my-scheduler-2
  plugins:
    score:
      disabled:
       - name: TaintToleration
      enabled:
       - name: MyCustomPluginA
       - name: MyCustomPluginB
- schedulerName: my-scheduler-3
  plugins:
    preScore:
      disabled:
       - name: '*'
    score:
      disabled:
       - name: '*'
- schedulerName: my-scheduler-4

Monitoring

Metrics Server

Monitor CPU/Memory and other metrics. Data is kept in memory only, so no history is saved.
Check https://github.com/kubernetes-sigs/metrics-server/ for more details.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top node
kubectl top pod

Command line arguments

ENTRYPOINT (Dockerfile) maps to command (Kubernetes)
CMD (Dockerfile) maps to args (Kubernetes)

  • Dockerfile
ENTRYPOINT ["python" , "app.py"]
CMD ["--color", "red"]
  • Kubernetes
spec:
  containers:
  - name: MyName
    image: repo/app
    command: ["python" , "app.py"]
    args: ["--color", "red"]
kubectl run myapp --image=repo/app --command -- <cmd> <args>
kubectl run myapp --image=repo/app -- <args>

Environment variables

kubectl create configmap MyConfig --from-literal=VAR_NAME=var_value --from-literal=VAR2=value2
kubectl create configmap MyConfig --from-file=myapp.properties

"MyConfig.yml"

apiVersion: v1
kind: ConfigMap
metadata:
  name: MyConfig
data:
  VAR_NAME: var_value
  VAR2: value2
kubectl get configmaps
kubectl describe configmaps

"pod.yml"

spec:
  containers:
    - name: myapp
      image: repo/app
      envFrom:
        - configMapRef:
            name: MyConfig # created above
spec:
  containers:
    - name: myapp
      image: repo/app
      env:
        - name: VAR_NAME
          valueFrom:
            configMapKeyRef:
              name: MyConfig
              key: VAR_NAME
spec:
  containers:
    - name: myapp
      image: repo/app
      volumeMounts:
        - name: MyConfigVolume
          mountPath: /etc/myconfig # example mount path
  volumes:
    - name: MyConfigVolume
      configMap:
        name: MyConfig

Secrets

kubectl create secret generic MySecrets --from-literal=DB_PASS=strong --from-literal=DB_USER=someone
kubectl create secret generic MySecrets --from-file=secrets.properties

In "MySecrets.yml", store values as base64 (not encrypted, just encoded).

echo -n 'strong' | base64
echo -n 'c3Ryb25n' | base64 --decode
apiVersion: v1
kind: Secret
metadata:
  name: MySecrets
data:
  DB_PASS: c3Ryb25n # output of: echo -n 'strong' | base64
  DB_USER: c29tZW9uZQ==
kubectl get secrets
kubectl get secret MySecrets -o yaml
kubectl describe secrets

"pod.yml"

spec:
  containers:
    - name: myapp
      image: repo/app
      envFrom:
        - secretRef:
            name: MySecrets
spec:
  containers:
    - name: myapp
      image: repo/app
      env:
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: MySecrets
              key: DB_PASS
volumes:
- name: MySecretsVolume
  secret:
    secretName: MySecrets

To store secret data encrypted, we need to work with etcd (https://github.com/etcd-io/etcd/tree/main/etcdctl) and set up kubeadm to use certificates.

apt install etcd-client
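
For encryption at rest specifically, Kubernetes uses an EncryptionConfiguration file passed to kube-apiserver via --encryption-provider-config. A minimal sketch based on the Kubernetes encryption-at-rest docs; the file path and key value are placeholders:

# e.g. /etc/kubernetes/enc/enc.yaml (placeholder path)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key> # placeholder, e.g. from: head -c 32 /dev/urandom | base64
      - identity: {}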

initContainers

These are containers that run before the normal containers; they are expected to perform a few tasks and exit. A Pod definition may contain more than one initContainer.

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-myservice
        image: busybox:1.28
        command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']

Clusters, Updates, Backups

Empty the node of all applications and mark it unschedulable. Existing pods are evicted, and if they are part of a ReplicaSet they are recreated on another node.

kubectl drain node01 --ignore-daemonsets

To make them schedulable again, use uncordon.

kubectl uncordon node01

To mark node as unschedulable but without deleting the pods, use cordon.

kubectl cordon node01

Kubernetes works with many different components doing different tasks. They can run on different versions, but the versions need to be close to each other so that commands don't break against newer components.
The kube-apiserver version is the reference X; controller-manager and kube-scheduler can be X-1, while kubelet and kube-proxy can be X-2. kubectl is special: it can be one version higher than the apiserver (X+1) or one lower (X-1).
This restricts us from jumping versions when upgrading a cluster; we need to upgrade one minor version at a time.

When upgrading a cluster, upgrade the Master node first, then the Worker nodes. While the Master node is down, the applications keep running; only the admin functions are down (i.e. health status, recovery).
Once the Master node is done, different strategies can be used to upgrade the worker nodes.

  1. One node at a time. Move the pods to a different node, upgrade the current node, move all the pods from the second node to the first, upgrade the second node, rearrange the pods as they were before the upgrade.
  2. Create a new node with the new version, add to the cluster, move the pods from the current node to the new node, decommission the current node.

Before the upgrade, update the apt sources list to point to the newer minor version of the K8s repository: https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/. Do this on all nodes (ssh node01, ssh node02, ...).

# update keyring (read doc)
# update apt sourcelist (read doc)
apt update
apt-cache madison kubeadm # verify the versions

In the upgrade commands, set the version so it won't skip intermediate ones.

apt-mark unhold kubeadm
apt update
apt-cache madison kubeadm
apt upgrade kubeadm=1.29.0-1.1
apt-mark hold kubeadm

In the master node, upgrade kubeadm. Check the plan to see what is available.

kubeadm upgrade plan
kubeadm upgrade apply v1.29.0

After that, if we run kubectl get nodes it shows the kubelet version of each node, not the kubeadm version. Sometimes there is no kubelet on the Master node. Upgrade the kubelet on the Master node first.

apt upgrade kubelet=1.29.0-1.1
apt upgrade kubectl=1.29.0-1.1
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon master

Start the upgrade in a worker node.

kubectl drain node01 --ignore-daemonsets # Move the pods to a different node.
apt upgrade kubeadm=1.29.0-1.1 # upgrade kubeadm
kubeadm upgrade node # upgrade the node (instead of `apply`)
# course shows this command, but not the documentation
# kubeadm upgrade node config --kubelet-version v1.29.0
apt upgrade kubelet=1.29.0-1.1 # upgrade kubelet
apt upgrade kubectl=1.29.0-1.1 # upgrade kubectl
systemctl daemon-reload # restart the daemons
systemctl restart kubelet # restart the services
kubectl uncordon node01 # mark node as schedulable, it won't move the pods back

Repeat for other nodes.

  • Manual backups: save the imperative commands (kubectl create xxx), save the declarative files (mypod.yml), or extract all definitions:
kubectl get all --all-namespaces -o yaml > all-deployed-services.yaml

Use tools like Velero (formerly ARK)
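
A minimal sketch of a Velero backup/restore, assuming the velero CLI is already installed and configured against the cluster (the backup name is illustrative):

velero backup create full-backup                 # back up cluster resources
velero backup get                                # list backups
velero restore create --from-backup full-backup  # restore from that backup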

  • Backup from etcd: grab the data from the etcd cluster. Check the ExecStart command in etcd.service and find the option --data-dir=/var/lib/etcd.
    Save an etcd snapshot:
ETCDCTL_API=3 etcdctl snapshot save /path/to/snapshot.db
ETCDCTL_API=3 etcdctl snapshot status /path/to/snapshot.db

Note: when the etcd cluster is protected by signed certificates, we need to pass the certificate flags with the correct current locations.

# get the current config
kubectl describe pod etcd-controlplane -n kube-system
ETCDCTL_API=3 etcdctl \
    snapshot save /path/to/snapshot.db \
    --endpoints=https://[127.0.0.1]:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key

To restore, stop the services, then restore to a new data location for the new etcd cluster, to prevent the old services from starting in the new environment.

service kube-apiserver stop
ETCDCTL_API=3 etcdctl snapshot restore /path/to/snapshot.db --data-dir /var/lib/etcd-from-backup

Set the etcd.service to start from the new location too.

# edit etcd.service
ExecStart=/usr/local/bin/etcd ... --data-dir=/var/lib/etcd-from-backup

# or edit manifest
vi /etc/kubernetes/manifests/etcd.yaml
# volumes:
#   - hostPath
#       path: /var/lib/etcd-from-backup

After editing the yaml file, etcd-controlplane restarts automatically because it is a static pod, so kubectl commands won't work for a few minutes.
Follow the state with --watch:

kubectl get pod --all-namespaces --watch

If it hangs in Pending state, try to restart the services.

# restart services
systemctl daemon-reload
service etcd restart
service kube-apiserver start

Check the Liveness and Startup health check kubectl describe pod etcd-controlplane -n kube-system.
If it still doesn't work, try to delete the etcd-controlplane pod (will be auto recreated).

kubectl delete pod etcd-controlplane -n kube-system

Networking in Linux

ip link
ip addr
ip a
ip a add 192.168.1.20/24 dev eth0
route
ip route
ip route add 192.168.1.0/24 via 192.168.2.1
ip route add default via 192.168.2.1
# ip_forward
cat /etc/sysctl.conf
cat /proc/sys/net/ipv4/ip_forward

Net namespaces

ip netns add red
ip netns
ip link
ip netns exec red ip link # runs `ip link` inside `red` namespace
ip -n red link
arp
arp -n

Connecting two namespaces

Create a virtual ethernet (veth) pair connecting the namespaces, attach one end to each namespace, add an IP to each virtual port, and bring them up.

ip link add veth-red type veth peer name veth-blue # veth - virtual ethernet (virtual port)
ip link set veth-red netns red
ip link set veth-blue netns blue
ip -n red addr add 192.168.15.1 dev veth-red
ip -n blue addr add 192.168.15.2 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
ip netns exec red ping 192.168.15.2
ip netns exec red arp
ip netns exec blue route
ip -n red link del veth-red # deletes both veth as they are connected

Bridge

Create a bridge, connect all namespaces to the bridge instead of each other individually

# create bridge
ip link add v-net-0 type bridge
ip link set dev v-net-0 up
# create virtual ports connecting namespace to bridge
ip link add veth-red type veth peer name veth-red-br
ip link add veth-blue type veth peer name veth-blue-br
# connect red to bridge
ip link set veth-red netns red
ip link set veth-red-br master v-net-0
# connect blue to bridge
ip link set veth-blue netns blue
ip link set veth-blue-br master v-net-0
# set IPs
ip -n red addr add 192.168.15.1 dev veth-red
ip -n blue addr add 192.168.15.2 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
# make host see the bridge
ip addr add 192.168.15.5/24 dev v-net-0
# allow `blue` send data to outside
ip netns exec blue ip route add 192.168.1.0/24 via 192.168.15.5
ip netns exec blue ip route add default via 192.168.15.5
# make host work as NAT, to send packages from v-net-0 to other networks
iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE
ip netns exec blue ping 8.8.8.8
# forward port 80 from outside into `blue`
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80
# view rules
sudo iptables -nvL -t nat # DNAT

The bridge command does many of these steps (create the bridge, create the virtual port, attach it to the namespace and to the bridge, assign an IP, bring the interface up, enable NAT)

# create the namespace before `bridge`, get the ${ns_id}
bridge add ${ns_id} /var/run/netns/${ns_id}

CNI - Container Networking Interface

CNI is a set of standards for container networking, but Docker doesn't follow these rules; it uses CNM (Container Network Model).
Kubernetes creates the Docker container on the none network and then uses CNI plugins to create the connections.
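
A hedged sketch of what a CNI plugin config under /etc/cni/net.d/ can look like (bridge plugin with host-local IPAM; the network name, bridge name and subnet are illustrative):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}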

Podman

sudo apt install podman

Enable services for your user (rootless).

systemctl --user enable --now podman.socket
systemctl --user enable --now podman-restart.service

Podman-desktop

Download binary from https://podman-desktop.io/ . The Flatpak won't have root access.
Extract the compressed file.
Change owner and permissions of chrome-sandbox.

sudo chown root:root /path/podman-desktop.d/chrome-sandbox
sudo chmod 4755 /path/podman-desktop.d/chrome-sandbox

Create a symlink for the socket. The original socket only exists when podman-desktop is running, but leave the symlink in place to be ready for use when needed.

sudo mkdir -p /run/user/0/podman
sudo ln -s /run/podman/podman.sock /run/user/0/podman/podman.sock

Now podman-desktop can run rootless or rootful

/path/podman-desktop.d/podman-desktop --no-sandbox

Or rootful

xhost +local:root
sudo -E ./podman-desktop --no-sandbox
xhost -local:root

Config

Rootful config /etc/containers/storage.conf.

[storage]
driver = "overlay"
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"

Rootless config ~/.config/containers/storage.conf.
Check userId with id -u.

[storage]
driver = "overlay"
graphroot = "~/.local/share/containers/storage"
runroot = "/run/user/1000/containers"

Portainer

Enable podman.socket for your user.

systemctl --user enable --now podman.socket
systemctl --user enable --now podman-restart.service

Create Portainer volume (normally in $HOME/.local/share/containers/storage/volumes, check graphroot above)

podman volume create portainer_data

Download and install the Portainer Server container:

podman run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always --privileged -v /run/user/$(id -u)/podman/podman.sock:/var/run/docker.sock -v portainer_data:/data docker.io/portainer/portainer-ce:lts

Create Portainer Agent

podman run -d \
-p 9001:9001 \
--name portainer_agent \
--restart=always \
--privileged \
-v /run/user/$(id -u)/podman/podman.sock:/var/run/docker.sock \
-v $HOME/.local/share/containers/storage/volumes:/var/lib/docker/volumes \
portainer/agent:alpine-sts

Add -v /:/host \ to get some host management features (check documentation) - (only in rootful?).

Open Portainer web https://localhost:9443.
Add a new environment, selecting Agent, and point it to the agent created above at localhost:9001.

Sometimes the podman.sock gets disabled. Podman-Desktop starts the socket automatically (default config). Check systemctl.
