Container stats are based on cgroups. Cgroups are usually set up to limit resources.
In Kubernetes there are two resources the user is allowed to limit: CPU and memory.
Memory:
kubectl top pod <pod-id> will report the memory usage of the pod.
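The numbers kubectl top aggregates ultimately come from the pod's memory cgroup on the node; a minimal sketch of reading it directly (the pod UID in the path is hypothetical, and a canned byte count is used as a fallback so the snippet runs anywhere):

```shell
# Hypothetical cgroup v1 path for a pod -- substitute your pod's UID.
POD_CG=/sys/fs/cgroup/memory/kubepods/pode6f3f777-4379-43af-bb06-cde1a442b4d4

if [ -r "$POD_CG/memory.usage_in_bytes" ]; then
    usage=$(cat "$POD_CG/memory.usage_in_bytes")
else
    usage=200278016   # fallback sample value (~191 MiB) for illustration
fi

# Convert raw bytes to MiB, the unit systemd-cgtop displays.
echo "$((usage / 1024 / 1024)) MiB"
```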
Setup K8s + containerd
# cat /etc/containerd/config.toml
[plugins]
  [plugins.cri]
    [plugins.cri.containerd]
      [plugins.cri.containerd.runtimes.kata]
        runtime_type = "io.containerd.kata.v2"
        [plugins.cri.containerd.runtimes.kata.options]
          ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
      [plugins.cri.containerd.runtimes.kata-fc]
        runtime_type = "io.containerd.kata-fc.v2"
        [plugins.cri.containerd.runtimes.kata-fc.options]
          ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-fc.toml"
      [plugins.cri.containerd.runtimes.kata-qemu]
        runtime_type = "io.containerd.kata-qemu.v2"
        [plugins.cri.containerd.runtimes.kata-qemu.options]
          ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
      [plugins.cri.containerd.runtimes.kata-nemu]
        runtime_type = "io.containerd.kata-nemu.v2"
        [plugins.cri.containerd.runtimes.kata-nemu.options]
          ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-nemu.toml"
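For the pod spec's runtimeClassName: kata-qemu to resolve, the cluster also needs a RuntimeClass object whose handler matches the runtime name registered in the containerd config above. A minimal sketch (use apiVersion node.k8s.io/v1beta1 on older clusters):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu  # must match [plugins.cri.containerd.runtimes.kata-qemu]
```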
apiVersion: v1
kind: Pod
metadata:
  name: test-cpumanager-guaranteed-kata-qemu
spec:
  runtimeClassName: kata-qemu
  restartPolicy: Never
  containers:
  - name: busy
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 20; done"]
    resources:
      limits:
        cpu: 2
        memory: 2Gi # For kata to run
kubectl apply -f test-cpumanager-kata-qemu.yaml
pod/test-cpumanager-guaranteed-kata-qemu created
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
test-cpumanager-guaranteed-kata-qemu 4m 2Mi
systemd-cgtop /kubepods/pode6f3f777-4379-43af-bb06-cde1a442b4d4 -P -k --recursive=true -n 1
Control Group Tasks %CPU Memory Input/s Output/s
/kubepods/pode6f3f777-4379-43af-bb06-cde1a442b4d4 18 - 191.4M - -
Memory overhead for the sh workload: 191.4 - 2 = 189.4 MB
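The overhead arithmetic above as a one-liner: pod cgroup usage (per systemd-cgtop) minus the usage kubectl top attributed to the container itself, both in MB:

```shell
# Sandbox overhead = whole pod cgroup usage - container's own usage.
overhead=$(awk 'BEGIN { printf "%.1f", 191.4 - 2 }')
echo "Memory overhead: ${overhead} MB"
```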
Processes in the pod cgroup /sys/fs/cgroup/memory/kubepods/pode6f3f777-4379-43af-bb06-cde1a442b4d4:
└─kata-sandbox-ecc78b03e6035ee48c5992f41cf0c873763848a995c76e37f29a5c8f71a7b11c
├─25479 /opt/kata/bin/containerd-shim-kata-v2 -namespace k8s.io -address /run/containerd/containerd.sock -pu
└─25502 /opt/kata/bin/qemu-system-x86_64 -name sandbox-ecc78b03e6035ee48c5992f41cf0c873763848a995c76e37f29a5
Is the cgroup memory usage similar to what other tools report?
smem qemu memory usage:
sudo smem | grep qemu
PID User Command Swap USS PSS RSS
25502 root /opt/kata/bin/qemu-system-x 0 225484 225484 225484
~225 MB just for qemu, so the cgroup number does not match what smem reports 100%.
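Note that smem prints USS/PSS/RSS in kB, so the ~225 MB figure is a units conversion of the 225484 in the table above (sketch):

```shell
# smem reported 225484 kB RSS for qemu; convert to decimal MB and to MiB.
mb=$(awk 'BEGIN { printf "%.1f", 225484 / 1000 }')
mib=$(awk 'BEGIN { printf "%.1f", 225484 / 1024 }')
echo "${mb} MB (${mib} MiB)"
```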
Change the default memory of qemu to 256 MB and measure the cgroup usage again.
Control Group Tasks %CPU Memory Input/s Output/s
/kubepods/pod6598fb8f-7e70-488b-a381-8b0792cdbded 18 - 144.4M - -
191.4 MB vs 144.4 MB: lowering the guest's default memory reduced the pod cgroup usage by ~47 MB.