containerd[5344]:  level=info msg="StopContainer for "68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a" with timeout 30 (s)"
containerd[5344]:  level=info msg="Stop container "68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a" with signal terminated"
kata[21889]:  level=debug msg="sending request" ID=ac5a7b5ac24812c11253594e1bed082a668d2f3fed829b6d56fe33701e3b1c4b name=grpc.SignalProcessRequest req="container_id:\"68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a\" exec_id:\"68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a\" signal:15 " source=virtcontainers subsystem=kata_agent
containerd[5344]:  level=debug msg="sending request" ID=ac5a7b5ac24812c11253594e1bed082a668d2f3fed829b6d56fe33701e3b1c4b name=grpc.SignalProcessRequest req="container_id:\"68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a\" exec_id:\"68e4eff25a68f7c434049251f4026904141b26b387e297f5e54e2be2d7381b6a\" signal:15 " source=virtcontainers subsystem=kata_agent
egernst / log using 1.4
Created April 4, 2019 00:01
apply, delete, apply
Apr 03 17:07:07 eernstworkstation containerd[108474]: time="2019-04-03T17:07:07.574461449-07:00" level=info msg="StopPodSandbox for "ef332c04e14614922647a3f19a2e2bd0852636e1078949014a82c75ca07b343c" returns successfully"
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.575452 7359 qos_container_manager_linux.go:338] [ContainerManager]: Updated QoS cgroup configuration
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.849457 7359 config.go:100] Looking for [api file], have seen map[api:{} file:{}]
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.849518 7359 kubelet.go:1995] SyncLoop (housekeeping)
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.853499 7359 kubelet_pods.go:1073] Killing unwanted pod "dind"
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.853499 7359 kubelet_pods.go:1073] Killing unwanted pod "two-containers-kata"
Apr 03 17:07:07 eernstworkstation kubelet[7359]: I0403 17:07:07.853510 7359 kubelet_pods.go:1073] Kil

In the normal socket-path case (not dockershim and not CRI-O), we are handled by the CRI stats provider: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/stats/cri_stats_provider.go

The 'magic' happens in the listPodStats function.

Looping over each managed container, the kubelet calculates the container statistics at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/stats/cri_stats_provider.go#L198, then adds them to a running total of pod usage at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/stats/cri_stats_provider.go#L200, and eventually returns the results.
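
For illustration, here is a minimal, self-contained sketch of what that loop amounts to. The ContainerStats/PodStats types and buildPodStats function below are simplified stand-ins of my own, not the actual kubelet/CRI types:

package main

import "fmt"

// Simplified stand-ins for the CRI stats types; the real kubelet works with
// the statsapi/CRI structures, which carry many more fields.
type ContainerStats struct {
	Name             string
	UsageCoreNanoSec uint64 // CPU usage
	WorkingSetBytes  uint64 // memory working set
}

type PodStats struct {
	Containers       []ContainerStats
	UsageCoreNanoSec uint64
	WorkingSetBytes  uint64
}

// buildPodStats mimics the listPodStats flow: compute each container's stats,
// then fold them into a running pod-level total (the addPodCPUMemoryStats step).
func buildPodStats(containers []ContainerStats) PodStats {
	pod := PodStats{}
	for _, c := range containers {
		pod.Containers = append(pod.Containers, c)
		// Running total of pod usage: only container-level usage is summed here.
		pod.UsageCoreNanoSec += c.UsageCoreNanoSec
		pod.WorkingSetBytes += c.WorkingSetBytes
	}
	return pod
}

func main() {
	pod := buildPodStats([]ContainerStats{
		{Name: "app", UsageCoreNanoSec: 250000000, WorkingSetBytes: 64 << 20},
		{Name: "sidecar", UsageCoreNanoSec: 50000000, WorkingSetBytes: 16 << 20},
	})
	fmt.Printf("pod cpu=%d ns, mem=%d bytes\n", pod.UsageCoreNanoSec, pod.WorkingSetBytes)
}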

Potential issue

The initial potential issue I noticed is at the top of the addPodCPUMemoryStats function. The

egernst / notes.md
Last active October 28, 2019 21:45
containerd + kubernetes, and making clr-examples do what I want on Bionic

Quick guide for getting Kata + containerd (using the v2 shim) up and running on Bionic.

Installation of Kubernetes components on Bionic:

Use the following shell commands:

sudo -E apt install -y curl
sudo bash -c "cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo -E apt update
sudo -E apt install -y kubelet kubeadm kubectl
VERSION="1.2.7"
egernst / eviction.md
Last active August 6, 2019 16:55
kubernetes eviction study

Eviction handling

Kubelet manages eviction, which is carried out at pod-granularity on a node. The kubelet ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by Priority, and then by the consumption of the starved compute resource relative to the Pods’ scheduling requests.

Of note for Pod Overhead is the comparison of requested resources versus utilization of a particular resource. For each pod, the sum of requests is compared against the sum of container utilization.
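
A rough sketch of that three-key ranking for a single starved resource follows. The pod struct, field names, and rankForEviction below are simplified stand-ins of my own, not the kubelet's actual eviction code:

package main

import (
	"fmt"
	"sort"
)

// Simplified stand-in for a pod: sums of container requests and usage of the
// starved resource (e.g. memory, in bytes), plus the pod's priority.
type pod struct {
	name     string
	priority int32
	requests int64 // sum of container requests for the starved resource
	usage    int64 // sum of container usage of the starved resource
}

func exceedsRequests(p pod) bool { return p.usage > p.requests }

// rankForEviction orders pods from "evict first" to "evict last":
//  1. pods whose usage exceeds requests come before pods within requests,
//  2. lower priority comes before higher priority,
//  3. larger (usage - requests) comes before smaller.
func rankForEviction(pods []pod) {
	sort.SliceStable(pods, func(i, j int) bool {
		pi, pj := pods[i], pods[j]
		if exceedsRequests(pi) != exceedsRequests(pj) {
			return exceedsRequests(pi)
		}
		if pi.priority != pj.priority {
			return pi.priority < pj.priority
		}
		return pi.usage-pi.requests > pj.usage-pj.requests
	})
}

func main() {
	pods := []pod{
		{name: "within-requests", priority: 0, requests: 200, usage: 150},
		{name: "over-requests-low-prio", priority: 0, requests: 100, usage: 300},
		{name: "over-requests-high-prio", priority: 10, requests: 100, usage: 300},
	}
	rankForEviction(pods)
	for _, p := range pods {
		fmt.Println(p.name)
	}
}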

Eviction is handled by an Eviction Manager.

NewManager is passed a summaryProvider, which is part of the StatsProvider created for the kubelet. In our case, it should be the CRI stats provider (see ~/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary.go for the analyzer?).

egernst / hack-k8s.md
Last active November 3, 2019 19:55
hacking k8s

kubeadm-config.yaml:

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allowing for CPU pinning and isolation in case of guaranteed QoS class
featureGates:
  PodOverhead: true
cpuManagerPolicy: static
systemReserved:
egernst / setit.md
Last active August 14, 2019 23:27
set performance governor

As root:

for c in {0..87}; do echo performance > /sys/devices/system/cpu/cpu$c/cpufreq/scaling_governor; done

Verify:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
egernst / fc-jenkins-job.sh
Created September 11, 2019 19:55
snippet of firecracker ci job
#!/bin/bash
set -e
# Jenkins GitHub Pull Request Builder variables, exported for the test scripts
export ghprbPullId
export ghprbTargetBranch
# Kata CI configuration: run the Firecracker job against a real (non-dev) setup
export KATA_DEV_MODE="false"
export KATA_HYPERVISOR="firecracker"
export CI="true"
export CI_JOB="FIRECRACKER"
egernst / fail.md
Last active November 5, 2019 00:43
scraping stuff
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/busybox-two" | jq ' '
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "busybox-two",