#Helm #Kubernetes #cheatsheet, happy helming!

Kubernetes cheatsheet

Getting Started

  • Fault tolerance
  • Rollback
  • Auto-healing
  • Auto-scaling
  • Load-balancing
  • Isolation (sandbox)

Sample yaml

apiVersion: <>
kind: <>
metadata:
  name: <>
  labels:
    ...
  annotations:
    ...
spec:
  containers:
    ...
  initContainers:
    ...
  priorityClassName: <>

Workflow

Credit: https://www.reddit.com/user/__brennerm/

  • (kube-scheduler, controller-manager, etcd) --443--> API Server

  • API Server --10250--> kubelet

    • non-verified certificate
    • MITM
    • Solution:
      • set kubelet-certificate-authority
      • ssh tunneling
  • API server --> (nodes, pods, services)

    • Plain HTTP (unsafe)

Physical components

Master

  • API Server (443)
  • kube-scheduler
  • controller-manager
    • cloud-controller-manager
    • kube-controller-manager
  • etcd

Other components talk to the API server; they do not communicate with each other directly

Node

  • Kubelet

  • Container Engine

    • CRI
      • The protocol used by the kubelet to talk to the container engine
  • Kube-proxy

Everything is an object - persistent entities

  • maintained in etcd, identified using

    • names: client-given
    • UIDs: system-generated
  • Both need to be unique

  • three management methods

    • Imperative commands (kubectl)
    • Imperative object configuration (kubectl + yaml)
      • repeatable
      • observable
      • auditable
    • Declarative object configuration (yaml + config files)
      • Live object configuration
      • Current object configuration file
      • Last-applied object configuration file
      Node Capacity
      ---------------------------
      | kube-reserved           |
      |-------------------------|
      | system-reserved         |
      |-------------------------|
      | eviction-threshold      |
      |-------------------------|
      |                         |
      | allocatable             |
      | (available for pods)    |
      |                         |
      ---------------------------

Namespaces

  • Three pre-defined

    • default
    • kube-system
    • kube-public: auto-readable by all users
  • Objects without namespaces

    • Nodes
    • PersistentVolumes
    • Namespaces

Labels

  • key / value
  • loose coupling via selectors
  • need not be unique

ClusterIP

  • Independent of lifespan of any backend pod
  • Service object has a static port assigned to it

Controller manager

  • ReplicaSet, deployment, daemonset, statefulSet
  • Actual state <-> desired state
  • reconciliation loop

Kube-scheduler

  • nodeSelector
  • Affinity & Anti-Affinity
    • Node
      • Steer pod to node
    • Pod
      • Steer pod towards or away from pods
  • Taints & tolerations (anti-affinity between node and pod!)
    • Based on predefined configuration (env=dev:NoSchedule)
      ...
      tolerations:
      - key: "env"
        operator: "Equal"
        value: "dev"
        effect: "NoSchedule"
      ...
    • Based on node condition (alpha in v1.8)
      • taints added by node controller
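
A toleration only allows a pod onto a tainted node; affinity actively steers it. A minimal nodeAffinity sketch (the pod name and the disktype node label are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity       # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx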

Pod

kubectl run name --image=<image>

What's available inside the container?

  • File system
    • Image
    • Associated Volumes
      • ordinary
      • persistent
    • Container
      • Hostname
    • Pod
      • Pod name
      • User-defined envs
    • Services
      • List of all services

Access with:

  • Symlink (important):

    • /etc/podinfo/labels
    • /etc/podinfo/annotations
  • Or:

volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels
        - path: "annotations"
          fieldRef:
            fieldPath: metadata.annotations

Status

  • Pending
  • Running
  • Succeeded
  • Failed
  • Unknown

Probe

  • Liveness
    • Failed? Restart policy applied
  • Readiness
    • Failed? Removed from service
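
A minimal probe sketch (endpoint paths, port and timings are illustrative):

containers:
- name: app
  image: nginx
  livenessProbe:                 # failure --> container restarted per restartPolicy
    httpGet:
      path: /healthz             # hypothetical endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:                # failure --> pod removed from the service endpoints
    httpGet:
      path: /ready               # hypothetical endpoint
      port: 80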

Pod priorities

  • available since 1.8
  • PriorityClass object
  • Affect scheduling order
    • High priority pods could jump the queue
  • Preemption
    • Low-priority pods can be preempted to make way for a higher-priority one (if no node is available for the high-priority pod)
    • These preempted pods would have a graceful termination period
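
A minimal PriorityClass sketch (name, value and description are illustrative; the scheduling.k8s.io API version depends on the cluster version):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # illustrative name
value: 1000000                   # larger value = higher priority
globalDefault: false
description: "for important workloads only"

# referenced from a pod spec:
spec:
  priorityClassName: high-priority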

Multi-Container Pods

  • Share access to memory space
  • Connect to each other using localhost
  • Share access to the same volume
  • entire pod is hosted on the same node
  • scheduled all together or not at all
  • no auto healing or scaling

Init containers

  • run before app containers
  • always run to completion
  • run serially
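
A minimal init-container sketch (the wait-for-db check and the db service name are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init            # illustrative name
spec:
  initContainers:                # run serially, each to completion, before app containers start
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nslookup db; do sleep 2; done']
  containers:
  - name: app
    image: nginx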

Lifecycle hooks

  • PostStart
  • PreStop (blocking)

Handlers:

  • Exec
  • HTTP
...
spec:
  containers:
  - name: <>
    lifecycle:
      postStart:
        exec:
          command: <>
      preStop:
        httpGet:
          path: <>
          port: <>
...

Hooks may be invoked multiple times (delivery is at-least-once), so handlers should be idempotent

Quality of Service (QoS)

When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:

  • Guaranteed (all containers have limits == requests)

If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own cpu limit, but does not specify a cpu request, Kubernetes automatically assigns a cpu request that matches the limit.

  • Burstable (at least one container has a request or limit, but the pod does not meet the Guaranteed criteria)
  • BestEffort (no limits or requests)
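
A sketch of a Guaranteed pod (name, image and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed           # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:                  # requests == limits on every container --> Guaranteed
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
# requests without matching limits --> Burstable; no requests/limits at all --> BestEffort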

PodPreset

You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables into pods at creation time. This task shows some examples of using the PodPreset resource.

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}

ReplicaSet

Features:

  • Scaling and healing
  • Pod template
  • number of replicas

Components:

  • Pod template

  • Pod selector (could use matchExpressions)

  • Label of replicaSet

  • Number of replicas

  • Could delete a replicaSet without its pods using --cascade=false

  • Isolating pods from replicaSet by changing its labels
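
A minimal ReplicaSet sketch (names and image are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend                 # illustrative name
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend              # must match the pod template labels
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9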

Deployments

  • versioning and rollback

  • Contains spec of replicaSet within it

  • advanced deployment

  • blue-green

  • canary

  • Update containers --> new replicaSet & new pods created --> old RS still exists --> reduced to zero

  • Every change is tracked

  • Append --record in kubectl to keep history

  • Update strategy

    • Recreate
      • Old pods would be killed before new pods come up
    • RollingUpdate
      • progressDeadlineSeconds
      • minReadySeconds
      • rollbackTo
      • revisionHistoryLimit
      • paused
        • spec.Paused
  • kubectl rollout undo deployment/<> --to-revision=<>

  • kubectl rollout status deployment/<>

  • kubectl set image deployment/<> <>=<>:<>

  • kubectl rollout resume/pause <>

ReplicationController

  • RC predates (and roughly combines) RS + Deployment
  • Obsolete

DaemonSet

  • Ensure all nodes run a copy of pod
  • Cluster storage, log collection, node monitor ...
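
A minimal DaemonSet sketch (the log-collector name and fluentd image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector            # illustrative name
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      tolerations:               # optionally also run on tainted master nodes
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd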

StatefulSet

  • Maintains a sticky identity
  • Not interchangeable
  • Identity is maintained across any rescheduling

Limitation

  • volumes must be pre-provisioned
  • Deleting / Scaling will not delete associated volumes

Flow

  • Deployed 0 --> (n-1)
  • Deleted (n-1) --> 0 (successors must be completely shut down before proceeding)
  • All pods must be ready and running before scaling happens

Job (batch/v1)

  • Non-parallel jobs
  • Parallel jobs
    • Fixed completion count
      • job completes when number of completions reaches target
    • With work queue
      • requires coordination
  • Use spec.activeDeadlineSeconds to prevent infinite loop
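
A minimal fixed-completion-count Job sketch (names, counts and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job                # illustrative name
spec:
  completions: 5                 # job is done after 5 successful pods
  parallelism: 2                 # at most 2 pods run at the same time
  activeDeadlineSeconds: 600     # fail the job if it runs longer than this
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never       # Never or OnFailure; Always is not allowed for Jobs
      containers:
      - name: worker
        image: busybox
        command: ['sh', '-c', 'echo processing && sleep 5']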

Cronjob

  • Job should be idempotent

Horizontal pod autoscaler

  • Targets: replicationControllers, deployments, replicaSets
  • CPU or custom metrics
  • Won't work with non-scaling objects: daemonSets
  • Prevent thrashing (upscale/downscale-delay)
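
A minimal HPA sketch targeting a hypothetical frontend Deployment (numbers are illustrative):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa             # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend               # hypothetical deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70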

Services

Credit: https://www.reddit.com/user/__brennerm/

  • Logical set of backend pods + frontend

  • Frontend: static IP + port + dns name

  • Backend: set of backend pods (via selector)

  • Static IP and networking.

  • Kube-proxy routes traffic to the VIP.

  • Endpoints are created automatically based on the selector.

  • ClusterIP (default; see the Service sketch after this list)

  • NodePort

    • external --> NodeIP + NodePort --> kube-proxy --> ClusterIP
  • LoadBalancer

    • Need to have cloud-controller-manager
      • Node controller
      • Route controller
      • Service controller
      • Volume controller
    • external --> LB --> NodeIP + NodePort --> kube-proxy --> ClusterIP
  • ExternalName

    • Can only resolve with kube-dns
    • No selector
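
A minimal NodePort Service sketch (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: frontend                 # illustrative name
spec:
  type: NodePort                 # ClusterIP is the default; LoadBalancer needs a cloud provider
  selector:
    app: frontend                # backend pods selected by label
  ports:
  - port: 80                     # ClusterIP port
    targetPort: 8080             # container port
    nodePort: 30080              # optional; allocated from 30000-32767 if omitted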

Service discovery

  • SRV record for named port
    • port-name.port-protocol.service-name.namespace.svc.cluster.local
  • Pod domain
    • pod-ip-address.namespace.pod.cluster.local
    • hostname is metadata.name

spec.dnsPolicy

  • Default
    • inherits the node's name resolution
  • ClusterFirst
    • Any DNS query that does not match the configured cluster domain suffix, such as “www.kubernetes.io”, is forwarded to the upstream nameserver inherited from the node
  • ClusterFirstWithHostNet
    • if host network = true
  • None (since k8s 1.9)
    • Allow custom dns server usage

Headless service

  • with selector? --> associate with pods in cluster
  • without selector? --> forward to externalName

An externalIP can also be specified on a Service
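
A minimal headless Service sketch (name, selector and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: db-headless              # illustrative name
spec:
  clusterIP: None                # headless: no VIP, DNS returns the pod IPs directly
  selector:
    app: db
  ports:
  - port: 5432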

Volumes

Credit: https://www.reddit.com/user/__brennerm/

A volume's lifetime is longer than that of any container inside the pod.

Common types:

  • configMap

  • emptyDir

    • share space / state across containers in same pod
    • containers can mount it at different paths
    • pod crash --> data lost
    • container crash --> ok
  • gitRepo

  • secret

    • stored in RAM (tmpfs)
  • hostPath
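
A sketch of two containers sharing an emptyDir volume (names and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch           # illustrative name
spec:
  volumes:
  - name: scratch
    emptyDir: {}                 # lives as long as the pod; survives container crashes
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'echo hello > /data/msg; sleep 3600']
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: reader
    image: busybox
    command: ['sh', '-c', 'sleep 5; cat /data/msg; sleep 3600']
    volumeMounts:
    - name: scratch
      mountPath: /data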

Persistent volumes

Role-Based Access Control (RBAC)

Credit: https://www.reddit.com/user/__brennerm/

  • Role
    • Applies to namespaced resources
  • ClusterRole
    • cluster-scoped resources (nodes,...)
    • non-resources endpoint (/healthz)
    • namespaced resources across all namespaces
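
A minimal Role + RoleBinding sketch (pod-reader and the user jane are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader               # illustrative name
rules:
- apiGroups: [""]                # "" = core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                     # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io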

Custom Resource Definitions

CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
    # categories is a list of grouped resources the custom resource belongs to.
    categories:
    - all
  validation:
   # openAPIV3Schema is the schema for validating custom objects.
    openAPIV3Schema:
      properties:
        spec:
          properties:
            cronSpec:
              type: string
              pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
            replicas:
              type: integer
              minimum: 1
              maximum: 10
  # subresources describes the subresources for custom resources.
  subresources:
    # status enables the status subresource.
    status: {}
    # scale enables the scale subresource.
    scale:
      # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.
      specReplicasPath: .spec.replicas
      # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.
      statusReplicasPath: .status.replicas
      # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.
      labelSelectorPath: .status.labelSelector

Notes

Basic commands

# show current context
kubectl config current-context

# get specific resource
kubectl get (pod|svc|deployment|ingress) <resource-name>

# Get pod logs
kubectl logs -f <pod-name>

# Get nodes list
kubectl get no -o custom-columns=NAME:.metadata.name,AWS-INSTANCE:.spec.externalID,AGE:.metadata.creationTimestamp

# Run specific command | Drop to shell
kubectl exec -it <pod-name> <command>

# Describe specific resource
kubectl describe (pod|svc|deployment|ingress) <resource-name>

# Set context
kubectl config set-context $(kubectl config current-context) --namespace=<namespace-name>

# Run a test pod
kubectl run -it --rm --generator=run-pod/v1 --image=alpine:3.6 tuan-shell -- sh
  • from @so0k link

  • access dashboard

# bash
kubectl -n kube-system port-forward $(kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

# fish
kubectl -n kube-system port-forward (kubectl get pods -n kube-system -o wide | grep dashboard | awk '{print $1}') 9090

jsonpath

From link

{
  "kind": "List",
  "items":[
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.1"},
      "status":{
        "capacity":{"cpu":"4"},
        "addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
      }
    },
    {
      "kind":"None",
      "metadata":{"name":"127.0.0.2"},
      "status":{
        "capacity":{"cpu":"8"},
        "addresses":[
          {"type": "LegacyHostIP", "address":"127.0.0.2"},
          {"type": "another", "address":"127.0.0.3"}
        ]
      }
    }
  ],
  "users":[
    {
      "name": "myself",
      "user": {}
    },
    {
      "name": "e2e",
      "user": {"username": "admin", "password": "secret"}
    }
  ]
}
| Function          | Description              | Example                                                        | Result                                          |
| ----------------- | ------------------------ | -------------------------------------------------------------- | ----------------------------------------------- |
| text              | the plain text           | kind is {.kind}                                                | kind is List                                    |
| @                 | the current object       | {@}                                                            | the same as input                               |
| . or []           | child operator           | {.kind} or {['kind']}                                          | List                                            |
| ..                | recursive descent        | {..name}                                                       | 127.0.0.1 127.0.0.2 myself e2e                  |
| *                 | wildcard, get all objects| {.items[*].metadata.name}                                      | [127.0.0.1 127.0.0.2]                           |
| [start:end:step]  | subscript operator       | {.users[0].name}                                               | myself                                          |
| [,]               | union operator           | {.items[*]['metadata.name', 'status.capacity']}                | 127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]       |
| ?()               | filter                   | {.users[?(@.name=="e2e")].user.password}                       | secret                                          |
| range, end        | iterate list             | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end}  | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| ''                | quote interpreted string | {range .items[*]}{.metadata.name}{'\t'}{end}                   | 127.0.0.1  127.0.0.2                            |

Below are some examples using jsonpath:

$ kubectl get pods -o json
$ kubectl get pods -o=jsonpath='{@}'
$ kubectl get pods -o=jsonpath='{.items[0]}'
$ kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
$ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'

Resource limit

CPU

The CPU resource is measured in cpu units. One cpu, in Kubernetes, is equivalent to:

  • 1 AWS vCPU
  • 1 GCP Core
  • 1 Azure vCore
  • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

Memory

The memory resource is measured in bytes. You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent approximately the same value:

128974848, 129e6, 129M, 123Mi
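
A sketch of container requests/limits using these units (values are illustrative):

resources:
  requests:
    cpu: "250m"                  # 0.25 cpu
    memory: "64Mi"
  limits:
    cpu: "500m"                  # 0.5 cpu
    memory: "128Mi"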

Chapter 13. Integrating storage solutions and Kubernetes

  • External service without selector (access via the external-database.default.svc.cluster.local DNS name)
kind: Service
apiVersion: v1
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: "database.company.com"
  • external service with IP only
kind: Service
apiVersion: v1
metadata:
  name: external-ip-database
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-ip-database
subsets:
  - addresses:
    - ip: 192.168.0.1
    ports:
    - port: 3306

Downward API

The following information is available to containers through environment variables and downwardAPI volumes:

Information available via fieldRef:

  • spec.nodeName - the node’s name
  • status.hostIP - the node’s IP
  • metadata.name - the pod’s name
  • metadata.namespace - the pod’s namespace
  • status.podIP - the pod’s IP address
  • spec.serviceAccountName - the pod’s service account name
  • metadata.uid - the pod’s UID
  • metadata.labels['<KEY>'] - the value of the pod's label <KEY> (for example, metadata.labels['mylabel']); available in Kubernetes 1.9+
  • metadata.annotations['<KEY>'] - the value of the pod's annotation <KEY> (for example, metadata.annotations['myannotation']); available in Kubernetes 1.9+

Information available via resourceFieldRef:

  • A Container’s CPU limit
  • A Container’s CPU request
  • A Container’s memory limit
  • A Container’s memory request

In addition, the following information is available through downwardAPI volume fieldRef:

  • metadata.labels - all of the pod’s labels, formatted as label-key="escaped-label-value" with one label per line
  • metadata.annotations - all of the pod’s annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line
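
A sketch exposing some of these fields as environment variables (the container name app is a hypothetical assumption):

env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app         # hypothetical container name
      resource: limits.cpu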

Labs

Guaranteed Scheduling For Critical Add-On Pods

See link

  • Marking pod as critical when using Rescheduler. To be considered critical, the pod has to:
    • Run in the kube-system namespace (configurable via flag)
    • Have the scheduler.alpha.kubernetes.io/critical-pod annotation set to empty string
    • Have the PodSpec’s tolerations field set to [{"key":"CriticalAddonsOnly", "operator":"Exists"}].

The first one marks a pod as critical. The second one is required by the Rescheduler algorithm.

  • Marking pod as critical when priorites are enabled. To be considered critical, the pod has to:
    • Run in the kube-system namespace (configurable via flag)
    • Have priorityClassName set to system-cluster-critical or system-node-critical, the latter being the highest priority in the entire cluster
    • Have the scheduler.alpha.kubernetes.io/critical-pod annotation set to the empty string (this will be deprecated too)

Set command or arguments via env

env:
- name: MESSAGE
  value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]

Helm CheatSheet

Get Started


Structure

.
├── Chart.yaml --> metadata info
├── README.md
├── requirements.yaml --> define dependencies
├── templates
│   ├── spark-master-deployment.yaml --> configuration with templating support
│   ├── spark-worker-deployment.yaml
│   ├── spark-zeppelin-deployment.yaml
│   ├── NOTES.txt --> displayed after "helm install" / "helm status"
│   └── _helpers.tpl --> template helpers
├── values.yaml --> variable list, interpolated into the template files during deployment
└── charts
    └── apache/
        └── Chart.yaml
  • Chart.yaml
  name: The name of the chart (required)
  version: A SemVer 2 version (required)
  description: A single-sentence description of this project (optional)
  keywords:
    - A list of keywords about this project (optional)
  home: The URL of this project's home page (optional)
  sources:
    - A list of URLs to source code for this project (optional)
  maintainers: # (optional)
    - name: The maintainer's name (required for each maintainer)
      email: The maintainer's email (optional for each maintainer)
  engine: gotpl # The name of the template engine (optional, defaults to gotpl)
  icon: A URL to an SVG or PNG image to be used as an icon (optional).
  appVersion: The version of the app that this contains (optional). This needn't be SemVer.
  deprecated: Whether or not this chart is deprecated (optional, boolean)
  tillerVersion: The version of Tiller that this chart requires. This should be expressed as a SemVer range: ">2.0.0" (optional)
  • requirements.yaml

  • alias - Adding an alias for a dependency chart puts the chart into dependencies using the alias as the name of the new dependency.
  • condition - The condition field holds one or more YAML paths (delimited by commas). If such a path exists in the top parent's values and resolves to a boolean value, the chart is enabled or disabled based on that value. Only the first valid path found in the list is evaluated; if no paths exist, the condition has no effect.
  • tags - The tags field is a YAML list of labels to associate with this chart. In the top parent's values, all charts with tags can be enabled or disabled by specifying the tag and a boolean value. Conditions (when set in values) always override tags.

  dependencies:
  - name: apache
    version: 1.2.3
    repository: http://example.com/charts
    alias: new-subchart-1
    condition: subchart1.enabled, global.subchart1.enabled
    tags:
      - front-end
      - subchart1

  - name: mysql
    version: 3.2.1
    repository: http://another.example.com/charts
    alias: new-subchart-2
    condition: subchart2.enabled,global.subchart2.enabled
    tags:
      - back-end
      - subchart1

General Usage

  helm list --all
  helm repo (list|add|update)
  helm search
  helm inspect <chart-name>
  helm install --set a=b -f config.yaml <chart-name> -n <release-name> # --set takes precedence, merged into -f
  helm status <deployment-name>
  helm delete <deployment-name>
  helm inspect values <chart-name>
  helm upgrade -f config.yaml <deployment-name> <chart-name>
  helm rollback <deployment-name> <version>

  helm create <chart-name>
  helm package <chart-name>
  helm lint <chart-name>

  helm dep up <chart-name> # update dependency
  helm get manifest <deployment-name> # prints out all of the Kubernetes resources that were uploaded to the server
  helm install --debug --dry-run <deployment-name> # it will return the rendered template to you so you can see the output
  • --set outer.inner=value is translated into this:
  outer:
    inner: value
  • --set servers[0].port=80,servers[0].host=example:
  servers:
  - port: 80
    host: example
  • --set name={a, b, c} translates to:
  name:
  - a
  - b
  - c
  • --set name=value1,value2:
  name: "value1,value2"
  • --set nodeSelector."kubernetes.io/role"=master
  nodeSelector:
    kubernetes.io/role: master
  • --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] --set livenessProbe.httpGet=null
livenessProbe:
-  httpGet:
-    path: /user/login
-    port: http
  initialDelaySeconds: 120
+  exec:
+    command:
+    - cat
+    - docroot/CHANGELOG.txt
  • --timeout
  • --wait
  • --no-hooks
  • --recreate-pods

Template

Values that are supplied via a values.yaml file (or via the --set flag) are accessible from the .Values object in a template

Release.Name: The release name.
Release.Time: The time of the release.
Release.Namespace: The namespace the chart was released to.
Release.Service: The service that conducted the release. Usually this is Tiller.
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.IsInstall: This is set to true if the current operation is an install.
Release.Revision: The revision number. It begins at 1, and increments with each helm upgrade.
Chart: The contents of the Chart.yaml. Thus, the chart version is obtainable as "Chart.Version" and the maintainers are in "Chart.Maintainers".
Files: Files can be accessed using {{index .Files "file.name"}} or using the "{{.Files.Get name}}" or "{{.Files.GetString name}}" functions. You can also access the contents of the file as []byte using "{{.Files.GetBytes}}"
Capabilities: Provides information about the Kubernetes version ({{.Capabilities.KubeVersion}}), the Tiller version ({{.Capabilities.TillerVersion}}), and whether a Kubernetes API version is supported ({{.Capabilities.APIVersions.Has "batch/v1"}})

{{ .Files.Get "config.ini" }}
{{.Files.GetBytes}} useful for things like images

{{.Template.Name}}
{{.Template.BasePath}}
  • default value
{{default "minio" .Values.storage}}

# same result
{{ .Values.storage | default "minio" }}
  • put a quote outside
heritage: {{.Release.Service | quote }}

# same result
heritage: {{ quote .Release.Service }}
  • global variable
global:
  app: MyWordPress

# can be accessed as "{{ .Values.global.app }}"
  • Includes a template called mytpl.tpl, then lowercases the result, then wraps that in double quotes
value: {{include "mytpl.tpl" . | lower | quote}}
  • required function declares an entry for .Values.who is required, and will print an error message when that entry is missing
value: {{required "A valid .Values.who entry required!" .Values.who }}
  • The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
  • The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation

  • In the templates/ directory, any file that begins with an underscore(_) is not expected to output a Kubernetes manifest file. So by convention, helper templates and partials are placed in a _helpers.tpl file.

Hooks

Read more

  • include these annotations inside the hook yaml file, e.g. templates/post-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install, post-upgrade
    "helm.sh/hook-weight": "-5"

Chart Repository

Read more

Signing

Read more

Test

Read more

Flow Control


If/Else

{{ if PIPELINE }}
  # Do something
{{ else if OTHER PIPELINE }}
  # Do something else
{{ else }}
  # Default case
{{ end }}

data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "lemonade" }}
  mug: true
  {{- end }} # notice the "-" on the left; it eliminates the newline before this line

With

with allows you to set the current scope (.) to a particular object

data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }} # instead of writing ".Values.favorite.drink"

Inside of the restricted scope, you will not be able to access the other objects from the parent scope

Range

# predefined variable
pizzaToppings:
  - mushrooms
  - cheese
  - peppers
  - onions

toppings: |-
    {{- range $i, $val := .Values.pizzaToppings }}
    - {{ . | title | quote }}  # upper first character, then quote
    {{- end }}

sizes: |-
    {{- range tuple "small" "medium" "large" }}
    - {{ . }}
    {{- end }} # make a quick list

Variables

It follows the form $name. Variables are assigned with a special assignment operator: :=

data:
  myvalue: "Hello World"
  {{- $relname := .Release.Name -}}
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $relname }}
  {{- end }}

# use variable in range
 toppings: |-
    {{- range $index, $topping := .Values.pizzaToppings }}
      {{ $index }}: {{ $topping }}
    {{- end }}

#toppings: |-
#      0: mushrooms
#      1: cheese
#      2: peppers
#      3: onions

{{- range $key,$value := .Values.favorite }}
  {{ $key }}: {{ $value }}
  {{- end }} # instead of specify the key, we can actually loop through the values.yaml file and print values

There is one variable that is always global - $ - this variable will always point to the root context

...
labels:
    # Many helm templates would use `.` below, but that will not work,
    # however `$` will work here
    app: {{ template "fullname" $ }}
    # I cannot reference .Chart.Name, but I can do $.Chart.Name
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
    release: "{{ $.Release.Name }}"
    heritage: "{{ $.Release.Service }}"
...

Named Templates

template names are global

# _helpers.tpl
{{/* Generate basic labels */}}
{{- define "my_labels" }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
    version: {{ .Chart.Version }}
    name: {{ .Chart.Name }}
{{- end }}

When a named template (created with define) is rendered, it will receive the scope passed in by the template call.

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template "my_labels" . }} # Notice the final dot, it will pass the global scope inside template file. Without it version & name will not be �generated.
  {{- include "my_labels" . | indent 2 }} # similar to "template" directive, have the ability to control indentation

Preferable to use include over template: because template is an action, not a function, there is no way to pass the output of a template call to other functions; the data is simply inserted inline.

Files inside Templates

# file located at parent folder
# config1.toml: |-
#   message = config 1 here
# config2.toml: |-
#   message = config 2 here
# config3.toml: |-
#   message = config 3 here

data:
  {{- $file := .Files }} # set variable
  {{- range tuple "config1.toml" "config2.toml" "config3.toml" }} # create list
  {{ . }}: |- # config file name
    {{ $file.Get . }} # get file's content
  {{- end }}

Glob-patterns & encoding

apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
---
apiVersion: v1
kind: Secret
metadata:
  name: very-secret
type: Opaque
data:
{{ (.Files.Glob "bar/*").AsSecrets | indent 2 }}

token: |-
  {{ .Files.Get "config1.toml" | b64enc }}

YAML reference

# Force type
age: !!str 21
port: !!int "80"

# Fake first line to preserve integrity
coffee: | # no strip
  # Commented first line
         Latte
  Cappuccino
  Espresso

coffee: |- # strip off trailing newline
  Latte
  Cappuccino
  Espresso

coffee: |+ # preserve trailing newline
  Latte
  Cappuccino
  Espresso


another: value

myfile: | # insert static file
{{ .Files.Get "myfile.txt" | indent 2 }}

coffee: > # treat as one long (folded) line
  Latte
  Cappuccino
  Espresso
tuannvm commented Jul 12, 2017

Internal ELB configuration for Service:

kind: Service
apiVersion: v1
metadata:
    name: someService
    annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"

AWS Service annotations

  • service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (in minutes)
  • service.beta.kubernetes.io/aws-load-balancer-access-log-enabled (true|false)
  • service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name
  • service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix
  • service.beta.kubernetes.io/aws-load-balancer-backend-protocol (http|https|ssl|tcp)
  • service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled (true|false)
  • service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout (in seconds)
  • service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout (in seconds, default 60)
  • service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled (true|false)
  • service.beta.kubernetes.io/aws-load-balancer-internal: '0.0.0.0/0'
  • service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  • service.beta.kubernetes.io/aws-load-balancer-ssl-cert (IAM or ACM ARN)
  • service.beta.kubernetes.io/aws-load-balancer-ssl-ports (default '*')

tuannvm commented Jul 17, 2017

  • Copy object (secret/configmap) to different namespace:
kubectl get secret gitlab-registry --namespace=revsys-com --export -o yaml |\
   kubectl apply --namespace=devspectrum-dev -f -
  • check logs of multiple pods:
for i in (kpods | grep drone-agent | awk '{print $2}' ); echo $i ; kubectl logs --tail=10 $i; end
  • Get token from service account:
kubectl get sa <service-account-name> -o json | jq -r ".secrets[0].name" | xargs kubectl get secret -o json | jq -r ".data.token" | base64 --decode | pbcopy
  • Get Certificate authority:
# See https://github.com/stedolan/jq/issues/204#issuecomment-27089261
kubectl get sa <service-account-name> -o json | jq -r ".secrets[0].name" | xargs kubectl get secret -o json | jq '.data["ca.crt"]' | pbcopy
  • Start minikube
minikube start --kubernetes-version v1.9.11 --cpus 4 --memory 8092

Access service from different namespace:

<service-name>.<namespace>.svc.cluster.local
  • Check all supported api versions:
kubectl api-versions
  • Convenient one-line script to check reachability:
for from in "foo" "bar" "legacy"; 
  do for to in "foo" "bar" "legacy"; 
    do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; 
  done; 
done
  • Get default token:
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default-token | cut -f1 -d ' ' | head -1) | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
  • Get raw http response:
kubectl get --raw /apis/batch/v1

tuannvm commented Jul 18, 2017

  • secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "{{ template "fullname" . }}"
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "{{ template "fullname" . }}"
type: Opaque
data:
  {{- range $key, $value :=  .Values.secrets }}
  {{ $key }}: {{ $value | b64enc | quote}}
  {{- end }}
  • values.yaml:
secrets:
  aws-access-key-id: ""
  aws-secret-access-key: ""
  • inside the templates:
env:
        {{- range $key, $value :=  .Values.config }}
        - name: {{ $key | upper | replace "-" "_" }}
          value: {{ $value | quote }}
        {{- end}}
        {{- range $key, $value :=  .Values.secrets }}
        - name: {{ $key | upper | replace "-" "_" }}
          valueFrom:
            secretKeyRef:
              name: {{ template "fullname" $ }}
              key: {{ $key }}
        {{- end }}

tuannvm commented Jul 24, 2017

Upgrade with setting new value and retain the old one:

helm upgrade <release-name> <chart-path>/ --reuse-values --set <key>=<value>

tuannvm commented Jul 27, 2017

  • Add checksum annotation to allow deployment upgrade while configmap/secret changed:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    app: "{{ template "fullname" . }}"
  annotations:
    checksum/config-map: {{ include (print $.Chart.Name "/templates/secret.yaml") . | sha256sum }}

tuannvm commented Jul 31, 2017

  • Drain kubernetes node:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force

tuannvm commented Jul 31, 2017

  • Create kubernetes user link

  • Delete evicted pod:

kubectl get po --all-namespaces -o json | \
jq  '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | 
"kubectl delete po \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
  • check init container logs:
kubectl logs <pod-name> -c <init-container-name>

tuannvm commented Sep 22, 2017

  • run in non-exit shell to debug:
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]

tuannvm commented Dec 13, 2017

jq with dash -: jqlang/jq#38 (comment)

tuannvm commented Jan 11, 2018

sample deployment:

apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

@bikranz4u

Thanks for the Cheat Sheet.

@nalinguptalinux

Thanks

@prashanth-sams

Nice! If you wish to see few more in details...
https://devopsqa.wordpress.com/2020/01/29/helm-cli-cheatsheet/

sedkis commented Apr 23, 2020

Thank you

tuannvm commented Sep 1, 2020

http://masterminds.github.io/sprig/defaults.html#ternary
http://masterminds.github.io/sprig/integer_slice.html#untilStep

# if env == qa --> $count = envCount, otherwise 1
{{ $count := ternary .Values.envCount 1 (eq "qa" .Values.env) }}
# generate list from 0 to $count
{{- range $var := untilStep 0 (int $count) 1 }}

Define and use new variable

{{ $foo := print .Values.bar "-" .Values.pub }}
{{- if eq $foo .Values.disco }}
{{- end }}

tuannvm commented Nov 11, 2020

Use variable in values.yaml

# values.yaml

foo:
  foo1: bar1
  foo2: {{ .Release.Namespace }}
# deployment.yaml

{{ tpl (toYaml .Values.foo) . | indent 2 }}

tuannvm commented Apr 11, 2021

  • Check API access:
kubectl auth can-i create deployments --namespace dev
kubectl auth can-i list secrets --namespace dev --as dave
  • Get list of nodes created before / after a specific date
kubectl get node -o json | jq -r '.items[] | select (.metadata.creationTimestamp <= "2021-04-13") | .metadata.name'
  • Get pod with x restart counts:
kubectl get pod -l <label> -o json | jq -r '.items[] | select (.status.containerStatuses != null) | {container_status: .status.containerStatuses[], name: .metadata.name} | select (.container_status.restartCount > <restart_count>) | .name'

tuannvm commented Sep 27, 2021

  • Get list of images used in deployments:
kubectl get deployment -o json | jq -r '.items[] | .spec.template.spec.containers[].image' | sort | uniq
