@federico-garcia
Last active December 6, 2021 20:43
kubernetes tooling, app development and deployment

Concepts

Cloud Native apps = cloud + orchestration + containers

Local k8s cluster

Installing a k8s cluster on your machine

-- install the k8s CLI tool kubectl for sending commands to the k8s API server
-- you may want to alias kubectl to something shorter like k: add alias k=kubectl
-- to your bash|zsh profile
brew install kubernetes-cli
-- install a local single-node k8s cluster. It requires VirtualBox or other virtualization software
brew install minikube

Starting and stopping local cluster

minikube start
minikube status
minikube stop

Networking

When you create a Service object, it can be one of the following types: ClusterIP, NodePort or LoadBalancer
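A minimal Service manifest makes the type field concrete. This is a sketch; the hello-world name, selector and port numbers are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world        # illustrative name
spec:
  type: ClusterIP          # the default; omitting `type` gives you ClusterIP
  selector:
    app: hello-world       # routes to pods carrying this label
  ports:
    - port: 8080           # port the Service listens on inside the cluster
      targetPort: 8080     # port the container exposes
```

Swapping `type` to NodePort or LoadBalancer changes how the service is exposed, as described below.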

Port-forwarding the traffic on your local env to the containerized app

-- this is useful for debugging apps that are not available outside the cluster (exposed via ClusterIP, default)
kubectl port-forward <pod-name> <local-port>:<container-port>
kubectl port-forward hello-world-8559477f6-r5ksm 3000:8080

Exposing services/apps using NodePort

-- With NodePort, the service gets a ClusterIP and is also reachable on each cluster node's IP at a port in the 30000-32767 range
-- Take into account you need to get the IP of one of the cluster nodes
minikube ip
-- to get the port assigned to the service (Ports column)
kubectl get service
open http://<node-ip>:<node-port>
open http://192.168.64.2:30143
-- in minikube, to open the service's URL just type:
open $(minikube service <service-name> --url)
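The same hypothetical hello-world service exposed as a NodePort might look like this (nodePort is optional; if omitted, k8s picks one from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world        # illustrative name
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 8080           # ClusterIP port inside the cluster
      targetPort: 8080     # container port
      nodePort: 30143      # optional; must fall in 30000-32767
```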

Exposing services/apps using LoadBalancer

-- This is exactly like NodePort plus an external IP to access the service/app. It only works in cloud environments.
-- In your local env, the external IP stays in a pending state
-- You cannot configure a LoadBalancer to terminate HTTPS traffic, do virtual hosts or do path-based routing
-- each service you expose with a LoadBalancer gets its own IP address, which can get expensive!
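A LoadBalancer manifest is the same sketch with only the type changed; in a cloud environment the provider then provisions an external IP (names and ports are again illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world        # illustrative name
spec:
  type: LoadBalancer       # cloud provider assigns an external IP
  selector:
    app: hello-world
  ports:
    - port: 80             # port exposed on the external IP
      targetPort: 8080     # container port
```

On minikube, `kubectl get service hello-world` would show EXTERNAL-IP stuck at `<pending>`, as noted above.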

Set up Ingress on Minikube with the NGINX Ingress Controller

-- Ingress is not a type of service; it sits in front of multiple services and acts as a reverse proxy
minikube addons enable ingress
-- verify the nginx ingress controller is running
kubectl get pods -n kube-system

Create a file to describe the nginx ingress object (indentation matters; mistakes cause validation errors). Depending on the ingress controller you use, you need to add different annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 8080
-- Create the ingress
kubectl apply -f ingress-nginx.yaml
-- Verify it was created properly. It displays the host and the IP you need for the next step
kubectl get ingress
-- Add the following line to the bottom of the /etc/hosts file on your machine, based on the info from the previous step. NOTE: if you are using minikube, the IP should be the external cluster IP: minikube ip
-- 192.168.64.2 hello-world.info
-- Make sure that the ingress controller is directing traffic:
curl hello-world.info
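Unlike a LoadBalancer, an Ingress can do path-based routing to several services behind one IP. A hedged sketch routing two hypothetical backends (api-service and web-service are assumptions, not services from this gist):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress       # illustrative name
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # hypothetical backend
                port:
                  number: 8080
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service   # hypothetical backend
                port:
                  number: 3000
```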

Tools

kubectl

Run this command to see instructions on how to enable auto-completion for your shell

kubectl completion -h

Getting help

kubectl -h
kubectl get -h
kubectl explain pods
kubectl explain pods.kind

Print information in json format

kubectl get pods -o json

Watch for updates instead of re-running the same command multiple times

kubectl get pods --watch

Generate a YAML file based on an imperative command

kubectl run demo --image=cloudnatived/demo:hello --dry-run=client -o yaml > sample.yaml

Generate a YAML file for existing k8s objects (--export is deprecated with no direct replacement; plain -o yaml works but includes status fields)

kubectl get deployments hello-world -o yaml --export > sample.yaml

Before applying a k8s manifest file (yaml), it is considered best practice to use the diff command to check what's going to change and whether the current state of the object is out of sync with the YAML file

kubectl diff -f <filename>
kubectl diff -f ingress-nginx.yaml

Switching context (for managing multiple clusters)

// install kubectx and kubens
brew install kubectx
// list all contexts
kubectx
// switch to a given context
kubectx <context-name>
// switch to previous context
kubectx -

Switching namespace (namespaces are used for isolating teams/apps within a cluster)

// list all namespaces
kubens
// switch to a given namespace
kubens <namespace>
// switch to previous namespace
kubens -

Terminal UI to manage your k8s cluster (k9s)

brew install derailed/k9s/k9s
k9s

Operating Clusters

  • k8s cluster: at least 3 master nodes (none if you're using a managed service), at least 2 worker nodes (preferred 3 worker nodes, max: 5k nodes, 150k pods, 300k containers, 100 pods per worker).
  • Federated clusters. 2 or more clusters synchronized running the same workload. e.g resiliency (different cloud providers) or reduced latency (different geo locations).
  • 1 cluster for prod, logically separated by namespaces (1 per team?). 1 cluster for staging and test. One cluster for isolating special workloads like HIPAA-compliant apps.
  • Master capacity: at least 1 vCPU and 3-4 GB RAM. The same is true for a worker node.
  • All worker nodes should run a Linux flavor OS, unless you need to run Windows-based apps, in which case you need worker nodes running Windows.
  • Don't shut down worker nodes when you don't need them; drain them first.
  • Scale your cluster manually at first to get an understanding of how resource consumption changes over time. $$
  • Once your cluster is set up for the first time, run Sonobuoy to make sure everything is working properly.
  • To check custom policies in k8s manifest files, use a configuration file validator such as copper or kubeval.
  • kube-bench checks whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.
  • Enable Audit Logging to record all requests to the Cluster API. It logs when, who and what.
  • Automated chaos testing in your k8s cluster: chaoskube, kube-monkey, powerfulseal. Make sure the serviceaccount that Helm uses has the right permissions to list/kill pods in the namespace you want to test; assign the proper role/clusterrole to that service account.
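The chaos-testing permissions mentioned above could be granted with a namespaced Role and RoleBinding along these lines (the chaoskube name and test namespace are illustrative assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaoskube            # hypothetical name
  namespace: test            # the namespace you want to chaos-test
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "delete"]  # list pods and kill (delete) them
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaoskube
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: chaoskube
subjects:
  - kind: ServiceAccount
    name: chaoskube          # the serviceaccount your chaos tool runs as
    namespace: test
```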