- Bash v5+ (see Upgrading Bash on macOS)
- bash-completion@2
Installing Docker and Kubernetes on macOS is easy.
Download and install Docker for Mac, Edge version (download link). After installation, you get the Docker engine with an option to enable Kubernetes, plus the kubectl CLI tool, on your macOS.
brew install bash-completion@2
Paste this into your ~/.extra or ~/.bash_profile file:
# bash-completion used with Bash v5+
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl
alias k=kubectl
complete -F __start_kubectl k
- After Docker for Mac is installed, configure it with sufficient resources. You can do that via the Advanced menu in Docker for Mac's preferences. Set CPUs to at least 4 and Memory to at least 8.0 GiB.
- Now enable Docker for Mac's Kubernetes capabilities and wait for the cluster to start up.
- Install the Kubernetic app. It works as a replacement for kubernetes-dashboard.
- Follow the instructions here and here to set up Istio and Knative.
Skaffold is a command-line tool (from Google) that facilitates continuous development for Kubernetes applications. It also provides building blocks and describes customizations for a CI/CD pipeline.
brew install skaffold
skaffold version
Helm has a client-side CLI and a server-side Tiller component. Install Helm via brew. More info here.
# install helm cli on mac with brew
brew install kubernetes-helm
Install Tiller into the kube-system namespace. This will install Tiller to your running Kubernetes cluster and set up any necessary local configuration.
helm init
# check version
helm version
# show if tiller is installed
kubectl get pods --namespace kube-system
# upgrade helm version
helm init --upgrade
# update charts repo
helm repo update
# install postgres chart
# helm install --name nginx stable/nginx-ingress
helm install --name pg --namespace default --set postgresPassword=postgres,persistence.size=1Gi stable/postgresql
kubectl get pods -n default
# list installed charts
helm ls
# delete the postgres release (installed above as "pg")
helm delete pg
# delete the postgres release and purge it
helm delete --purge pg
helm create mychart
This will create a folder which includes all the files necessary to create your own chart:
├── Chart.yaml
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── ingress.yaml
│ └── service.yaml
└── values.yaml
Optionally add the helm-secrets plugin:
helm plugin install https://github.com/futuresimple/helm-secrets
Based on Docker for Mac with Kubernetes — Ingress Controller with Traefik.
cd .deploy/traefik
- Create a file called traefik-values.yaml:
dashboard:
  enabled: true
  domain: traefik.k8s
ssl:
  enabled: true
  insecureSkipVerify: true
kubernetes:
  namespaces:
    - default
    - kube-system
- Install the Traefik chart and check if the pod is up and running.
helm install stable/traefik --name=traefik --namespace=kube-system -f traefik-values.yaml
kubectl get pods --namespace=kube-system
kubectl get ingress traefik-dashboard --namespace=kube-system -o yaml
# to see traefik logs
kubectl logs $(kubectl get pods --namespace=kube-system -lapp=traefik -o jsonpath='{.items[0].metadata.name}') -f --namespace=kube-system
# To update, if you change traefik-values.yaml later
helm upgrade --namespace=kube-system -f traefik-values.yaml traefik stable/traefik
- Add your domains to macOS /etc/hosts as needed. Other options: wildcard DNS in localhost development (1, 2).
127.0.0.1 localhost traefik.k8s web.traefik.k8s keycloak.traefik.k8s
- Deploy the K8s dashboard and check if the pod is up and running.
cd .deploy/traefik
git clone https://github.com/thmshmm/chart-k8s-dashboard.git k8s-dshbrd/
helm install k8s-dshbrd --name kubernetes-dashboard --namespace=kube-system
kubectl get ingress kubernetes-dashboard --namespace=kube-system -o yaml
Kompose is a CLI tool to convert Docker Compose files to Kubernetes manifests.
# install
brew install kompose
# to use
kompose convert -f docker-compose.yaml
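If you don't have a Compose file handy, here is a minimal docker-compose.yaml to try the conversion on (the web service name and nginx image are just illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:1.19   # illustrative image
    ports:
      - "8080:80"       # host:container
```

Running kompose convert against this file emits a Deployment and a Service manifest for the web service.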
Optionally add Kubernetes prompt info for bash:
brew install kube-ps1
kubefwd is a command-line utility built to port-forward some or all pods within a Kubernetes namespace.
# If you are running macOS and use Homebrew, you can install kubefwd directly from the txn2 tap:
brew install txn2/tap/kubefwd
# To upgrade
brew upgrade kubefwd
# Forward all services for the namespace the-project:
sudo kubefwd services -n the-project
# Forward all services for the namespace the-project where labeled system: wx:
sudo kubefwd services -l system=wx -n the-project
To read more on kubectl, check out the Kubectl Cheat Sheet.
Commonly used kubectl commands
You can practice kubectl commands at the Katacoda playground.
kubectl version
kubectl cluster-info
kubectl get storageclass
kubectl get nodes
kubectl get ep kube-dns --namespace=kube-system
kubectl get persistentvolume
kubectl get PersistentVolumeClaim --namespace default
kubectl get pods --namespace kube-system
kubectl get ep
kubectl get sa
kubectl get serviceaccount
kubectl get clusterroles
kubectl get roles
kubectl get ClusterRoleBinding
# Show Merged kubeconfig settings.
kubectl config view
kubectl config get-contexts
# Display the current-context
kubectl config current-context
kubectl config use-context docker-desktop
kubectl port-forward service/ok 8080:8080 8081:80 -n the-project
# Delete evicted pods
kubectl get po --all-namespaces | awk '{if ($4 ~ /Evicted/) system ("kubectl -n " $1 " delete pods " $2)}'
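The awk filter above can be dry-run offline against sample output before pointing it at a real cluster. This sketch replaces system() with print so it only shows the delete commands it would generate (the pod names are made up):

```shell
# Simulate `kubectl get po --all-namespaces` output and print (rather than
# run) the delete commands the awk filter would generate.
printf '%s\n' \
  'NAMESPACE   NAME       READY   STATUS' \
  'default     good-pod   1/1     Running' \
  'default     bad-pod    0/1     Evicted' |
awk '{if ($4 ~ /Evicted/) print "kubectl -n " $1 " delete pods " $2}'
# prints: kubectl -n default delete pods bad-pod
```

Once the output looks right, switch print back to system() to actually delete.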
Execute the kubectl Command for Creating Namespaces
# Namespace for Developers
kubectl create -f namespace-dev.json
# Namespace for Testers
kubectl create -f namespace-qa.json
# Namespace for Production
kubectl create -f namespace-prod.json
Assign a Context to Each Namespace
# Assign dev context to development namespace
kubectl config set-context dev --namespace=dev --cluster=minikube --user=minikube
# Assign qa context to QA namespace
kubectl config set-context qa --namespace=qa --cluster=minikube --user=minikube
# Assign prod context to production namespace
kubectl config set-context prod --namespace=prod --cluster=minikube --user=minikube
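The three set-context calls above follow one pattern, so they can be generated with a loop; echoing the commands first lets you review them before piping to sh (the minikube cluster and user names come from the example above):

```shell
# print one kubectl config set-context command per namespace;
# pipe the output to sh (or drop the echo) to actually run them
for ns in dev qa prod; do
  echo kubectl config set-context "$ns" --namespace="$ns" --cluster=minikube --user=minikube
done
```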
Switch to the Appropriate Context
# List contexts
kubectl config get-contexts
# Switch to Dev context
kubectl config use-context dev
# Switch to QA context
kubectl config use-context qa
# Switch to Prod context
kubectl config use-context prod
kubectl config current-context
see cluster-info
kubectl cluster-info
nested kubectl commands
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8082:8088
kubectl proxy creates a proxy server between your machine and the Kubernetes API server. By default it is only accessible locally (from the machine that started it).
kubectl proxy --port=8080
curl http://localhost:8080/api/
curl http://localhost:8080/api/v1/namespaces/default/pods
# get all the logs for a given pod:
kubectl logs my-pod-name
# keep monitoring the logs
kubectl logs -f my-pod-name
# Or if you have multiple containers in the same pod, you can do:
kubectl logs -f my-pod-name -c internal-container-name
# This allows users to view the diff between a locally declared object configuration and the current state of a live object.
kubectl alpha diff -f mything.yml
kubectl exec -it my-pod-name -- /bin/sh
Redeploy a newly built image to an existing k8s deployment
BUILD_NUMBER=1.5.0-SNAPSHOT  # or GIT_SHORT_SHA
kubectl diff -f sample-app-deployment.yaml
kubectl -n=staging set image -f sample-app-deployment.yaml sample-app=xmlking/ngxapp:$BUILD_NUMBER
Once you have run kubectl apply -f manifest.yml:
# To get all the deploys of a deployment, you can do:
kubectl rollout history deployment/DEPLOYMENT-NAME
# Once you know which deploy you’d like to roll back to, you can run the following command (given you’d like to roll back to the 100th deploy):
kubectl rollout undo deployment/DEPLOYMENT_NAME --to-revision=100
# If you’d like to roll back the last deploy, you can simply do:
kubectl rollout undo deployment/DEPLOYMENT_NAME
# Show resource utilization per node:
kubectl top node
# Show resource utilization per pod:
kubectl top pod
# to have a terminal show the output of these commands every 2 seconds without rerunning them by hand, use the watch command:
watch kubectl top node
# --v=8 for debugging
kubectl get po --v=8
k get ep
# ssh into one of the containers and run a dns check:
host <httpd-discovery>
alias k="kubectl"
alias watch="watch "  # trailing space lets the next word (e.g. the k alias) also expand
alias kg="kubectl get"
alias kgdep="kubectl get deployment"
alias ksys="kubectl --namespace=kube-system"
alias kd="kubectl describe"
alias bb="kubectl run busybox --image=busybox:1.30.1 --rm -it --restart=Never --command --"
You can use busybox for debugging inside the cluster:
bb nslookup demo
bb wget -qO- http://demo:8888
bb sh
For better security, add the following securityContext settings to your manifest:
securityContext:
  # Blocking Root Containers
  runAsNonRoot: true
  # Setting a Read-Only Filesystem
  readOnlyRootFilesystem: true
  # Disabling Privilege Escalation
  allowPrivilegeEscalation: false
  # For maximum security, you should drop all capabilities, and only add specific capabilities if they're needed:
  capabilities:
    drop: ["all"]
    add: ["NET_BIND_SERVICE"]
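For context, here is a minimal Pod manifest sketch showing where those settings sit, namely under the container's securityContext (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-demo            # illustrative name
spec:
  containers:
    - name: app
      image: myorg/myapp:1.0   # illustrative image; must run as a non-root user
      securityContext:
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["all"]
          add: ["NET_BIND_SERVICE"]
```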
For many steps here you will want to see what a Pod running in the k8s cluster sees. The simplest way to do this is to run an interactive busybox Pod:
kubectl run -it --rm --restart=Never busybox --image=busybox sh
Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
This allows a user to inspect a running pod without restarting it and without having to enter the container itself to, for example, check the filesystem, execute additional debugging utilities, or initiate network requests from the pod network namespace. Part of the motivation for this enhancement is also to eliminate most uses of SSH for node debugging and maintenance.
# First, create a pod for the example:
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
# add a debugging container
kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
# generate a kubernetes tls secret file
kubectl create secret tls keycloak-secrets-tls \
--key tls.key --cert tls.crt \
-o yaml --dry-run > 02-keycloak-secrets-tls.yml
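If you don't already have tls.key and tls.crt, you can generate a self-signed pair for local testing with openssl (the CN below is just an example domain; use your own):

```shell
# create a self-signed key/cert pair valid for 365 days, no passphrase
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=keycloak.traefik.k8s"
```

Self-signed certs are fine behind Traefik here because insecureSkipVerify is enabled in the values file; don't use them in production.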
In iTerm2:
- split the screen horizontally
- go to the bottom screen and split it vertically
I was using the top screen for working with yaml files and kubectl.
The left bottom screen was running:
watch kubectl get pods
The right bottom screen was running:
watch "kubectl get events --sort-by='{.lastTimestamp}' | tail -6"
With this setup it was easy to observe in real time how my pods were being created.