kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
minikube service hello-minikube --url
kubectl config get-contexts
kubectl config use-context <context_name>
Command | Description |
---|---|
kubectl get pod | Get information about all running pods |
kubectl describe pod <pod> | Describe one pod |
kubectl expose pod <pod> --port=444 --name=frontend | Expose the port of a pod (creates a new service) |
kubectl port-forward <pod> 8080 | Port forward the exposed pod port to your local machine |
kubectl attach <pod> -i | Attach to the pod |
kubectl exec <pod> -- command | Execute a command on the pod |
kubectl label pods <pod> mylabel=awesome | Add a new label to the pod |
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh | Run a shell in a pod - very useful for debugging |
kubectl scale --replicas=4 rc/<rc> | Scale the replication controller <rc> to 4 replicas |
Command | Description |
---|---|
kubectl get deployments | Get information on the current deployments |
kubectl get rs | Get information about the replica sets |
kubectl get pod --show-labels | Get pods, and also show the labels attached to those pods |
kubectl rollout status deployment/<deployment> | Get the status of the rollout |
kubectl set image deployment/<deployment> <image>=<image>:2 | Update the deployment to run <image> with image tag 2 |
kubectl edit deployment/<deployment> | Edit the deployment object |
kubectl rollout history deployment/<deployment> | Get the rollout history |
kubectl rollout undo deployment/<deployment> | Roll back to the previous version |
kubectl rollout undo deployment/<deployment> --to-revision=n | Roll back to revision n |
Labels are useful if you want a certain workload to run only on a certain type of node. For example, you might want to run a machine learning workload on a GPU-enabled node.
You can label a node using this command:
# e.g. kubectl label nodes minikube hardware=high-spec
kubectl label nodes <node_name> <key>=<value>
# Show the labels on all nodes
kubectl get nodes --show-labels
You can target node(s) in the pod definition using a nodeSelector. See https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
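For example, a pod can be pinned to nodes carrying the hardware=high-spec label from the example above. This is only a minimal sketch; the pod name and image are illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: ml-trainer                 # illustrative name
spec:
  containers:
    - name: trainer
      image: busybox               # placeholder image
      command: ["sh", "-c", "echo training && sleep 3600"]
  nodeSelector:
    hardware: high-spec            # only schedule on nodes carrying this label
```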
If your application malfunctions, the pod and container can still be running, but the application might not work anymore.
To detect and resolve problems with your application, you can run health checks.
You can run 2 different types of health checks:
- Running a command in the container periodically
- Periodic checks on a URL (HTTP)
The typical production application behind a load balancer should always have health checks implemented in some way to ensure availability and resiliency of the app.
Example:
livenessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 15
  timeoutSeconds: 30
Besides livenessProbes, you can also use readinessProbes on a container within a Pod:
- livenessProbes: indicate whether a container is running. If the check fails, the container will be restarted
- readinessProbes: indicate whether the container is ready to serve requests. If the check fails, the container will not be restarted, but the Pod's IP address will be removed from the Service, so it won't serve any requests anymore
The readiness test makes sure that at startup, the pod only receives traffic once the test succeeds
You can use them in conjunction, and you can configure different tests for them
If you container always exits when something goes wrong, you don't need a livenessProbe
In general, you configure both the livenessProbes and the readinessProbes
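A container spec that combines both probe types might look like the sketch below; the pod name, image, and the /ready endpoint are illustrative assumptions:

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-probes            # illustrative name
spec:
  containers:
    - name: app
      image: my-app:1.0            # placeholder image, assumed to serve HTTP on port 3000
      ports:
        - containerPort: 3000
      livenessProbe:               # container is restarted if this check fails
        httpGet:
          path: /
          port: 3000
        initialDelaySeconds: 15
        timeoutSeconds: 30
      readinessProbe:              # pod is removed from the Service endpoints if this check fails
        httpGet:
          path: /ready             # assumed readiness endpoint
          port: 3000
        initialDelaySeconds: 5
        timeoutSeconds: 5
```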
Secrets provide a way in Kubernetes to distribute credentials, keys, passwords or other "secret" data to the pods
Kubernetes itself uses this Secrets mechanism to provide the credentials to access the internal API
You can also use the same mechanism to provide secrets to your application
Secrets are one way to provide secrets, native to Kubernetes. There are other ways your container can get its secrets if you don't want to use Secrets (e.g. using an external vault service in your app)
Secrets can be used in the following ways:
- Use secrets as environment variables
- Use secrets as a file in a pod
  - This setup uses volumes to be mounted in a container
  - In this volume you have files
  - Can be used for instance for dotenv files, or your app can just read this file
- Use an external image to pull secrets (from a private image registry)
To generate secrets using files:
echo -n "root" > ./username.txt
echo -n "password" > ./password.txt
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
A secret can also be an SSH key or an SSL certificate
kubectl create secret generic ssl-certificate --from-file=ssh-privatekey=~/.ssh/id_rsa --from-file=ssl-cert=mysslcert.crt
- https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
- https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
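As an illustration of the environment-variable approach, the db-user-pass secret created above could be consumed like this (a minimal sketch; the pod name, image, and variable names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: db-client                  # illustrative name
spec:
  containers:
    - name: app
      image: busybox               # placeholder image
      command: ["sh", "-c", "echo $DB_USERNAME && sleep 3600"]
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-user-pass
              key: username.txt    # key names come from the file names used above
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-user-pass
              key: password.txt
```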
Configuration parameters that are not secret can be put in a ConfigMap
The input is again key-value pairs
The ConfigMap key-value pairs can then be read by the app using:
- Environment variables
- Container commandline arguments in the Pod configuration
- Using volumes
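For example, you could create a ConfigMap from literal key-value pairs and expose all of its keys to a container as environment variables. This is a sketch; the ConfigMap name, key, and pod are illustrative:

kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config            # illustrative name
spec:
  containers:
    - name: app
      image: busybox               # placeholder image
      command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config       # every key in the ConfigMap becomes an environment variable
```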
Ingress is a solution available since Kubernetes 1.1 that allows inbound connections to the cluster
It's an alternative to the external LoadBalancer and NodePorts. Ingress allows you to easily expose services that need to be accessible from outside the cluster
With Ingress you can run your own ingress controller (basically a loadbalancer) within the Kubernetes cluster
There are default ingress controllers available, or you can write your own ingress controller
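An Ingress rule that routes a hostname to an existing Service might look roughly like this (a sketch using the extensions/v1beta1 API of that era; newer clusters use networking.k8s.io/v1, and the host and service names are illustrative):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress                # illustrative name
spec:
  rules:
    - host: myapp.example.com      # assumed hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service   # existing Service to route to
              servicePort: 80
```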
Pod Presets can inject information into pods at runtime
Pod Presets are used to inject Kubernetes Resources like Secrets, ConfigMaps, Volumes and Environment variables
Imagine you have 20 applications you want to deploy, and they all need to get a specific credential
- You can edit the 20 specifications and add the credential, or
- You can use presets to create 1 Preset object, which will inject an environment variable or config file to all matching pods
When injecting Environment variables and VolumeMounts, the Pod Presets will apply the changes to all containers within the pod
This is an example of a Pod Preset
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: share-credentials
spec:
  selector:
    matchLabels:
      app: myapp
  env:
    - name: MY_SECRET
      value: "123456"
  volumeMounts:
    - name: share-volume
      mountPath: /share
  volumes:
    - name: share-volume
      emptyDir: {}
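A pod only receives this preset if it matches the selector, i.e. if it carries the app: myapp label. The sketch below uses an illustrative pod name and a placeholder image; after admission, the MY_SECRET environment variable and the /share volume mount are injected into it:

```
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                  # illustrative name
  labels:
    app: myapp                     # matches the PodPreset selector above
spec:
  containers:
    - name: myapp
      image: busybox               # placeholder image
      command: ["sh", "-c", "echo $MY_SECRET && sleep 3600"]
```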
StatefulSets were introduced to be able to run stateful applications that need a stable pod hostname (instead of podname-randomstring)
Your pod name will have a sticky identity, using an index, e.g. podname-0, podname-1 and podname-2 (and when a pod gets rescheduled, it'll keep that identity)
StatefulSets give stateful apps stable storage, with volumes based on their ordinal number (podname-x)
Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet (preserving data)
A StatefulSet will also allow your stateful app to order the startup and teardown:
- Instead of randomly terminating one pod (one instance of your app), you'll know which one will go
- When scaling up it goes from 0 to n-1 (n = replication factor)
- When scaling down it starts with the highest number (n-1) to 0
- This is useful if you first need to drain the data from the node before it can be shut down
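A minimal StatefulSet might look roughly like the sketch below. The names, image, and storage size are illustrative, and it assumes a headless Service called db already exists; the plaintext password is for illustration only and would normally come from a Secret:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                         # pods will be named db-0, db-1, db-2
spec:
  serviceName: db                  # headless Service governing the pods (assumed to exist)
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mysql:5.7         # placeholder image
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "example"     # illustration only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:            # each pod gets its own PersistentVolumeClaim, preserved on scale-down
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```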