## filter kubernetes responses with kubectl

### make a file with all results, then filter into a new file
`kubectl get nodes > all_nodes.txt`

`cat all_nodes.txt | while read -r line; do if [[ $line == *"sobusy"* ]]; then echo "$line"; fi; done > filtered_nodes.txt`
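equivalently, a plain `grep` does the same filtering in one step:

`grep "sobusy" all_nodes.txt > filtered_nodes.txt`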

### filter results by search criteria
`kubectl logs <pod> | grep <search_criteria> > some_file.txt`

e.g. `kubectl logs busybox | grep fail > logs.txt`

#### see processes running
`ps aux`


# kubectl commands

`kubectl cluster-info` - get cluster info

## create 

`kubectl create namespace [namespace]` - create a namespace

`kubectl create -f [yaml file] --namespace [namespace]` - create a pod in a specific namespace using a yaml file
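a minimal sketch of a manifest the command above could consume (name and image here are placeholders):

```
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example
    image: nginx
EOF
kubectl create -f pod.yaml --namespace testing
```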

## config

`kubectl config get-contexts` - get contexts

`kubectl config use-context [context name]`  - use different context
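`kubectl config current-context` - show the active context (handy to confirm after switching)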

## get / describe 

`kubectl get componentstatus` - status of all components

`kubectl get pods -n testing [pod name] -o jsonpath='{.spec.containers[*].name}'` - get container names from a pod (quote the jsonpath so the shell doesn't interpret the braces)
 
`kubectl get --watch pod [pod name] -n testing` - watch for changes to a pod

`kubectl get events` - get events on the cluster (events are namespaced)

`kubectl get pods -l person=kevin` - get resources by label

`kubectl get node <node> -o wide` - get more info about a resource

`kubectl get pods --show-labels` - show all labels

`kubectl describe pod -n testing [pod name]` - describe pod



## logs

`kubectl logs -f -n testing [pod name]` - get logs from a pod 

`kubectl logs <pod_name> | grep <search_term>` - filter logs from a pod 

`kubectl logs -f -n testing [pod name] -c [container name]` - get logs from a container


## usage

`kubectl top pods` - get usage info for pods 

`kubectl top pod [pod name] --containers -n prod` - get usage info for containers in a pod 

`kubectl top node [node name]` - get top info for a node 

`kubectl top pod --namespace=C --selector=A=B` - get usage info for pods with label A=B in namespace C

## scale
`kubectl scale rc production --replicas=6` - scale a replication controller to six replicas

## secrets 
### create a secret, or update it if it already exists
`kubectl create secret generic kevin-secret --from-file=my_secret.txt --dry-run -o yaml | kubectl apply -f -`
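to read the secret back out (the data key matches the file name passed to `--from-file`; the `\.` escapes the dot in the key name):

`kubectl get secret kevin-secret -o jsonpath='{.data.my_secret\.txt}' | base64 --decode`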

`kubectl create secret generic my-secret --from-literal=key1=supersecret` - secret from literal

## set up a simple kubernetes pod with an ubuntu container, expose port 8080 on the node

`kubectl run my-nginx --image=nginx --replicas=2 --port=80` - make a simple deployment with two replicas (resource names can't contain underscores)
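note: newer kubectl versions dropped `--replicas` (and deployment creation) from `run`; a rough equivalent there is:

`kubectl create deployment my-nginx --image=nginx --replicas=2 --port=80`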

`kubectl run ubuntu-pod --image=gcr.io/google_containers/ubuntu:14.04 --port=8080`

`kubectl expose deployment ubuntu-pod --type=NodePort` - create a service for an existing deployment
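to see which node port got assigned:

`kubectl get service ubuntu-pod -o jsonpath='{.spec.ports[0].nodePort}'`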

## rolling update 
### set an image to a new version on a deployment, which will trigger an update
`kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record`

`kubectl rollout history deployment/nginx-deployment` - check history

`kubectl rollout status deployment nginx-deployment` - check rollout status 
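if the new version misbehaves, roll back to the previous revision:

`kubectl rollout undo deployment/nginx-deployment`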

## labels

`kubectl label pods --all person=kevin` - attach a label to resources

`kubectl label pods --all person-` - remove a label from resources

## drain
`kubectl drain $node --delete-local-data=true --force` (add & to throw into the background)
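once maintenance is done, make the node schedulable again:

`kubectl uncordon $node`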

## dns lookup
### run nginx pods via deployment
```
$ kubectl run nginx-is-dumb --image=nginx --replicas=2 --port=80
deployment "nginx-is-dumb" created
```

### back them by a service
```
$ kubectl expose deployment nginx-is-dumb
service "nginx-is-dumb" exposed
```

```
$ kubectl run busybox --image=busybox --rm --restart=OnFailure -ti -- /bin/nslookup nginx-is-dumb.default
Server:    10.11.240.10
Address 1: 10.11.240.10 kube-dns.kube-system.svc.cluster.local
```
To look up a pod by DNS name, get the pod’s IP from `kubectl describe pod nginx-is-dumb`, replace the dots in the IP with dashes, and then run:

`kubectl run busybox --image=busybox --rm --restart=OnFailure -ti -- /bin/nslookup <pod-ip-with-dashes>.default.pod.cluster.local`

which will give you four lines of output; the bottom two are the resolved name and address.
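for example, if the pod's IP were 10.11.240.20 (a made-up address), the lookup would be:

`kubectl run busybox --image=busybox --rm --restart=OnFailure -ti -- /bin/nslookup 10-11-240-20.default.pod.cluster.local`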

## other dns lookup
create an nginx pod and expose it as a service named `nginx`, then run a temporary dnstools pod:

`kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools`

then, inside the dnstools shell:

`nslookup nginx.default.svc.cluster.local`

## etcd backup
`sudo chown -R student:student /opt`

ssh onto the etcd node

`sudo su - `

`cat /proc/$(pgrep etcd)/environ | xargs -0 -n1 echo | grep ETCD_DATA`

that will give you the data directory

`etcdctl backup --data-dir <data directory> --backup-dir <target dir>`
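note: `etcdctl backup` is the etcd v2 tool; on etcd v3 the equivalent is `snapshot save` (the endpoint and cert paths below are assumptions that vary by install):

```
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/etcd.pem \
  --key=/etc/etcd/etcd-key.pem
```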

## bring components onto node
copy the kubelet and kube-proxy unit files from a working node to the node you want to set up, then ssh in:

```
scp hk8s-node-1:/etc/systemd/system/kubelet.service .
scp hk8s-node-1:/etc/systemd/system/kube-proxy.service .
scp kubelet.service ik8s-node-0:
scp kube-proxy.service ik8s-node-0:
ssh ik8s-node-0 
```
manually edit over there, put into /etc/systemd/system, use systemctl, etc.
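a sketch of those steps on the new node (the unit files land in the home directory from the scp above):

```
sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service kube-proxy.service
```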

## have kubelet bring up a resource 
- ssh onto worker node 
- `sudo su - `
- Edit /etc/systemd/system/kubelet.service
- Add `--pod-manifest-path /etc/kubernetes/manifests`
- `mkdir -p /etc/kubernetes/manifests`  make sure path exists
- Add the pod manifest file (the yaml file) to `/etc/kubernetes/manifests` (see the sketch below)
- `systemctl daemon-reload`
- `systemctl restart kubelet.service` or whatever the kubelet service is called; on juju it was `sudo systemctl restart snap.kubelet.daemon.service`
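
a minimal sketch of a static pod manifest to drop into the manifest path (name and image are placeholders):

```
cat <<EOF > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
EOF
```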