# Manual install

As per [Kubernetes.io/docs/getting-started-guides/ubuntu/manual](https://kubernetes.io/docs/getting-started-guides/ubuntu/manual/)

## Add on node0

UNFINISHED!!

```
# Fetch personal dotfiles (screen and git configuration) for convenience
curl -s -S -L https://raw.githubusercontent.com/webplatform/salt-states/master/webplatform/files/screenrc.jinja -o .screenrc
curl -s -S -L https://raw.githubusercontent.com/webplatform/salt-states/master/users/files/renoirb/gitconfig -o .gitconfig
```

## On node0

```
# bridge-utils is needed for the container network bridges
sudo apt install -y bridge-utils
# Tag roles as Salt grains; quote the target so the shell does not expand it
salt 'node[1-4]' grains.append roles '[kubernetes-pool]'
salt-call grains.append roles '[kubernetes-master]'
salt-call grains.append roles '[salt-master]'
# Install the Kubernetes packages
salt-call -l debug pkg.install kubelet,kubeadm,kubectl,kubernetes-cni
```
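
To verify the grains landed, Salt's `grains.get` will echo them back:

```
salt 'node[1-4]' grains.get roles
salt-call grains.get roles
```
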
On all nodes, add this line to `/etc/hosts`:

    10.1.10.240     kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.kube.local kube.local
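
One way to push that entry to every node from node0 (a sketch; it assumes SSH access and passwordless sudo on each node):

```
# Hypothetical loop; adjust the node list to match your cluster
for n in node1 node2 node3 node4; do
  ssh "$n" "echo '10.1.10.240     kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.kube.local kube.local' | sudo tee -a /etc/hosts"
done
```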


Make sure the kubelet network options are commented out for the first `kubeadm init`, like so:

```
# In /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Add this line, comment the other (temporarily)
Environment="KUBELET_NETWORK_ARGS="
```
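
To do that edit non-interactively instead of with `vi` below, a sketch (it assumes the stock drop-in, where everything sits under a single `[Service]` section):

```
# Comment out the existing KUBELET_NETWORK_ARGS line (sed keeps a .bak backup)
sudo sed -i.bak 's/^Environment="KUBELET_NETWORK_ARGS=/#&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Append the empty override inside the same [Service] section
echo 'Environment="KUBELET_NETWORK_ARGS="' | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```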

...

```
sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
sudo systemctl restart docker.service
```

On the node you want to be the master, run:

```
kubeadm init --apiserver-advertise-address 10.1.10.240 --pod-network-cidr 10.244.0.0/16 --apiserver-cert-extra-sans=kube.local --service-dns-domain kube.local
```

Wait a bit, then:

    mkdir ~/.kube
    sudo cp /etc/kubernetes/admin.conf ~/.kube/config
    sudo chown picocluster:picocluster ~/.kube/config
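
To confirm `kubectl` can now talk to the API server:

```
kubectl cluster-info
kubectl get nodes
```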


If all goes well, keep a record of the `kubeadm join` command it prints (see the sample below under *Get cluster token*).

If you see error messages like this in the logs:

```
Apr 21 05:19:40 node0 kubelet[7197]: E0421 05:19:40.637452    7197 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
```

As per [this issue](https://github.com/kubernetes/kubernetes/issues/43815#issuecomment-290235245):

* Temporarily remove `KUBELET_NETWORK_ARGS` from `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`
* Edit `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and add the flag `--cgroup-driver=systemd`

```
# In /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Ensure ExecStart has KUBELET_EXTRA_ARGS, and add this line before it
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
```
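
That flag must match the cgroup driver Docker itself uses; you can check which one that is with:

```
docker info 2>/dev/null | grep -i "cgroup driver"
```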

Do this FOR EACH NODE of the cluster!

You do not need to `kubeadm reset` on the master; just restart the services:

```
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
sudo systemctl restart docker.service

# Find the ID of the etcd container ("foo" below stands for that ID)
docker ps -q --filter name=etcd_etcd
foo

# Copy etcd and etcdctl out of the container to use them on the host
export ETCDCONTAINER=foo
sudo docker cp $ETCDCONTAINER:/usr/local/bin/etcd /usr/local/bin/etcd
sudo docker cp $ETCDCONTAINER:/usr/local/bin/etcdctl /usr/local/bin/etcdctl
sudo chmod +x /usr/local/bin/etcd{,ctl}

# Store the flannel network configuration in etcd
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl status flanneld
sudo systemctl restart flanneld
```
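
To read the key back and confirm the flannel configuration was stored:

```
etcdctl get /coreos.com/network/config
```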

## Get cluster token

See the notes from the `kubeadm init` command above:

```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [node0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.kube.local kube.local] and IPs [10.96.0.1 10.1.10.240]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 37.530146 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 3.535083 seconds
[token] Using token: MAH.T0K33N
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token MAH.T0K33N 10.1.10.240:6443
```

If all nodes report a `Ready` status:

```
kubectl get nodes

NAME      STATUS    AGE       VERSION
node0     Ready     9m        v1.6.1
node1     Ready     7m        v1.6.1
node2     Ready     7m        v1.6.1
node3     Ready     7m        v1.6.1
node4     Ready     7m        v1.6.1
```

You can also check pod status:

```
kubectl get pods -o wide --all-namespaces

NAMESPACE     NAME                            READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   etcd-node0                      1/1       Running   1          8m        192.168.0.103   node0
kube-system   kube-apiserver-node0            1/1       Running   2          7m        192.168.0.103   node0
kube-system   kube-controller-manager-node0   1/1       Running   2          8m        192.168.0.103   node0
kube-system   kube-dns-2286869516-mjt3n       3/3       Running   3          8m        10.244.0.2      node0
kube-system   kube-flannel-ds-1h1tp           2/2       Running   0          1m        192.168.0.112   node2
kube-system   kube-flannel-ds-9w3r4           2/2       Running   2          7m        192.168.0.103   node0
kube-system   kube-flannel-ds-tcm7v           2/2       Running   0          1m        10.1.10.243     node3
kube-system   kube-flannel-ds-z5mz9           2/2       Running   0          1m        10.1.10.241     node1
kube-system   kube-proxy-dzcjr                1/1       Running   0          1m        192.168.0.112   node2
kube-system   kube-proxy-h68m9                1/1       Running   1          8m        192.168.0.103   node0
kube-system   kube-proxy-s8b0g                1/1       Running   0          1m        10.1.10.243     node3
kube-system   kube-proxy-t1wgm                1/1       Running   0          1m        10.1.10.241     node1
kube-system   kube-scheduler-node0            1/1       Running   1          8m        192.168.0.103   node0
```



## Networking layer

UNFINISHED

See [this post](https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/)
and [Install and run Flannel](http://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-flannel.html#install-configure-run-flannel)

```
# Fetch the flannel manifests and switch the image architecture to arm64
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml?raw=true" -o kube-flannel-rbac.yml
curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" -o kube-flannel.yml
sed -i "s/amd64/arm64/g" kube-flannel.yml
```
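
Before applying them, double-check that `sed` actually switched the image references to arm64:

```
grep "image:" kube-flannel.yml
```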

Then...


```
# As described on top of kube-flannel-rbac.yml
kubectl create -f kube-flannel-rbac.yml
kubectl create --namespace kube-system -f kube-flannel.yml
```
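
flannel runs as a DaemonSet, so one `kube-flannel-ds` pod should get scheduled per node:

```
kubectl get daemonset --namespace kube-system
```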


Then we should see:

```
picocluster@node0:~$ kubectl get po --all-namespaces

NAMESPACE     NAME                            READY     STATUS    RESTARTS   AGE
...
kube-system   kube-proxy-1vcbd                1/1       Running   0          9m
kube-system   kube-proxy-245nz                1/1       Running   0          9m
kube-system   kube-proxy-7hsc9                1/1       Running   0          11m
kube-system   kube-proxy-dsklx                1/1       Running   0          9m
kube-system   kube-proxy-qs2vn                1/1       Running   0          9m

... AND FLANNEL, not there yet. Because CNI. TODO
```

Install the [Kubernetes dashboard](https://github.com/kubernetes/dashboard):

```
curl -sSL https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml | sed "s/amd64/arm64/g" > kubernetes-dashboard.yml
kubectl create -f kubernetes-dashboard.yml
```
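
To reach the dashboard from the master, one option is the API server proxy (`http://localhost:8001/ui` was the documented shortcut for that generation of the dashboard):

```
kubectl proxy
# then browse to http://localhost:8001/ui
```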


## Delete a deployment in kube-system

```
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl get deployment --namespace=kube-system
```
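
The same manifest also creates a `kubernetes-dashboard` service; to remove the dashboard completely, delete it as well (assuming the default names from the manifest):

```
kubectl delete service kubernetes-dashboard --namespace=kube-system
```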



## See also

* https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
* https://kubernetes.io/docs/admin/service-accounts-admin/