K8s (v1.10.5) on Hypriot (July 2018)

Kubernetes on Hypriot

These are instructions for standing up a Kubernetes cluster (K8S v1.10.5) on Raspberry Pis running the current Hypriot release (1.9.0).

Thanks to https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975 and https://gist.github.com/aaronkjones/d996f1a441bc80875fd4929866ca65ad for doing all the hard work!

Pre-reqs:

  • This was done using a cluster of 5 RPi 3 B+
  • All Pis are connected via a local ethernet switch on a 10.0.0.0/24 LAN
  • The master node connects to the outside world on WiFi, and provides NAT for the rest of the cluster.

Prepare the Hypriot cloud-config files

Hypriot lets you flash images using config files, which speeds up initial configuration:

**Note**: there is definitely more fine-tuning that should be done on these cloud-config files to make the install process more efficient.

Image flashing

  • Flash Hypriot to a fresh SD card. Make sure you replace /dev/disk3 with the actual disk number of your MicroSD card.
./flash -u master-user-data -d /dev/disk3 https://github.com/hypriot/image-builder-rpi/releases/download/v1.9.0/hypriotos-rpi-v1.9.0.img.zip
  • Repeat for "node1" to "node4" using the nodeX-user-data file for each node (see the sketch below).
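Here is a minimal sketch of that repeat step, assuming the node user-data files are named node1-user-data through node4-user-data and that you swap SD cards between runs:

for i in 1 2 3 4; do
  read -p "Insert the SD card for node${i} and press Enter "
  ./flash -u node${i}-user-data -d /dev/disk3 \
    https://github.com/hypriot/image-builder-rpi/releases/download/v1.9.0/hypriotos-rpi-v1.9.0.img.zip
done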

You can then connect all your Pis together on a switch, and start them. After a couple of minutes, your master node should appear on your WiFi network as 'corten-master'.
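For instance, with the user defined in the sample master-user-data at the bottom of this page (lafargue) and mDNS provided by avahi, you should be able to reach it with:

ssh lafargue@corten-master.local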

Note on Docker

As of today (see date of this Gist), Docker-ce up to 18.04 works fine. Docker-ce 18.05 fails, so do not upgrade beyond 18.04.

Master node config

Generate the master's SSH key

Login to the master node, and run ssh-keygen to initialize your SSH key.

Set a static IP address on master

Log in to 'corten-master' and edit /etc/network/interfaces.d/eth0

auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1

And disable eth0 in /etc/network/interfaces.d/50-cloud-init.cfg
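For reference, the stock cloud-init network file usually just brings eth0 up with DHCP; commenting out its stanza is enough (the exact contents may differ on your image):

# auto eth0
# iface eth0 inet dhcp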

  • Add a hostfile for the cluster

Edit /etc/hosts to add the IPs of the rest of the cluster:

10.0.0.1 corten-master
10.0.0.2 corten-node1 node1
10.0.0.3 corten-node2 node2
10.0.0.4 corten-node3 node3
10.0.0.5 corten-node4 node4

Note: including 10.0.0.1 corten-master is important, otherwise the master node will use the wlan0 address by default and this will create all sorts of networking issues later on.

Also, disable the auto-update of /etc/hosts from cloud-config: edit /etc/cloud-config.yml and comment out the '- update_etc_hosts' entry.
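Assuming the file follows the standard cloud-init layout, the entry to comment out sits in the module list, something like:

cloud_init_modules:
  # (other modules unchanged)
  # - update_etc_hosts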

  • Disable dhcpcd

We want the master to hand out addresses on the cluster LAN, so install the ISC DHCP server:

sudo apt-get install isc-dhcp-server

You can then disable the Raspbian/Hypriot DHCP client (dhcpcd), otherwise you'll get multiple requests for IP addresses.

sudo systemctl disable dhcpcd.service
sudo systemctl stop dhcpcd.service

You should then edit the /etc/dhcp/dhcpd.conf config to serve IPs on the LAN:

# Domain name for the cluster
option domain-name "cluster";

# We are proxying DNS queries for our slave nodes
option domain-name-servers 10.0.0.1;

# The subnet connected to eth0
# Make sure the range option does not overlap with the static host
# configs below.
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.10 10.0.0.20;
    option subnet-mask 255.255.255.0;
    option broadcast-address 10.0.0.255;
    option routers 10.0.0.1;
}

# REPLACE WITH THE MAC ADDRESS OF YOUR OWN RPi here!
host corten-node1 {
  hardware ethernet b8:27:eb:85:33:72;
  fixed-address corten-node1;
}

# REPLACE WITH THE MAC ADDRESS OF YOUR OWN RPi here!
host corten-node2 {
  hardware ethernet b8:27:eb:90:2c:7b;
  fixed-address corten-node2;
}

# REPLACE WITH THE MAC ADDRESS OF YOUR OWN RPi here!
host corten-node3 {
  hardware ethernet b8:27:eb:1c:0c:e3;
  fixed-address corten-node3;
}

# REPLACE WITH THE MAC ADDRESS OF YOUR OWN RPi here!
host corten-node4 {
  hardware ethernet b8:27:eb:28:8b:b2;
  fixed-address corten-node4;
}
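After editing dhcpd.conf, restart the server so the new leases take effect:

sudo systemctl restart isc-dhcp-server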

  • Install dnsmasq to decouple the nodes from the external DNS of the master node

We want to make sure we can move the cluster around. For this reason, slave nodes should use the master node for their own DNS:

apt-get install dnsmasq

Then, edit /etc/default/dnsmasq to add the -2 flag to the default options (disable DHCP and TFTP): uncomment the DNSMASQ_OPTS line and make sure it reads:

DNSMASQ_OPTS='-2'

ENABLED should be set to 1.

Last, restart dnsmasq:

sudo /etc/init.d/dnsmasq restart

You should then restart your whole cluster, and check in your master's syslog that all nodes are getting the correct IP address.
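A quick way to check the leases from the master, assuming the default syslog location:

grep -i dhcpack /var/log/syslog | tail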

Setup NAT

You want the master node to be the gateway for the rest of the cluster, and do the NAT for outside world access. You can simply create an init script that will do this (see corten_enable_nat below).

You can enable the script as follows

sudo chmod +x /etc/init.d/corten_enable_nat
sudo update-rc.d corten_enable_nat defaults

Also, edit /etc/sysctl.conf to enable IP forwarding: uncomment the net.ipv4.ip_forward=1 line if it is commented

net.ipv4.ip_forward=1
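To apply the change without rebooting:

sudo sysctl -p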

Allow quick jump between nodes

Create an SSH key on your master node, which you will add to the .ssh/authorized_keys file on each node (if you already generated one earlier, reuse it).

$ ssh-keygen

You can choose whether you want to use a password for your key or not. Since the cluster is isolated, you should be OK with no password on the key, as long as you understand that this means that access to the master node will make it possible to access every other node without further authentication.

Install Kubernetes on master node

  • Add repo lists & install kubeadm
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm=1.10.5-00 kubectl=1.10.5-00 kubelet=1.10.5-00

To install a later version, remove the version flag at the end (e.g. sudo apt-get install -qy kubeadm)

Note: if it all goes wrong, you can wipe the install and start over:

sudo apt-get -y remove --purge kubeadm kubectl kubelet && sudo apt-get autoremove -y --purge
sudo rm -rf /var/lib/etcd /var/lib/kubelet /etc/kubernetes /etc/cni
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi $(docker images -q)
  • You now have two new commands installed:

  • kubeadm - used to create new clusters or join an existing one

  • kubectl - the CLI administration tool for Kubernetes

  • Modify 10-kubeadm.conf

This is critical - the install will fail otherwise. This removes the CNI driver that is enabled by default. More work is still needed to understand all the implications; it looks like this only has to be done on the master.

$ sudo sed -i '/KUBELET_NETWORK_ARGS=/d' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  • Initialize your master node:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.1

Note: This step will take a long time, even up to 15 minutes.

After the init is complete run the snippet given to you on the command-line:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

This step takes the admin credentials generated for cluster administration and makes them available in the default location used by kubectl.

  • Now save your join-token

Your join token is valid for 24 hours, so save it into a text file.
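If you lose it (or it expires), recent kubeadm versions can print a fresh join command from the master:

sudo kubeadm token create --print-join-command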

  • Check everything worked:
$ kubectl get pods --namespace=kube-system
NAME                           READY     STATUS    RESTARTS   AGE                
etcd-of-2                      1/1       Running   0          12m                
kube-apiserver-of-2            1/1       Running   2          12m                
kube-controller-manager-of-2   1/1       Running   1          11m                
kube-dns-66ffd5c588-d8292      3/3       Running   0          11m                
kube-proxy-xcj5h               1/1       Running   0          11m                
kube-scheduler-of-2            1/1       Running   0          11m                
weave-net-zz9rz                2/2       Running   0          5m 

You should see the "READY" count showing as 1/1 for all services as above. DNS uses three pods, so you'll see 3/3 for that.

  • Setup K8S networking

Install Flannel network driver

Note: Edit kube-flannel.yml and add --iface=eth0 to the flannel arguments in the daemonset definition, to force flannel to use eth0 rather than the wlan interface (see the sketch after the command below).

$ curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
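Since the one-liner above pipes the manifest straight into kubectl, one way to apply the --iface=eth0 edit mentioned in the note is to save the manifest first (the file name is arbitrary):

curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" > kube-flannel-arm.yml
# edit kube-flannel-arm.yml: add "- --iface=eth0" to the flanneld args in the DaemonSet container spec
kubectl create -f kube-flannel-arm.yml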

Finish configuring other nodes

System configuration

On all other RPis, do the following:

  • Add the master node's ssh key to the authorized keys:

On the master node:

$ cat ~/.ssh/id_rsa.pub

And copy the key. On each node, then do

cat >> ~/.ssh/authorized_keys

and paste the master node user's public key, followed by Ctrl-D (using >> appends to the file rather than overwriting any existing keys).
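Alternatively, since password authentication is enabled in the sample cloud-config, ssh-copy-id from the master does the same thing in one step (adjust the user name to your own):

ssh-copy-id lafargue@node1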

  • Update the hostfile

Edit /etc/hosts to add the IPs of the rest of the cluster:

10.0.0.1 corten-master
10.0.0.2 corten-node1 node1
10.0.0.3 corten-node2 node2
10.0.0.4 corten-node3 node3
10.0.0.5 corten-node4 node4

Also, disable the auto-update of /etc/hosts from cloud-config: edit /etc/cloud-config.yml and comment out the '- update_etc_hosts' entry, as on the master.

Avoid dhcpcd messing with our cluster interfaces

Edit /etc/dhcpcd.conf and blacklist all the cluster-related interfaces. Also disable wlan0 since it is not used. Simply add this line to the bottom of the file:

denyinterfaces cni*,docker*,wlan*,flannel*,veth*
  • Add repo lists & install kubeadm
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm=1.10.5-00 kubectl=1.10.5-00 kubelet=1.10.5-00
  • Do not modify 10-kubeadm.conf

Join the K8S cluster

  • Join the cluster
$ sudo kubeadm join <master ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:1c06faa186e7f85...

Check the cluster is healthy

You can now run this on the master:

$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
corten-master   Ready     master    8h        v1.10.2
corten-node1    Ready     <none>    7h        v1.10.2
corten-node2    Ready     <none>    7h        v1.10.2
corten-node3    Ready     <none>    7h        v1.10.2
corten-node4    Ready     <none>    7h        v1.10.2
$ kubectl get pods --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-corten-master                      1/1       Running   3          8h
kube-apiserver-corten-master            1/1       Running   3          8h
kube-controller-manager-corten-master   1/1       Running   3          8h
kube-dns-686d6fb9c-xj9fn                3/3       Running   6          9h
kube-flannel-ds-l6cdc                   1/1       Running   2          7h
kube-flannel-ds-mncvx                   1/1       Running   1          7h
kube-flannel-ds-n6hth                   1/1       Running   1          7h
kube-flannel-ds-x5tgf                   1/1       Running   1          7h
kube-flannel-ds-z2lzq                   1/1       Running   1          7h
kube-proxy-4rb7w                        1/1       Running   1          8h
kube-proxy-7jmqj                        1/1       Running   1          8h
kube-proxy-9vtpp                        1/1       Running   1          8h
kube-proxy-kw2xb                        1/1       Running   1          8h
kube-proxy-pv4hw                        1/1       Running   3          9h
kube-scheduler-corten-master            1/1       Running   3          8h

Setup storage on your cluster

So far, this cluster will be able to run transient services, but you will probably need persistent storage at some point for real work. I will document this further; in the meantime, https://github.com/luxas/kubeadm-workshop contains a ton of great pointers, though the overall document is fairly outdated by now.

Deploy a sample container

This container will expose an HTTP port and convert Markdown to HTML. Just post a body to it via curl - follow the instructions below.

function.yml

apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: markdownrender
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP

Deploy and test:

$ kubectl create -f function.yml

Once the Docker image has been pulled from the hub and the Pod is running you can access it via curl:

$ curl -4 http://127.0.0.1:31118 -d "# test"
<p><h1>test</h1></p>

If you want to call the service from a remote machine such as your laptop then use the IP address of your Kubernetes master node and try the same again.

Install the K8S dashboard

The dashboard can be useful for visualising the state and health of your system, but doing anything on it beyond simply viewing that state does require "cluster-admin" rights. One possibility is to give the service account the Dashboard runs under those cluster-admin rights. Understand that this gives anyone who can connect to the Dashboard full rights over the cluster: if you ever decide to make the cluster available through anything more than a simple "kubectl proxy", you are essentially handing out the keys to the kingdom to anyone and everyone, so be warned.

If you want to proceed that way you can create the cluster role binding that gives cluster-admin rights to the Dashboard account:

echo -n 'apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system' | kubectl apply -f -

Alternatively, you can create user accounts with cluster-admin privileges, and use token authentication for those users. For instance, the below will create an elafargue-admin user with cluster-admin privileges:

echo -n 'apiVersion: v1
kind: ServiceAccount
metadata:
  name: elafargue-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: elafargue-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: elafargue-admin
  namespace: kube-system' | kubectl apply -f -

You can then give elafargue-admin their token so that they can login to the Dashboard through http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/login:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secrets | grep elafargue | awk '{print $1}')

You can then proceed to install the actual Dashboard. The command below is the development/alternative dashboard which has TLS disabled and is easier to use if you do not have a CA at hand or don't want to use Let's Encrypt:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml

You can then find the IP and port via kubectl get svc -n kube-system. To access the Dashboard from your laptop, run kubectl proxy on the master and navigate to http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy, or tunnel to that address with ssh (see the sketch below), especially since TLS is not enabled. Note that this URL will lead to authorization errors if you are using individual user accounts as described just above, in which case you'll have to go through the login screen.
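A sketch of the tunnel approach from a laptop; the user name and WiFi address are examples, and kubectl proxy listens on 127.0.0.1:8001 by default:

ssh -L 8001:localhost:8001 lafargue@192.168.13.204 'kubectl proxy'
# then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy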

The reference doc for Dashboard can be found on Github, including the install instructions.

Check that everything is networking fine

A healthy cluster should now look like this:

$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP           NODE
default       markdownrender-688769d5b9-p95z7         1/1       Running   3          4h        10.244.2.3   corten-node2
kube-system   etcd-corten-master                      1/1       Running   10         2d        10.0.0.1     corten-master
kube-system   kube-apiserver-corten-master            1/1       Running   10         2d        10.0.0.1     corten-master
kube-system   kube-controller-manager-corten-master   1/1       Running   10         2d        10.0.0.1     corten-master
kube-system   kube-dns-686d6fb9c-xj9fn                3/3       Running   24         2d        10.244.0.2   corten-master
kube-system   kube-flannel-ds-flp7s                   1/1       Running   8          3h        10.0.0.1     corten-master
kube-system   kube-flannel-ds-mncvx                   1/1       Running   4          2d        10.0.0.5     corten-node4
kube-system   kube-flannel-ds-n6hth                   1/1       Running   2          2d        10.0.0.4     corten-node3
kube-system   kube-flannel-ds-x5tgf                   1/1       Running   3          2d        10.0.0.2     corten-node1
kube-system   kube-flannel-ds-z2lzq                   1/1       Running   4          2d        10.0.0.3     corten-node2
kube-system   kube-proxy-4rb7w                        1/1       Running   4          2d        10.0.0.3     corten-node2
kube-system   kube-proxy-7jmqj                        1/1       Running   3          2d        10.0.0.2     corten-node1
kube-system   kube-proxy-9vtpp                        1/1       Running   2          2d        10.0.0.4     corten-node3
kube-system   kube-proxy-kw2xb                        1/1       Running   4          2d        10.0.0.5     corten-node4
kube-system   kube-proxy-pv4hw                        1/1       Running   10         2d        10.0.0.1     corten-master
kube-system   kube-scheduler-corten-master            1/1       Running   10         2d        10.0.0.1     corten-master
kube-system   kubernetes-dashboard-64d66bcc8-bvt66    1/1       Running   3          4h        10.244.4.3   corten-node4

Remove the test deployment

Now on the Kubernetes master remove the test deployment:

$ kubectl delete -f function.yml

Working with the cluster

You should now be able to access your cluster remotely. Since it lives on the 10.0.0.0/24 network, you should add a manual route on your computer to be able to reach it. For instance, on a Mac, if the wifi address of the master node of the cluster is 192.168.13.204, you can do:

sudo route add -net 10.0.0.0 192.168.13.204 255.255.255.0

You then need to copy the ~/.kube/config file that is on the master node to your own computer, so that kubectl can talk to the master.
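For instance (again, the user name and WiFi address are examples):

mkdir -p ~/.kube
scp lafargue@192.168.13.204:.kube/config ~/.kube/config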

Note that you should also be able to simply edit the config file and update the IP of the master node to the WiFi IP - Kubernetes binds to all interfaces.

Create service accounts for the various namespaces you will be working in

You really don't want to put everything in 'default'. https://medium.com/@amimahloof/how-to-setup-helm-and-tiller-with-rbac-and-namespaces-34bf27f7d3c3 is a great Medium post on creating limited kube config files that only have access to their specific namespaces.

In a nutshell, for each namespace that was created and where you want to use Tiller - replace 'my-namespace' with the right value:

kubectl create serviceaccount --namespace my-namespace tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=my-namespace:tiller

Installing helm

Helm is a popular package manager for Kubernetes, but no ARM-compatible binaries are distributed with it. If you want to use it, you will have to compile it from source, which is described at https://docs.helm.sh/developers/

You can use make VERSION=2.9.0 APP=helm build-cross and then make VERSION=2.9.0 APP=tiller build-cross to build for all architectures. Then copy the tiller binary to the rootfs directory, update the alpine base image in the Dockerfile to arm32v6/alpine:3.6, and run docker build -t tiller:2.9.0 . to build the Docker image.
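Roughly, from a checkout of the helm sources at the v2.9.0 tag (the output and Dockerfile paths below are assumptions based on the helm build layout, so double-check them against your tree):

make VERSION=2.9.0 APP=helm build-cross
make VERSION=2.9.0 APP=tiller build-cross
cp _dist/linux-arm/tiller rootfs/            # assumed build-cross output path
# edit rootfs/Dockerfile: change the base image to arm32v6/alpine:3.6
cd rootfs && docker build -t tiller:2.9.0 .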

Once this is done, tag your new tiller image properly, and push it to your docker repo - this is simplest, since it is public:

docker tag tiller:2.9.0 elafargue/tiller:arm2.9.0
docker push elafargue/tiller:arm2.9.0

Then, deploy tiller to your cluster and pass --tiller-image with the name of your local image:

helm init --tiller-image elafargue/tiller:arm2.9.0 --tiller-namespace my-namespace --service-account tiller

And after a minute, you'll be able to check that the tiller pod did start on your cluster.

When you deploy charts later on, be sure to include --tiller-namespace my-namespace (the namespace you deployed tiller into) in the helm command line.


Extra Credits

Pimoroni Blinkt

The Blinkt is a really cool strip of 8 LEDs that plugs into the IO connector of the Raspberry Pi, and is perfect for displaying additional info on each node of the cluster.

Someone wrote a Daemonset for Kubernetes - along with corresponding Docker containers - that enables you to monitor the nodes and the pods in your cluster. This project didn't work on the current Go and K8S versions, so I updated it and I am sharing it here.

Installation instructions

If you simply want to experiment on the cluster, git clone https://github.com/elafargue/blinkt-k8s-controller.git and follow the README.md

On a 5-node cluster, after installing the YAML files for RBAC and the daemonsets, I suggest you label the nodes as follows:

kubectl label node corten-node1 blinktShow=true
kubectl label node corten-node2 blinktShow=true
kubectl label node corten-node3 blinktShow=true
kubectl label node corten-node4 blinktShow=true
kubectl label node corten-master blinktShow=true

Then enable node monitoring on the master, and pod monitoring on each node:

kubectl label node corten-master blinktImage=nodes
kubectl label node corten-node1 blinktImage=pods
kubectl label node corten-node2 blinktImage=pods
kubectl label node corten-node3 blinktImage=pods
kubectl label node corten-node4 blinktImage=pods

Then, as explained in the README.md, for each deployment (or individually labeled pod) labeled with blinktShow: true, a new LED will be displayed on the Blinkt. Of course, the Blinkt only displays up to 8 pods that way.

Compilation instructions

If you want to compile the image yourself - for instance, if you want to modify it - you will first need to install Go on your Raspberry Pi. It is not very difficult: @alexellis as usual has a pretty good article on his blog, and below are slightly more up-to-date instructions with the version of Go I validated for this project:

cd
curl -sSLO https://dl.google.com/go/go1.10.3.linux-armv6l.tar.gz
sudo mkdir -p /usr/local/go
sudo tar -xvf go1.10.3.linux-armv6l.tar.gz -C /usr/local/go --strip-components=1
export PATH=/usr/local/go/bin:$PATH   # make the go toolchain available in this shell

You can then check out the blinkt-k8s-controller project in the right location:

cd
mkdir -p go/src/github.com/elafargue
cd go/src/github.com/elafargue
git clone https://github.com/elafargue/blinkt-k8s-controller.git
cd blinkt-k8s-controller

Before going further, you will need to install glide:

curl https://glide.sh/get | sh
export PATH=$HOME/go/bin:$PATH

You can now install the dependencies and build the project:

glide update --strip-vendor
./build.sh

Once the project is built, you can Dockerize it (see the dockerize.sh scripts). Then either update the target location in the Docker repository to upload your version of the containers (update the version tag), or push the image to the Docker daemon on every node and update the Daemonset YAML with the new tag and imagePullPolicy: Never, so that it doesn't try to pull from a registry - this last option remains to be fully tested.

corten_enable_nat

#! /bin/sh
### BEGIN INIT INFO
# Provides: routing
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop:
# X-Start-Before: rmnologin
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: Add masquerading for other nodes in the cluster
# Description: Add masquerading for other nodes in the cluster
### END INIT INFO
. /lib/lsb/init-functions
N=/etc/init.d/corten_enable_nat
set -e
case "$1" in
start)
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
;;
stop|reload|restart|force-reload|status)
;;
*)
echo "Usage: $N {start|stop|restart|force-reload|status}" >&2
exit 1
;;
esac
exit 0
master-user-data

#cloud-config
# vim: syntax=yaml
#
# The current version of cloud-init in the Hypriot rpi-64 is 0.7.6
# When dealing with cloud-init, it is SUPER important to know the version
# I have wasted many hours creating servers to find out the module I was trying to use wasn't in the cloud-init version I had
# Documentation: http://cloudinit.readthedocs.io/en/0.7.9/index.html
# Set your hostname here, the manage_etc_hosts will update the hosts file entries as well
hostname: corten-master
manage_etc_hosts: true
# You could modify this for your own user information
users:
  - name: lafargue
    gecos: "Ed Lafargue"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: users,docker,video,input
    plain_text_passwd: USE_YOUR_OWN
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: { expire: false }
# # Set the locale of the system
# locale: "en_US.UTF-8"
# # Set the timezone
# # Value of 'timezone' must exist in /usr/share/zoneinfo
# timezone: "America/Los_Angeles"
# # Update apt packages on first boot
# package_update: true
# package_upgrade: true
# package_reboot_if_required: true
package_upgrade: false
# # Install any additional apt packages you need here
# packages:
# - ntp
# # WiFi connect to HotSpot
# # - use `wpa_passphrase SSID PASSWORD` to encrypt the psk
write_files:
  - content: |
      allow-hotplug wlan0
      iface wlan0 inet dhcp
      wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
      iface default inet dhcp
    path: /etc/network/interfaces.d/wlan0
  - content: |
      country=us
      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1
      network={
        ssid="YOUR_SSID"
        psk=YOUR_PSK_KEY
      }
    path: /etc/wpa_supplicant/wpa_supplicant.conf
# These commands will be run once, on first boot only
runcmd:
  # Pick up the hostname changes
  - 'systemctl restart avahi-daemon'
  # # Activate WiFi interface
  # - 'ifup wlan0'
node1-user-data

#cloud-config
# vim: syntax=yaml
#
# The current version of cloud-init in the Hypriot rpi-64 is 0.7.6
# When dealing with cloud-init, it is SUPER important to know the version
# I have wasted many hours creating servers to find out the module I was trying to use wasn't in the cloud-init version I had
# Documentation: http://cloudinit.readthedocs.io/en/0.7.9/index.html
# Set your hostname here, the manage_etc_hosts will update the hosts file entries as well
hostname: corten-node1
manage_etc_hosts: true
# You could modify this for your own user information
users:
  - name: lafargue
    gecos: "Ed Lafargue"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: users,docker,video,input
    plain_text_passwd: CHANGE_ME
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: { expire: false }
# # Set the locale of the system
# locale: "en_US.UTF-8"
# # Set the timezone
# # Value of 'timezone' must exist in /usr/share/zoneinfo
# timezone: "America/Los_Angeles"
# # Update apt packages on first boot
# package_update: true
# package_upgrade: true
# package_reboot_if_required: true
package_upgrade: false
# # Install any additional apt packages you need here
# packages:
# - ntp
# These commands will be run once, on first boot only
runcmd:
  # Pick up the hostname changes
  - 'systemctl restart avahi-daemon'
  # # Activate WiFi interface
  # - 'ifup wlan0'