DOWNLOAD_HANDLERS = {
    'http': 'myspider.socks5_http.Socks5DownloadHandler',
    'https': 'myspider.socks5_http.Socks5DownloadHandler'
}
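The settings above only register the handler; the gist does not show Socks5DownloadHandler itself. Below is a hypothetical sketch of the shape such a handler can take, assuming the txsocksx package (Twisted's SOCKS5 client, a Python 2-era library) and a fixed proxy at 127.0.0.1:1080 — both assumptions, and it leans on Scrapy internals (ScrapyAgent) that have changed across versions:

# myspider/socks5_http.py -- hypothetical sketch, not the gist's actual code.
# Assumes txsocksx is installed and a SOCKS5 proxy listens on 127.0.0.1:1080.
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ClientEndpoint
from txsocksx.http import SOCKS5Agent

from scrapy.core.downloader.handlers.http11 import HTTP11DownloadHandler, ScrapyAgent


class Socks5DownloadHandler(HTTP11DownloadHandler):
    """Tunnel Scrapy's HTTP(S) downloads through a SOCKS5 proxy."""

    def download_request(self, request, spider):
        agent = _Socks5Agent(contextFactory=self._contextFactory, pool=self._pool)
        return agent.download_request(request)


class _Socks5Agent(ScrapyAgent):
    # Override the agent factory so every connection goes via the proxy.
    def _get_agent(self, request, timeout):
        proxy_endpoint = TCP4ClientEndpoint(reactor, '127.0.0.1', 1080)
        return SOCKS5Agent(reactor, proxyEndpoint=proxy_endpoint)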
# Install tmux on CentOS release 6.5
# install deps
yum install gcc kernel-devel make ncurses-devel
# DOWNLOAD SOURCES FOR LIBEVENT AND MAKE AND INSTALL
curl -OL https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
tar -xvzf libevent-2.0.21-stable.tar.gz
cd libevent-2.0.21-stable
./configure --prefix=/usr/local
make && make install
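The snippet ends with the libevent build; the remaining steps presumably fetch and build tmux itself against that libevent. A sketch, assuming tmux 1.8 (the current release in the CentOS 6.5 era — the version and download URL are assumptions):

# DOWNLOAD SOURCES FOR TMUX AND MAKE AND INSTALL
cd ..
curl -OL http://downloads.sourceforge.net/tmux/tmux-1.8.tar.gz
tar -xvzf tmux-1.8.tar.gz
cd tmux-1.8
# point the build at the libevent installed under /usr/local above
LDFLAGS="-L/usr/local/lib -Wl,-rpath=/usr/local/lib" ./configure --prefix=/usr/local
make && make install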
# Create a new working directory and cd into it
~$ mkdir -p /path/n5update && cd /path/n5update
# Download all files you need
/path/n5update$ wget --no-check-certificate https://dl.google.com/dl/android/aosp/hammerhead-lrx21o-factory-01315e08.tgz
/path/n5update$ wget --no-check-certificate https://copy.com/pV8d7OdciGi2EUQu/openrecovery-twrp-2.8.1.0-hammerhead.img
# @chainfire: I really hope it's not too cheeky, I just so wanted a hotlink for this one. :| As soon as you ask, it's gone.
/path/n5update$ wget --no-check-certificate https://copy.com/spuYd3VhHiAULMCL/CF-Auto-Root-hammerhead-hammerhead-nexus5.zip
# Extract downloaded file archives
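The gist breaks off after the extract comment. Presumably the step is along these lines — the factory image is a tarball and the CF-Auto-Root package is a zip, while the TWRP .img needs no extraction (a sketch, not the original commands):

/path/n5update$ tar -xvzf hammerhead-lrx21o-factory-01315e08.tgz
/path/n5update$ unzip CF-Auto-Root-hammerhead-hammerhead-nexus5.zip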
# import config.
# You can change the default config with `make cnf="config_special.env" build`
cnf ?= config.env
include $(cnf)
export $(shell sed 's/=.*//' $(cnf))
# import deploy config
# You can change the default deploy config with `make cnf="deploy_special.env" release`
dpl ?= deploy.env
include $(dpl)
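For context on the export line: include $(cnf) loads the file's assignments as make variables, and sed 's/=.*//' strips everything after each =, leaving bare variable names for export so the variables are also visible in the environment of recipe shells. A hypothetical config.env in the format this expects (example names and values, not from the original):

# config.env -- illustrative only
APP_NAME=myapp
DOCKER_REPO=registry.example.com/myteam
PORT=8080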
This guide has moved to a GitHub repository to enable collaboration and community input via pull-requests.
https://github.com/alexellis/k8s-on-raspbian
Alex
The official guide for setting up Kubernetes using kubeadm works well for clusters of a single architecture. The main problem that crops up is that the kube-proxy image defaults to the architecture of the master node (where kubeadm was run in the first place). This causes issues when arm nodes join the cluster: they try to execute the amd64 version of kube-proxy and fail.
It turns out that the pod running kube-proxy is configured using a DaemonSet. With a small edit to that configuration, it's possible to create multiple DaemonSets, one for each architecture.
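A sketch of that edit with kubectl (the resource names and the beta.kubernetes.io/arch node label match kubeadm clusters of that era; verify against your own cluster):

# Duplicate the kube-proxy DaemonSet for ARM nodes (a sketch)
kubectl -n kube-system get daemonset kube-proxy -o yaml > kube-proxy-arm.yaml
# In kube-proxy-arm.yaml: rename the DaemonSet to kube-proxy-arm, point the
# image at the arm build, and add a nodeSelector such as
#   beta.kubernetes.io/arch: arm
# (give the original kube-proxy DaemonSet a matching amd64 nodeSelector)
kubectl -n kube-system create -f kube-proxy-arm.yaml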
Follow the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for setting up the master node. I've been using Weave Net as the network plugin; it seems to work well across architectures.
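For reference, installing Weave Net at the time was a single kubectl apply against the URL from the Weaveworks docs (check the current docs before relying on it):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"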
# Download the file:
wget --output-document .gitignore https://www.toptal.com/developers/gitignore/api/linux,windows,macos,vim,emacs,jetbrains+all,visualstudiocode,c,c++,go,java,node,python,rust,helm,bazel