Skip to content

Instantly share code, notes, and snippets.

@svanellewee
Last active January 16, 2020 13:09
Show Gist options
  • Save svanellewee/62b1f6df823fbfc0515c3d03ebbe7962 to your computer and use it in GitHub Desktop.
Save svanellewee/62b1f6df823fbfc0515c3d03ebbe7962 to your computer and use it in GitHub Desktop.
Making etcd and kube-apiserver work together on a single virtualbox
#!/usr/bin/env bash
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
#apt-get install -y kubelet=1.15.1-00 kubeadm=1.15.1-00 kubectl=1.15.1-00
apt-get install -y docker-ce docker-ce-cli containerd.io
#apt-get install -y avahi-daemon libnss-mdns
cat <<"EOF" > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
systemctl restart docker.service
docker pull calico/node:v3.3.7
docker pull calico/cni:v3.3.7
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
chmod +x cfssl{,json}
mv cfssl{,json} /usr/local/bin
wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
mkdir /tmp/etcd
tar -xvf etcd-v3.4.0-linux-amd64.tar.gz -C /tmp/etcd 2>&1 | grep -v SCHILY
mv /tmp/etcd/etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-scheduler"
for elem in kube-apiserver kube-controller-manager kube-scheduler
do
chmod +x ${elem}
mv ${elem} /usr/local/bin
done
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz \
https://github.com/containerd/containerd/releases/download/v1.2.9/containerd-1.2.9.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubelet
mv runc{.amd64,}
for elem in kubelet kube-proxy runc
do
chmod +x "${elem}"
mv "${elem}" /usr/local/bin
done
#!/usr/bin/env bash
TARGET_DIR=/var/vm-shared/tmp/pems
KUBERNETES_PEM="${TARGET_DIR}/kubernetes.pem"
KUBERNETES_KEY_PEM="${TARGET_DIR}/kubernetes-key.pem"
CA_PEM="${TARGET_DIR}/ca.pem"
CA_KEY_PEM="${TARGET_DIR}/ca-key.pem"
CA_CSR="${TARGET_DIR}/ca.csr"
ADMIN_ACCOUNT_PEM="${TARGET_DIR}/admin.pem"
ADMIN_ACCOUNT_KEY_PEM="${TARGET_DIR}/admin-key.pem"
SERVICE_ACCOUNT_PEM="${TARGET_DIR}/service-account.pem"
SERVICE_ACCOUNT_KEY_PEM="${TARGET_DIR}/service-account-key.pem" # not really used it seems
KUBE_CONTROLLER_MANAGER_PEM="${TARGET_DIR}/kube-controller-manager.pem"
KUBE_CONTROLLER_MANAGER_KEY_PEM="${TARGET_DIR}/kube-controller-manager-key.pem"
KUBE_CONTROLLER_MANAGER_KUBECONFIG="${TARGET_DIR}/kube-controller-manager.kubeconfig"
KUBE_SCHEDULER_PEM="${TARGET_DIR}/kube-scheduler.pem"
KUBE_SCHEDULER_KEY_PEM="${TARGET_DIR}/kube-scheduler-key.pem"
KUBE_SCHEDULER_KUBECONFIG="${TARGET_DIR}/kube-scheduler.kubeconfig"
ENCRYPTION_CONFIG="${TARGET_DIR}/encryption-config.yaml"
function get-ip() {
echo "$(ip addr show enp0s8 | grep -Po "inet \K([\d\.]+)")"
}
function make-ca() {
if [[ -f "${CA_PEM}" ]] && [[ -f "${CA_KEY_PEM}" ]] && [[ -f "${CA_CSR}" ]]
then
echo >&2 "not doing anything, CA exists!"
return
fi
local ca_csr_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_csr_file}"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-initca "${ca_csr_file}" | cfssljson -bare ca
}
function make-admin-ca() {
make-ca
local admin_csr="$(mktemp /tmp/YADDA_XXX)"
if [[ -f "${ADMIN_ACCOUNT_PEM}" ]] && [[ -f "${ADMIN_ACCOUNT_KEY_PEM}" ]]
then
echo >&2 "ADMIN account pems already generated."
return
fi
cat <<- EOF > "${admin_csr}"
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=${CA_PEM} \
-ca-key=${CA_KEY_PEM}\
-config=$(make-ca-config) \
-profile=kubernetes \
"${admin_csr}" | cfssljson -bare admin
}
function make-admin-kubeconfig() {
set -x
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority="${CA_PEM}" \
--embed-certs=true \
--server=https://127.0.0.1:6443
kubectl config set-credentials admin \
--client-certificate="${ADMIN_ACCOUNT_PEM}" \
--client-key="${ADMIN_ACCOUNT_KEY_PEM}"
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
set +x
}
function add-to-hosts() {
grep -q worker /etc/hosts
if [[ "$?" -eq 0 ]]
then
echo >&2 "not going to modify hosts"
return
fi
cat <<- "EOF" > /tmp/temp-hosts
192.168.2.30 worker-0
192.168.2.31 worker-1
192.168.2.32 worker-2
192.168.2.20 controller-0
192.168.2.21 controller-1
192.168.2.22 controller-2
EOF
cat /etc/hosts >> /tmp/temp-hosts
cp /tmp/temp-hosts /etc/hosts
}
function make-ca-config() {
echo >&2 "calling the ca config function"
local ca_config_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_config_file}"
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
echo "${ca_config_file}"
}
function make-cert-service-account() {
if [[ -f "${SERVICE_ACCOUNT_PEM}" ]] && [[ -f "${SERVICE_ACCOUNT_KEY_PEM}" ]]
then
echo >&2 "Service account pems already generated."
return
fi
#make-ca
local ca_config_file="$(make-ca-config)"
local service_account_csr="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${service_account_csr}"
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="${ca_config_file}" \
-profile=kubernetes \
"${service_account_csr}" | cfssljson -bare service-account
}
function make-cert-kubernetes-api-server() {
make-ca
if [[ -f "${KUBERNETES_KEY_PEM}" ]] && [[ -f "${KUBERNETES_PEM}" ]]
then
echo >&2 "api server certs already exist, doing nothing!"
return
fi
local ca_config_file="$(make-ca-config)"
rm -fr kubernetes{,-key}.pem
local controller_count=${1:-1}
local X=($(for (( index=0; index < "${controller_count}"; index++ )) do echo "192.168.2.2${index}"; done))
local KUBERNETES_CONTROLLER_ADDR_STRING="$(echo "$(IFS=,; echo "${X[*]}")")"
local KUBERNETES_HOSTNAMES="kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local"
local kubernetes_csr=$(mktemp /tmp/kube-api-server-csrXXX)
cat <<-EOF > "${kubernetes_csr}"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="${ca_config_file}" \
-hostname=10.32.0.1,${KUBERNETES_CONTROLLER_ADDR_STRING},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
"${kubernetes_csr}" | cfssljson -bare kubernetes
}
function make-config-encryption-key() {
if [[ -f "${ENCRYPTION_CONFIG}" ]]
then
echo >&2 "already created crypt config"
return
fi
local ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat <<- EOF > "${ENCRYPTION_CONFIG}"
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
}
function run-apiserver() {
local controller_count="${1:-3}"
local X=($(for (( index=0; index < "${controller_count}"; index++ )) do echo "https://192.168.2.2${index}:2379"; done))
local ETCD_ADDR_STRING="$(echo "$(IFS=,; echo "${X[*]}")")"
/usr/local/bin/kube-apiserver --advertise-address=$(get-ip) \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file="${CA_PEM}" \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-cafile="${CA_PEM}" \
--etcd-certfile="${KUBERNETES_PEM}" \
--etcd-keyfile="${KUBERNETES_KEY_PEM}" \
--etcd-servers="${ETCD_ADDR_STRING}" \
--event-ttl=1h \
--encryption-provider-config="${ENCRYPTION_CONFIG}" \
--kubelet-certificate-authority="${CA_PEM}" \
--kubelet-client-certificate="${KUBERNETES_PEM}" \
--kubelet-client-key="${KUBERNETES_KEY_PEM}" \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file="${SERVICE_ACCOUNT_PEM}" \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file="${KUBERNETES_PEM}" \
--tls-private-key-file="${KUBERNETES_KEY_PEM}" \
--v=6
}
function run-etcd() {
local name="${HOSTNAME}"
local URL="https://$(get-ip)"
/usr/local/bin/etcd --name "${name}" \
--cert-file="${KUBERNETES_PEM}" \
--key-file="${KUBERNETES_KEY_PEM}" \
--peer-cert-file="${KUBERNETES_PEM}" \
--peer-key-file="${KUBERNETES_KEY_PEM}" \
--trusted-ca-file="${CA_PEM}" \
--peer-trusted-ca-file="${CA_PEM}" \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls "${URL}:2380" \
--listen-peer-urls "${URL}:2380" \
--listen-client-urls "${URL}:2379",https://127.0.0.1:2379 \
--advertise-client-urls "${URL}:2379" \
--initial-cluster-token 'etcd-cluster-0' \
--initial-cluster="controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380" \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
}
function check-etcd() {
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert="${CA_PEM}" \
--cert="${KUBERNETES_PEM}" \
--key="${KUBERNETES_KEY_PEM}"
}
function make-cert-controller-manager() {
if [[ -f "${KUBE_CONTROLLER_MANAGER_PEM}" ]] && [[ -f "${KUBE_CONTROLLER_MANAGER_KEY_PEM}" ]]
then
echo >&2 "not createing controller manager certs"
return
fi
cat > kube-controller-manager-csr.json <<- EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}"\
-config="$(make-ca-config)" \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority="${CA_PEM}" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate="${KUBE_CONTROLLER_MANAGER_PEM}" \
--client-key="${KUBE_CONTROLLER_MANAGER_KEY_PEM}" \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
function run-controller-manager() {
/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file="${CA_PEM}" \
--cluster-signing-key-file="${CA_KEY_PEM}" \
--kubeconfig="${KUBE_CONTROLLER_MANAGER_KUBECONFIG}" \
--leader-elect=true \
--root-ca-file="${CA_PEM}" \
--service-account-private-key-file="${SERVICE_ACCOUNT_KEY_PEM}" \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
}
function make-cert-scheduler() {
if [[ -f "${KUBE_SCHEDULER_PEM}" ]] && [[ -f "${KUBE_SCHEDULER_KEY_PEM}" ]]
then
echo >&2 "Kube scheduler certs already created"
fi
cat > kube-scheduler-csr.json <<- EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config=$(make-ca-config) \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority="${CA_PEM}" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate="${KUBE_SCHEDULER_PEM}" \
--client-key="${KUBE_SCHEDULER_KEY_PEM}" \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
function run-scheduler() {
cat <<- EOF | tee /tmp/config.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "${KUBE_SCHEDULER_KUBECONFIG}"
leaderElection:
leaderElect: true
EOF
/usr/local/bin/kube-scheduler \
--config=/tmp/config.yaml \
--v=2
}
#if [[ ! -d "/tmp/etcd" ]]
#then
# mkdir /tmp/etcd
# tar -xvf /home/vagrant/etcd-v3.4.0-linux-amd64.tar.gz -C /tmp/etcd 2>&1 | grep -v SCHILY
# mv /tmp/etcd/etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
#fi
mkdir -p "${TARGET_DIR}"
cd "${TARGET_DIR}"
add-to-hosts
make-ca
make-cert-kubernetes-api-server 3
make-cert-service-account
make-config-encryption-key
make-cert-controller-manager
make-cert-scheduler
function run-stuff() {
run-etcd > /tmp/etcd.log 2>&1 & echo $! > /tmp/etcd.pid
run-apiserver > /tmp/api-server.log 2>&1 & echo $! > /tmp/api-server.pid
run-controller-manager > /tmp/controller-manager.log 2>&1 & echo $! > /tmp/manager.pid
run-scheduler > /tmp/scheduler.log 2>&1 & echo $! > /tmp/scheduler.pid
}
#!/usr/bin/env bash
TARGET_DIR=/var/vm-shared/tmp/pems
KUBERNETES_PEM="${TARGET_DIR}/kubernetes.pem"
KUBERNETES_KEY_PEM="${TARGET_DIR}/kubernetes-key.pem"
CA_PEM="${TARGET_DIR}/ca.pem"
CA_KEY_PEM="${TARGET_DIR}/ca-key.pem"
CA_CSR="${TARGET_DIR}/ca.csr"
SERVICE_ACCOUNT_PEM="${TARGET_DIR}/service-account.pem"
SERVICE_ACCOUNT_KEY_PEM="${TARGET_DIR}/service-account-key.pem" # not really used it seems
KUBE_CONTROLLER_MANAGER_PEM="${TARGET_DIR}/kube-controller-manager.pem"
KUBE_CONTROLLER_MANAGER_KEY_PEM="${TARGET_DIR}/kube-controller-manager-key.pem"
KUBE_CONTROLLER_MANAGER_KUBECONFIG="${TARGET_DIR}/kube-controller-manager.kubeconfig"
KUBE_SCHEDULER_PEM="${TARGET_DIR}/kube-scheduler.pem"
KUBE_SCHEDULER_KEY_PEM="${TARGET_DIR}/kube-scheduler-key.pem"
KUBE_SCHEDULER_KUBECONFIG="${TARGET_DIR}/kube-scheduler.kubeconfig"
WORKER_PEM="${TARGET_DIR}/${HOSTNAME}.pem"
WORKER_CSR="${TARGET_DIR}/${HOSTNAME}.csr"
WORKER_KEY_PEM="${TARGET_DIR}/${HOSTNAME}-key.pem"
WORKER_KUBECONFIG="${TARGET_DIR}/${HOSTNAME}.kubeconfig"
KUBE_PROXY_PEM="${TARGET_DIR}/kube-proxy.pem"
KUBE_PROXY_CSR="${TARGET_DIR}/kube-proxy.csr"
KUBE_PROXY_KEY_PEM="${TARGET_DIR}/kube-proxy-key.pem"
KUBE_PROXY_KUBECONFIG="${TARGET_DIR}/kube-proxy.kubeconfig"
function get-ip() {
echo "$(ip addr show enp0s8 | grep -Po "inet \K([\d\.]+)")"
}
function get-cidr() {
echo "$(ip addr show dev enp0s8| grep -Po 'inet \K([0-9\.\/]+)')"
}
function add-to-hosts() {
grep -q worker /etc/hosts
if [[ "$?" -eq 0 ]]
then
echo >&2 "not going to modify hosts"
return
fi
cat <<- "EOF" > /tmp/temp-hosts
192.168.2.30 worker-0
192.168.2.31 worker-1
192.168.2.32 worker-2
192.168.2.20 controller-0
192.168.2.21 controller-1
192.168.2.22 controller-2
EOF
cat /etc/hosts >> /tmp/temp-hosts
cp /tmp/temp-hosts /etc/hosts
}
echo "BEFORE DEF"
# Copied from controller-X provision.sh. Common lib candidate!
function make-ca() {
if [[ -f "${CA_PEM}" ]] && [[ -f "${CA_KEY_PEM}" ]] && [[ -f "${CA_CSR}" ]]
then
echo >&2 "not doing anything, CA exists!"
return
fi
local ca_csr_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_csr_file}"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-initca "${ca_csr_file}" | cfssljson -bare ca
}
# Copied from controller-X provision.sh. Common lib candidate!
function make-ca-config() {
echo >&2 "calling the ca config function"
local ca_config_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_config_file}"
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
echo "${ca_config_file}"
}
function make-worker-cert() {
if [[ ! -f "${CA_PEM}" ]] || [[ ! -f "${CA_KEY_PEM}" ]]
then
echo >&2 "no CA files generated!!!!"
return 1
fi
if [[ -f "${WORKER_PEM}" ]] && [[ -f "${WORKER_KEY_PEM}" ]] && [[ -f "${WORKER_CSR}" ]]
then
echo >&2 "not doing anything ${WORKER_PEM} exists!"
return
fi
local instance="${HOSTNAME}"
cat <<- EOF > ${instance}-csr.json
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
#EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
# --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
#INTERNAL_IP=$(gcloud compute instances describe ${instance} \
# --format 'value(networkInterfaces[0].networkIP)')
local ca_config_file=$(make-ca-config)
local INTERNAL_IP="$(get-ip)"
# assumes make-ca was run by controller provisioner run...
cfssl gencert \
-ca=${CA_PEM} \
-ca-key=${CA_KEY_PEM} \
-config=${ca_config_file} \
-hostname=${instance},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
}
function make-conf-cni() {
local POD_CIDR="192.168.2.0/24"
cat <<- EOF | tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
cat <<- EOF | tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"name": "lo",
"type": "loopback"
}
EOF
}
function make-conf-containerd() {
cat <<- EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
EOF
cat <<- EOF | tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
}
function start-containerd() {
make-conf-containerd
/sbin/modprobe overlay
/bin/containerd --log-level info
}
make-proxy-certif() {
if [[ -f "${KUBE_PROXY_PEM}" ]] && [[ -f "${KUBE_PROXY_KEY_PEM}" ]]
then
echo >&2 "not recreating proxy pems"
return
fi
cat<<- EOF > ${KUBE_PROXY_CSR}
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="$(make-ca-config)" \
-profile=kubernetes \
${KUBE_PROXY_CSR} | cfssljson -bare kube-proxy
}
function make-proxy-kubeconfig() {
make-proxy-certif
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=${CA_PEM} \
--embed-certs=true \
--server=https://192.168.2.20:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=${KUBE_PROXY_PEM} \
--client-key=${KUBE_PROXY_KEY_PEM} \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
make-proxy-kubeconfig
function make-worker-kubeconfig() {
local instance="${HOSTNAME}"
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=${CA_PEM} \
--embed-certs=true \
--server=https://192.168.2.20:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${WORKER_PEM} \
--client-key=${WORKER_KEY_PEM} \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
}
function make-conf-kubelet() {
# Hacked this with "AlwaysAllow" instead of "Webhook". No luck making it work that way ... yet
# https://groups.google.com/forum/#!topic/kubernetes-users/qpoqsHpKoas
make-worker-cert
make-worker-kubeconfig
cp ${WORKER_KEY_PEM} ${WORKER_PEM} /var/lib/kubelet/
cp ${WORKER_KUBECONFIG} /var/lib/kubelet/kubeconfig
cp ${CA_PEM} /var/lib/kubernetes/
local POD_CIDR="$(get-cidr)"
cat <<- EOF | tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "${CA_PEM}"
authorization:
mode: AlwaysAllow
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "${WORKER_PEM}"
tlsPrivateKeyFile: "${WORKER_KEY_PEM}"
EOF
}
function start-kubelet() {
make-conf-kubelet
/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \
--register-node=true \
--v=2
}
function make-conf-kube-proxy() {
make-proxy-kubeconfig
cat <<- EOF | tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "${KUBE_PROXY_KUBECONFIG}"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
}
function start-kube-proxy() {
make-conf-kube-proxy
/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml
}
#sleep 19
echo "ATER DEF....."
function main() {
add-to-hosts
systemctl stop docker containerd
systemctl disable docker containerd
swapoff -a
/sbin/modprobe overlay
make-ca
make-ca-config
make-worker-cert
make-worker-kubeconfig
make-conf-cni
}
function run-stuff() {
start-containerd > /tmp/containerd.log 2>&1 & echo $! > /tmp/containerd.pid
start-kubelet > /tmp/kubelet.log 2>&1 & echo $! > /tmp/kubelet.pid
start-kube-proxy > /tmp/proxy.log 2>&1 & echo $! > /tmp/proxy.pid
}
cd "${TARGET_DIR}"
main
#!/usr/bin/env bash
TARGET_DIR=/var/vm-shared/tmp/pems
KUBERNETES_PEM="${TARGET_DIR}/kubernetes.pem"
KUBERNETES_KEY_PEM="${TARGET_DIR}/kubernetes-key.pem"
CA_PEM="${TARGET_DIR}/ca.pem"
CA_KEY_PEM="${TARGET_DIR}/ca-key.pem"
CA_CSR="${TARGET_DIR}/ca.csr"
SERVICE_ACCOUNT_PEM="${TARGET_DIR}/service-account.pem"
SERVICE_ACCOUNT_KEY_PEM="${TARGET_DIR}/service-account-key.pem" # not really used it seems
KUBE_CONTROLLER_MANAGER_PEM="${TARGET_DIR}/kube-controller-manager.pem"
KUBE_CONTROLLER_MANAGER_KEY_PEM="${TARGET_DIR}/kube-controller-manager-key.pem"
KUBE_CONTROLLER_MANAGER_KUBECONFIG="${TARGET_DIR}/kube-controller-manager.kubeconfig"
KUBE_SCHEDULER_PEM="${TARGET_DIR}/kube-scheduler.pem"
KUBE_SCHEDULER_KEY_PEM="${TARGET_DIR}/kube-scheduler-key.pem"
KUBE_SCHEDULER_KUBECONFIG="${TARGET_DIR}/kube-scheduler.kubeconfig"
function get-ip() {
echo "$(ip addr show enp0s8 | grep -Po "inet \K([\d\.]+)")"
}
function make-ca() {
if [[ -f "${CA_PEM}" ]] && [[ -f "${CA_KEY_PEM}" ]] && [[ -f "${CA_CSR}" ]]
then
echo >&2 "not doing anything, CA exists!"
return
fi
local ca_csr_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_csr_file}"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-initca "${ca_csr_file}" | cfssljson -bare ca
}
function make-ca-config() {
echo >&2 "calling the ca config function"
local ca_config_file="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${ca_config_file}"
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
echo "${ca_config_file}"
}
function make-cert-service-account() {
if [[ -f "${SERVICE_ACCOUNT_PEM}" ]] && [[ -f "${SERVICE_ACCOUNT_KEY_PEM}" ]]
then
echo >&2 "Service account pems already generated."
return
fi
#make-ca
local ca_config_file="$(make-ca-config)"
local service_account_csr="$(mktemp /tmp/YADDA_XXX)"
cat <<- EOF > "${service_account_csr}"
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="${ca_config_file}" \
-profile=kubernetes \
"${service_account_csr}" | cfssljson -bare service-account
}
function make-cert-kubernetes-api-server() {
make-ca
if [[ -f "${KUBERNETES_KEY_PEM}" ]] && [[ -f "${KUBERNETES_PEM}" ]]
then
echo >&2 "api server certs already exist, doing nothing!"
return
fi
local ca_config_file="$(make-ca-config)"
rm -fr kubernetes{,-key}.pem
local controller_count=${1:-1}
local X=($(for (( index=0; index < "${controller_count}"; index++ )) do echo "192.168.2.2${index}"; done))
local KUBERNETES_CONTROLLER_ADDR_STRING="$(echo "$(IFS=,; echo "${X[*]}")")"
local KUBERNETES_HOSTNAMES="kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local"
local kubernetes_csr=$(mktemp /tmp/kube-api-server-csrXXX)
cat <<-EOF > "${kubernetes_csr}"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="${ca_config_file}" \
-hostname=10.32.0.1,${KUBERNETES_CONTROLLER_ADDR_STRING},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
"${kubernetes_csr}" | cfssljson -bare kubernetes
}
function make-config-encryption-key() {
if [[ -f encryption-config.yaml ]]
then
echo >&2 "already created crypt config"
return
fi
local ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat <<- EOF > encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
}
function run-apiserver() {
/usr/local/bin/kube-apiserver --advertise-address=$(get-ip) \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file="${CA_PEM}" \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-cafile="${CA_PEM}" \
--etcd-certfile="${KUBERNETES_PEM}" \
--etcd-keyfile="${KUBERNETES_KEY_PEM}" \
--etcd-servers=https://127.0.0.1:2379 \
--event-ttl=1h \
--encryption-provider-config=encryption-config.yaml \
--kubelet-certificate-authority="${CA_PEM}" \
--kubelet-client-certificate="${KUBERNETES_PEM}" \
--kubelet-client-key="${KUBERNETES_KEY_PEM}" \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file="${SERVICE_ACCOUNT_PEM}" \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file="${KUBERNETES_PEM}" \
--tls-private-key-file="${KUBERNETES_KEY_PEM}" \
--v=4
}
function run-etcd() {
local name="${HOSTNAME}"
local URL="https://$(get-ip)"
/usr/local/bin/etcd --name "${name}" \
--cert-file="${KUBERNETES_PEM}" \
--key-file="${KUBERNETES_KEY_PEM}" \
--peer-cert-file="${KUBERNETES_PEM}" \
--peer-key-file="${KUBERNETES_KEY_PEM}" \
--trusted-ca-file="${CA_PEM}" \
--peer-trusted-ca-file="${CA_PEM}" \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls "${URL}:2380" \
--listen-peer-urls "${URL}:2380" \
--listen-client-urls "${URL}:2379",https://127.0.0.1:2379 \
--advertise-client-urls "${URL}:2379" \
--initial-cluster-token 'etcd-cluster-0' \
--initial-cluster="controller-0=https://192.168.2.20:2380,controller-1=https://192.168.2.21:2380,controller-2=https://192.168.2.22:2380" \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
}
function check-etcd() {
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert="${CA_PEM}" \
--cert="${KUBERNETES_PEM}" \
--key="${KUBERNETES_KEY_PEM}"
}
function make-cert-controller-manager() {
if [[ -f "${KUBE_CONTROLLER_MANAGER_PEM}" ]] && [[ -f "${KUBE_CONTROLLER_MANAGER_KEY_PEM}" ]]
then
echo >&2 "not createing controller manager certs"
return
fi
cat > kube-controller-manager-csr.json <<- EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}"\
-config="$(make-ca-config)" \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority="${CA_PEM}" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate="${KUBE_CONTROLLER_MANAGER_PEM}" \
--client-key="${KUBE_CONTROLLER_MANAGER_KEY_PEM}" \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
function run-controller-manager() {
/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file="${CA_PEM}" \
--cluster-signing-key-file="${CA_KEY_PEM}" \
--kubeconfig="${KUBE_CONTROLLER_MANAGER_KUBECONFIG}" \
--leader-elect=true \
--root-ca-file="${CA_PEM}" \
--service-account-private-key-file="${SERVICE_ACCOUNT_KEY_PEM}" \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
}
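Because `run-controller-manager` is started in the background, a crash is easy to miss. A quick probe, sketched here as a hypothetical companion to `check-etcd`, can confirm it is up; this assumes the v1.15-era insecure health port is still enabled.

```shell
# Hypothetical check, assuming the v1.15 insecure health port (10252);
# later releases serve /healthz only on the secure port (10257).
check-controller-manager() {
	curl -fsS http://127.0.0.1:10252/healthz
	echo
}
```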
function make-cert-scheduler() {
if [[ -f "${KUBE_SCHEDULER_PEM}" ]] && [[ -f "${KUBE_SCHEDULER_KEY_PEM}" ]]
then
echo >&2 "Kube scheduler certs already created; not recreating"
return
fi
cat > kube-scheduler-csr.json <<- EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca="${CA_PEM}" \
-ca-key="${CA_KEY_PEM}" \
-config="$(make-ca-config)" \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority="${CA_PEM}" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate="${KUBE_SCHEDULER_PEM}" \
--client-key="${KUBE_SCHEDULER_KEY_PEM}" \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
function run-scheduler() {
cat <<- EOF | tee /tmp/config.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "${KUBE_SCHEDULER_KUBECONFIG}"
leaderElection:
leaderElect: true
EOF
/usr/local/bin/kube-scheduler \
--config=/tmp/config.yaml \
--v=2
}
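The same background-process caveat applies to `run-scheduler`. A matching hypothetical probe, assuming the v1.15 insecure health port:

```shell
# Hypothetical check, assuming the v1.15 insecure health port (10251);
# later releases serve /healthz only on the secure port (10259).
check-scheduler() {
	curl -fsS http://127.0.0.1:10251/healthz
	echo
}
```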
mkdir -p "${TARGET_DIR}"
cd "${TARGET_DIR}"
make-ca
make-cert-kubernetes-api-server 3
make-cert-service-account
make-config-encryption-key
make-cert-controller-manager
make-cert-scheduler
run-etcd &
run-apiserver &
run-controller-manager &
run-scheduler &
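With all four components launched in the background, a smoke test ties them together. This is a hypothetical sketch: it assumes an `admin.kubeconfig` was generated earlier in the script (not shown in this section), and relies on `kubectl get componentstatuses`, which the v1.15 API still serves (it was deprecated in 1.19).

```shell
# Hypothetical smoke test; admin.kubeconfig is an assumed artifact.
smoke-test() {
	local kubeconfig="${1:-admin.kubeconfig}"
	# etcd, controller-manager and scheduler should all report Healthy.
	kubectl get componentstatuses --kubeconfig "${kubeconfig}"
	# Confirms the API server answers on 127.0.0.1:6443.
	kubectl version --short --kubeconfig "${kubeconfig}"
}
```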
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
cur_state=ENV["STATE"] || "startup"
Vagrant.configure("2") do |config|
config.vm.provider "virtualbox" do |vb|
vb.memory="512"
end
(0..2).each do |n|
config.vm.define "controller-#{n}" do |controller|
controller.vm.box = "hardway-gamma"
controller.vm.network :private_network, ip: "192.168.2.2#{n}"
controller.vm.network :forwarded_port, guest: 22, host: 2220 + n, id: "ssh"
controller.vm.hostname = "controller-#{n}"
controller.vm.provider "virtualbox" do |vb|
vb.memory = "640"
vb.cpus = 2
end
controller.vm.synced_folder ".", "/var/vm-shared", create: true
#controller.vm.provision "shell", path: "provision.sh", args: [1,2, cur_state]
# controller.vm.provision "shell", path: "provision-controller.sh" , args: ["192.168.2.20"]
end
end
(0..1).each do |n|
config.vm.define "worker-#{n}" do |worker|
worker.vm.box = "hardway-gamma"
worker.vm.network :private_network, ip: "192.168.2.3#{n}"
worker.vm.network :forwarded_port, guest: 22, host: 2230 + n, id: "ssh"
worker.vm.hostname = "worker-#{n}"
worker.vm.provider "virtualbox" do |vb|
vb.memory = "640"
vb.cpus = 2
end
worker.vm.synced_folder ".", "/var/vm-shared", create: true
# worker.vm.provision "shell", path: "provision-worker.sh", args: ["192.168.2.30"]
end
end
#config.vm.provision "shell", path: "provision-etcd-configured.sh"
end
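Three controllers plus two workers at 640 MB each can exhaust a small host, so bringing nodes up one at a time is safer. A hypothetical convenience wrapper around the standard Vagrant commands:

```shell
# Hypothetical helper: start and enter one VM at a time.
# Usage: up-node controller-0   (or worker-1, etc.)
up-node() {
	local name="$1"
	vagrant up "${name}" && vagrant ssh "${name}"
}
```

SSH is also reachable directly via the forwarded ports (2220-2222 for controllers, 2230-2231 for workers).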
# -*- mode: ruby -*-
# vi: set ft=ruby :
# vagrant package --output base-alt
# vagrant box add base-alt --name hardway-2 # ----> use hardway-2 as the box!
Vagrant.configure("2") do |config|
config.vm.define :alpha do |alpha|
alpha.vm.box = "ubuntu/bionic64"
alpha.vm.network :private_network, ip: "10.0.2.10", virtualbox__intnet: true
alpha.vm.hostname = "alpha"
alpha.vm.provider "virtualbox" do |vb|
vb.memory = "4096"
vb.cpus = 2
end
alpha.vm.synced_folder ".", "/var/vm-shared", create: true
end
# config.vm.network "forwarded_port", guest:6443, host: 7443, host_ip: "127.0.0.1"
config.vm.provision "shell", path: "provision.sh"
end
Build the base box as follows:

stephan:~ $  vagrant package --output base-alt
stephan:~ $  vagrant box add base-alt --name hardway-2

Then point the Vagrantfile to hardway-2
