Exact commands run to stand up k8s 1.9.x on GCP with CNI / RBAC.

Kubernetes the Hard Way

https://github.com/kelseyhightower/kubernetes-the-hard-way

GCE NETWORKING FOR K8S

gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom

gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0

Get a static IP for the external load balancer fronting the API servers

gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)

Create 3 controller instances (Ubuntu 16.04)

for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done

List them

gcloud compute instances list

Create Workers

Each worker gets a /24 pod CIDR out of 10.200.0.0/16 (set via instance metadata)

for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done

list them

gcloud compute instances list

Prepare the ansible inventory (INI format, despite the .yml extension)

NOTE: you must first add your SSH public key to the instances; see https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
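
For reference, one way to do that is to add the key to the project-wide metadata. A hedged sketch (the username and key path are placeholders, and this replaces any existing project-level ssh-keys value, so include every key you want to keep in the file):

echo "wallnerryan:$(cat ~/.ssh/wallnerryan.pub)" > /tmp/gce-ssh-keys
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=/tmp/gce-ssh-keys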

echo "[k8snodes]" > k8shardway.yml 
gcloud compute instances list --format=json     | jq '.[].networkInterfaces[].accessConfigs[].natIP' >> k8shardway.yml 

Now, test connectivity with ansible

ansible k8snodes -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "ip addr show"
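
A lighter-weight connectivity check with the ping module also works (same inventory and key assumed):

ansible k8snodes -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m ping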

TLS certs for etcd, the k8s API server, etc.

Using CloudFlare's cfssl/cfssljson, following https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md

Create the CA signing config

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

CA Signing Request

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Boston",
      "O": "Kubernetes",
      "OU": "MA",
      "ST": "Massachusetts"
    }
  ]
}
EOF

Generate CA cert and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
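
Optionally sanity-check the CA before moving on (a hedged example, assuming openssl is installed locally):

openssl x509 -in ca.pem -noout -subject -dates
# expect a subject containing CN=Kubernetes and O=Kubernetes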

Create Admin cert signing request

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Boston",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Massachusetts"
    }
  ]
}
EOF

Generate Admin Cert

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

Create worker node certs

for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Boston",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Massachusetts"
    }
  ]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

Create Kube Proxy client signing req

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Boston",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Massachusetts"
    }
  ]
}
EOF

Generate kube proxy cert

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

The Kubernetes API Server Certificate

The kubernetes-the-hard-way static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

Create apiserver csr

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Boston",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Massachusetts"
    }
  ]
}
EOF

Generate apiserver cert

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
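
It is worth confirming the SANs made it into the API server cert (another hedged openssl check):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
# should list kubernetes.default, 10.32.0.1, 127.0.0.1, the controller IPs and ${KUBERNETES_PUBLIC_ADDRESS}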

Copy certs to the worker and controller nodes

for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
done

output

"""
Warning: Permanently added 'compute.3919622658747123252' (ECDSA) to the list of known hosts.
ca.pem                                                                          100% 1330     1.3KB/s   00:00    
worker-0-key.pem                                                                100% 1679     1.6KB/s   00:00    
worker-0.pem                                                                    100% 1509     1.5KB/s   00:00    


Warning: Permanently added 'compute.3081782171177961196' (ECDSA) to the list of known hosts.
ca.pem                                                                          100% 1330     1.3KB/s   00:00    
ca-key.pem                                                                      100% 1679     1.6KB/s   00:00    
kubernetes-key.pem                                                              100% 1679     1.6KB/s   00:00    
kubernetes.pem                                                                  100% 1537     1.5KB/s   00:00    
"""

The kube-proxy and kubelet client certificates will be used to generate client authentication configuration files in the next lab. Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Retrieve the kubernetes-the-hard-way static IP address:

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

Create kubeconfigs for workers

When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.

for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
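
A quick look at one of the generated files confirms the cluster/user/context wiring (hedged example; output should look roughly like this):

kubectl config get-contexts --kubeconfig=worker-0.kubeconfig
# CURRENT   NAME      CLUSTER                   AUTHINFO               NAMESPACE
# *         default   kubernetes-the-hard-way   system:node:worker-0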

Create kubeconfig for Kube Proxy

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig files for the workers

for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

ETCD DATA ENCRYPTION (secrets at rest)

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
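
The aescbc provider expects a 32-byte key, so a quick sanity check doesn't hurt (GNU coreutils base64 syntax assumed):

echo -n $ENCRYPTION_KEY | base64 --decode | wc -c
# 32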

Encryption config yaml

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy encryption file to each controller

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done
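
Later, once the API servers are running, you can confirm secrets really are encrypted at rest; this check is borrowed from the upstream kubernetes-the-hard-way docs:

kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"

gcloud compute ssh controller-0 \
  --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
# the stored value should be prefixed with k8s:enc:aescbc:v1:key1 rather than plaintext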

Add a [k8scontrollers] group for ansible yml

echo "" >> k8shardway.yml; echo "[k8scontrollers]" >> k8shardway.yml; gcloud compute instances list --regexp "^controller.*" --format=json     | jq '.[].networkInterfaces[].accessConfigs[].natIP' >> k8shardway.yml

ETCD

download etcd on controllers

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "wget -q --show-progress --https-only --timestamping 'https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz'"

extract

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "tar -xvf etcd-v3.2.11-linux-amd64.tar.gz"

move

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin/"

configure

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mkdir -p /etc/etcd /var/lib/etcd"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/"

Generate and install an etcd systemd unit on each controller

for instance in controller-0 controller-1 controller-2; do

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cat > etcd.service.${instance} <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${instance} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

gcloud compute scp etcd.service.${instance}  ${instance}:~/

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a " mv etcd.service.${instance} /etc/systemd/system/etcd.service"

done

Then start etcd

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl daemon-reload"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl enable etcd"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl start etcd"

Make sure etcd is running

gcloud compute ssh controller-0 --command "ETCDCTL_API=3 etcdctl member list"
#3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
#f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
#ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379

SETUP THE CONTROLLERS

Get Binaries

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "wget -q --show-progress --https-only --timestamping https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl"

chmod the files

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl"

move to appropriate place

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/"

Configure the API Server

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mkdir -p /var/lib/kubernetes/"

Move kubernetes pem files and encryption config

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/"

Configure the kube-apiserver, kube-controller-manager, and kube-scheduler services

for instance in controller-0 controller-1 controller-2; do

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cat > kube-apiserver.service.${instance} <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --insecure-bind-address=127.0.0.1 \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-ca-file=/var/lib/kubernetes/ca.pem \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

cat > kube-controller-manager.service.${instance} <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --leader-elect=true \\
  --master=http://127.0.0.1:8080 \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

cat > kube-scheduler.service.${instance} <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --leader-elect=true \\
  --master=http://127.0.0.1:8080 \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


gcloud compute scp kube-apiserver.service.${instance}  ${instance}:~/kube-apiserver.service
gcloud compute scp kube-controller-manager.service.${instance}   ${instance}:~/kube-controller-manager.service
gcloud compute scp kube-scheduler.service.${instance}  ${instance}:~/kube-scheduler.service

done

Install and start the services

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl daemon-reload"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl enable kube-apiserver kube-controller-manager kube-scheduler"

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl start kube-apiserver kube-controller-manager kube-scheduler"

Check component health

~/kubernetes/k8s-LFS258$ ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "kubectl get componentstatuses"
35.229.47.216 | SUCCESS | rc=0 >>
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   

35.227.49.217 | SUCCESS | rc=0 >>
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   

35.231.7.77 | SUCCESS | rc=0 >>
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   

RBAC for Kubelet Authorization Cluster Role

https://kubernetes.io/docs/admin/kubelet-authentication-authorization/

for instance in controller-0 controller-1 controller-2; do

cat > kube-apiserver-to-kublete.${instance} <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1 
kind: ClusterRole 
metadata: 
  annotations: 
    rbac.authorization.kubernetes.io/autoupdate: "true" 
  labels: 
    kubernetes.io/bootstrapping: rbac-defaults 
  name: system:kube-apiserver-to-kubelet 
rules: 
  - apiGroups: 
      - "" 
    resources: 
      - nodes/proxy 
      - nodes/stats 
      - nodes/log 
      - nodes/spec 
      - nodes/metrics 
    verbs: 
      - "*" 
EOF

gcloud compute scp kube-apiserver-to-kublete.${instance}  ${instance}:~/kube-apiserver-to-kublete.yaml

done

Apply the RBAC ClusterRole via kubectl

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a " kubectl apply -f kube-apiserver-to-kublete.yaml"

Create Cluster Role Binding

for instance in controller-0 controller-1 controller-2; do

cat > kube-apiserver-to-kublete-binding.${instance} <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

gcloud compute scp kube-apiserver-to-kublete-binding.${instance}  ${instance}:~/kube-apiserver-to-kublete-binding.yaml

done

Create the ClusterRoleBinding with kubectl

ansible k8scontrollers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a " kubectl apply -f kube-apiserver-to-kublete-binding.yaml"

The Kubernetes Frontend Load Balancer

create the target pool

gcloud compute target-pools create kubernetes-target-pool


gcloud compute target-pools add-instances kubernetes-target-pool \
  --instances controller-0,controller-1,controller-2

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(name)')

gcloud compute forwarding-rules create kubernetes-forwarding-rule \
  --address ${KUBERNETES_PUBLIC_ADDRESS} \
  --ports 6443 \
  --region $(gcloud config get-value compute/region) \
  --target-pool kubernetes-target-pool

Verify

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

curl --cacert cfssl/ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version

output

~/kubernetes/k8s-LFS258$ curl --cacert cfssl/ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
  "major": "1",
  "minor": "9",
  "gitVersion": "v1.9.0",
  "gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
  "gitTreeState": "clean",
  "buildDate": "2017-12-15T20:55:30Z",
  "goVersion": "go1.9.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Kubernetes Workers

Add a [k8sworkers] group to the ansible inventory

echo "" >> k8shardway.yml
echo "[k8sworkers]" >> k8shardway.yml
gcloud compute instances list --regexp "^worker.*" --format=json | jq '.[].networkInterfaces[].accessConfigs[].natIP' >> k8shardway.yml

The socat binary enables support for the kubectl port-forward command.

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "apt-get install -y socat"

Get binaries for workers

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "wget -q --show-progress --https-only --timestamping https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz https://github.com/containerd/cri-containerd/releases/download/v1.0.0-beta.1/cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet"

create needed directories

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/kubelet /var/lib/kube-proxy /var/lib/kubernetes /var/run/kubernetes"

install worker binaries


ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/"

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "tar -xvf cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz -C /"

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "chmod +x kubectl kube-proxy kubelet"

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv kubectl kube-proxy kubelet /usr/local/bin/"

configure CNI networking on each worker

for instance in worker-0 worker-1 worker-2; do

POD_CIDR=$(gcloud compute instances describe ${instance} --format 'value(metadata[items][0].value)')

cat > 10-bridge.conf.${instance} <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

cat > 99-loopback.conf.${instance} <<EOF
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

gcloud compute scp 10-bridge.conf.${instance}  ${instance}:~/10-bridge.conf
gcloud compute scp 99-loopback.conf.${instance}  ${instance}:~/99-loopback.conf

done

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/"

Move the PEM files and configure the kubelet and kube-proxy

for instance in worker-0 worker-1 worker-2; do

POD_CIDR=$(gcloud compute instances describe ${instance} --format 'value(metadata[items][0].value)')

gcloud compute ssh ${instance} --command "sudo mv ${instance}-key.pem ${instance}.pem /var/lib/kubelet/"
gcloud compute ssh ${instance} --command "sudo mv ${instance}.kubeconfig /var/lib/kubelet/kubeconfig"
gcloud compute ssh ${instance} --command "sudo mv ca.pem /var/lib/kubernetes/"

cat > kubelet.service.${instance} <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --anonymous-auth=false \\
  --authorization-mode=Webhook \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --cloud-provider= \\
  --cluster-dns=10.32.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --pod-cidr=${POD_CIDR} \\
  --register-node=true \\
  --runtime-request-timeout=15m \\
  --tls-cert-file=/var/lib/kubelet/${instance}.pem \\
  --tls-private-key-file=/var/lib/kubelet/${instance}-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

gcloud compute scp kubelet.service.${instance}  ${instance}:~/kubelet.service
gcloud compute ssh ${instance} --command "sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig"

cat > kube-proxy.service.${instance} <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --cluster-cidr=10.200.0.0/16 \\
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

gcloud compute scp kube-proxy.service.${instance} ${instance}:~/kube-proxy.service
gcloud compute ssh ${instance} --command "sudo mv kubelet.service kube-proxy.service /etc/systemd/system/"

done

Start the worker services

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl daemon-reload"

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl enable containerd cri-containerd kubelet kube-proxy"

ansible k8sworkers -i k8shardway.yml --private-key ~/.ssh/wallnerryan -m shell -b -a "systemctl start containerd cri-containerd kubelet kube-proxy"

verify

gcloud compute ssh controller-0 --command "kubectl get nodes"
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    1m        v1.9.0
worker-1   Ready     <none>    1m        v1.9.0
worker-2   Ready     <none>    1m        v1.9.0

Configuring kubectl for Remote Access

Generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.

cd cfssl/

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way

verify

kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   

kubectl get no
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    6m        v1.9.0
worker-1   Ready     <none>    6m        v1.9.0
worker-2   Ready     <none>    6m        v1.9.0

Provisioning Pod Network Routes

Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods cannot communicate with other pods running on different nodes due to missing network routes.

Create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.

Print the internal IP address and Pod CIDR range for each worker instance

for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done

Create network routes for each worker instance

for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done

verify the routes

gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-0edad0fe2b86100a  kubernetes-the-hard-way  10.240.0.0/24                            1000
default-route-6002e0aa4af8ff93  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000

Deploying the DNS Cluster Add-on

kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-6c857864fb-vthjj   3/3       Running   0          27s

test dns

kubectl run busybox --image=busybox --command -- sleep 3600

kubectl get pods -l run=busybox

POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")

kubectl exec -ti $POD_NAME -- nslookup kubernetes

output

Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local

Test a service

kubectl run nginx --image=nginx

POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")

kubectl port-forward $POD_NAME 8080:80

curl --head http://127.0.0.1:8080 

kubectl expose deployment nginx --port 80 --type NodePort
service "nginx" exposed

NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')

echo $NODE_PORT

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way

Retrieve the external IP address of a worker instance:

EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
#HTTP/1.1 200 OK
#Server: nginx/1.13.9
#Date: Wed, 14 Mar 2018 00:35:07 GMT
#Content-Type: text/html
#Content-Length: 612
#Last-Modified: Tue, 20 Feb 2018 12:21:20 GMT
#Connection: keep-alive
#ETag: "5a8c12c0-264"
#Accept-Ranges: bytes

Add Dashboard

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# then run kubectl proxy and access via http://localhost:8001/ui
# you will then be asked to provide bearer token or kubeconfig file for a user/admin
# use https://github.com/kubernetes/dashboard/wiki/Access-control#getting-token-with-kubectl

Or create a service account and use its token

cat > admin.user <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

cat > admin.clusterrolebinding <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl create -f admin.user
kubectl create -f admin.clusterrolebinding 

Use the token from the output of the command below

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Ingress

https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md

Add the mandatory configs

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
    | kubectl apply -f -

Apply the RBAC roles and deploy the controller with RBAC enabled

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
    | kubectl apply -f -

The GCP LoadBalancer service type didn't work here, so expose the ingress controller with a NodePort service, as if on bare metal

cat > nginx-ingress-svc.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30147
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 32267
    protocol: TCP
  selector:
    app: ingress-nginx
EOF

kubectl apply -f nginx-ingress-svc.yml
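
Confirm the service picked up the expected NodePorts (hedged check; the ClusterIP will differ):

kubectl get svc ingress-nginx -n ingress-nginx
# NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
# ingress-nginx   NodePort   10.32.0.x    <none>        80:30147/TCP,443:32267/TCP   1m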

Open firewall ports so the NodePorts are reachable

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-ingress-service \
  --allow=tcp:32267,tcp:30147 \
  --network kubernetes-the-hard-way

Deploy an Ingress rule for nginx

cat > nginx-ingress-host-rewrite.yml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-host-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
 rules:
  - host: foo.bar.com
    http:
     paths:
      - path: /fooatbar
        backend:
          serviceName: nginx
          servicePort: 80
EOF

kubectl create -f nginx-ingress-host-rewrite.yml

Test ingress is working

Find which worker it's running on

kubectl describe po nginx-ingress-controller-9c7b694-mgg2w -n ingress-nginx | grep Node
Node:           worker-1/10.240.0.21
Node-Selectors:  <none>

Get the worker's public IP

EXTERNAL_IP=$(gcloud compute instances describe worker-1 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
curl -v -H "Host: foo.bar.com" http://${EXTERNAL_IP}:30147/fooatbar

Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

Compute Instances

Delete the controller and worker compute instances:

gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2

Networking

Delete the external load balancer network resources:

gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool

Delete the kubernetes-the-hard-way static IP address:

gcloud -q compute addresses delete kubernetes-the-hard-way

Delete the kubernetes-the-hard-way firewall rules:

gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external

Delete the Pod network routes:

gcloud -q compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24

Delete the kubernetes subnet:

gcloud -q compute networks subnets delete kubernetes

Delete the kubernetes-the-hard-way network VPC:

gcloud -q compute networks delete kubernetes-the-hard-way