@jgreat
Last active October 14, 2019 19:37
Rancher install with helm

Prerequisites

This guide describes installation with:

  • rke
  • kubectl
  • helm

RKE

VMs

Use your provider of choice to create 3 VMs. These will be the targets for RKE and the nodes of your Kubernetes cluster.
RKE will need SSH access to these VMs, and the VMs will need various ports open.

VM Requirements

  • Linux OS
  • SSH
  • Docker 17.03.2
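Rancher publishes version-pinned Docker install scripts that can be used to provision each VM. A minimal sketch, assuming Ubuntu hosts and a sudo-capable `ubuntu` user:

```shell
# Sketch: install the supported Docker version on each VM using
# Rancher's version-pinned install script.
curl https://releases.rancher.com/install-docker/17.03.sh | sh

# Let the SSH user run docker commands without sudo (required by RKE).
sudo usermod -aG docker ubuntu
```

Log out and back in for the group change to take effect.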

Ports

Open to the 'Public' Network

ports           description
22              SSH for RKE install
80              redirect to https, or accept with proper proxy headers
443             https traffic to rancher server
6443            https to kube-api, used by kubectl and helm
30000 - 32767   Kubernetes NodePorts for k8s workloads

Additional Ports Required Between Nodes

ports           description
2379 - 2380     etcd
8472            canal networking
9009            canal networking
10250 - 10256   kubelet
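As a concrete illustration, the inter-node rules above could be applied with ufw. A hedged sketch, assuming Ubuntu hosts and a 10.10.0.0/24 internal subnet (a placeholder; substitute your own):

```shell
# Sketch: allow the inter-node ports only from the internal subnet.
sudo ufw allow from 10.10.0.0/24 to any port 2379:2380 proto tcp    # etcd
sudo ufw allow from 10.10.0.0/24 to any port 8472 proto udp         # canal networking
sudo ufw allow from 10.10.0.0/24 to any port 9009 proto tcp         # canal networking
sudo ufw allow from 10.10.0.0/24 to any port 10250:10256 proto tcp  # kubelet
```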

Configure RKE

Populate the DNS/IP address for each node.

Create a cluster.yml File

Using the sample below, create a cluster.yml file. Replace the IP addresses in the nodes list with the IP addresses or DNS names of the 3 VMs you created.

nodes:
  - address: 165.227.114.63
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 165.227.116.167
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 165.227.127.226
    user: ubuntu
    role: [controlplane,worker,etcd]
    # internal_address: 10.10.0.1
    # ssh_key_path: /home/user/.ssh/id_rsa

Common Additional Options

option             description
user               a user that can run docker commands
ssh_key_path       path to your ssh private key
internal_address   address for internal cluster traffic
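Putting those options together, a single node entry might look like this (addresses and paths are placeholders):

```yaml
nodes:
  - address: 165.227.114.63               # public address RKE connects to over SSH
    internal_address: 10.10.0.1           # address for internal cluster traffic
    user: ubuntu                          # a user that can run docker commands
    ssh_key_path: /home/user/.ssh/id_rsa  # path to your ssh private key
    role: [controlplane,worker,etcd]
```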

Run RKE

rke up --config ./cluster.yml

Testing your cluster

rke should have created a file kube_config_cluster.yml. This file has the credentials for kubectl and helm.

You can copy this file to $HOME/.kube/config, or if you are working with multiple Kubernetes clusters, set the KUBECONFIG environment variable to the path of kube_config_cluster.yml.

export KUBECONFIG=$(pwd)/kube_config_cluster.yml

Test your connectivity with kubectl and see if you can get the list of nodes back.

kubectl get nodes

NAME                          STATUS    ROLES                      AGE       VERSION
165.227.114.63                Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.116.167               Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.127.226               Ready     controlplane,etcd,worker   11m       v1.10.1

Helm

helm is the package management tool of choice for Kubernetes. helm charts provide templating syntax for Kubernetes YAML manifest documents. With helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at https://helm.sh/
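To give a feel for the templating syntax, here is a minimal, hypothetical chart fragment: values in values.yaml supply defaults that the manifest templates reference at install time (names and values here are illustrative, not from any real chart):

```yaml
# values.yaml -- hypothetical chart defaults, overridable with --set
replicaCount: 3
image: rancher/rancher:latest

# templates/deployment.yaml (fragment) -- helm substitutes the
# {{ .Values.* }} expressions when rendering the manifest:
#
# spec:
#   replicas: {{ .Values.replicaCount }}
#   template:
#     spec:
#       containers:
#         - image: {{ .Values.image }}
```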

Initialize Helm on your Cluster

helm installs the tiller service on your cluster to manage chart deployments. Since rke has RBAC enabled by default we will need to use kubectl to create a serviceaccount and clusterrolebinding so tiller can deploy to our cluster for us.

  • Create the ServiceAccount in the kube-system namespace.
  • Create the ClusterRoleBinding to give the tiller account access to the cluster.
  • Finally, use helm to initialize the tiller service.

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

NOTE: This tiller install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the helm docs for restricting tiller access to suit your security requirements.
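As one illustration of the restricted approach described in the helm docs, tiller can be bound to a Role in a single namespace instead of cluster-admin. A sketch only; the resource names are hypothetical and the rules should be narrowed to your needs:

```yaml
# Namespace-scoped Role instead of the cluster-admin binding above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager          # hypothetical name
  namespace: cattle-system      # namespace tiller may deploy into
rules:
  - apiGroups: ["", "batch", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding          # hypothetical name
  namespace: cattle-system
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```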

See https://github.com/rancher/server-chart for details.

Testing the chart

  1. helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  2. helm install stable/cert-manager --name cert-manager --namespace kube-system
  3. helm install rancher-stable/rancher --name rancher --namespace cattle-system ...
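The options elided in step 3 vary by scenario; as a sketch, a typical invocation sets the hostname the server will answer on (rancher.example.com is a placeholder -- check the chart README for the full set of values):

```shell
# Sketch: install the rancher chart with a placeholder hostname.
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```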

Scenarios to test

Most of the options are around SSL certificates.

  • Rancher Generated CA Certificates
  • LetsEncrypt
  • BYO Certs - Public CA Signed
  • BYO Certs - Private CA Signed
  • External Terminated - SSL on ALB or other Load Balancer, Public CA Signed.
  • External Terminated - SSL on ALB or other Load Balancer, Private CA Signed.
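For the externally terminated scenarios, the chart exposes a tls value to tell rancher that SSL terminates upstream. A hedged sketch (the hostname is a placeholder; verify the flags against the chart README for your version):

```shell
# Sketch: TLS terminates on an external load balancer (ALB etc.),
# so the chart is told not to expect TLS on the ingress itself.
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set tls=external
```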

There are issues with supporting External Terminated clusters that are worth discussing.

This requires opening port 80/http to the world. Since all the traffic reaches the ingress over http, the ingress will very helpfully attach the proxy headers, so rancher server will respond on port 80 and everything travels in the clear.
