This guide describes installs for:
- rke
- kubectl
- helm
Use your provider of choice to create 3 VMs. These will be the targets for RKE and the nodes of your Kubernetes cluster.
RKE will need SSH access to these VMs. The VMs will need various ports open.
Each node needs:

- a Linux OS
- SSH
- Docker 17.03.2
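If Docker isn't installed yet, one option is Rancher's version-pinned install script; a minimal sketch, assuming Ubuntu VMs, `curl` available, and `ubuntu` as the SSH user:

```bash
# Run on each VM: installs Docker 17.03.x via Rancher's
# version-pinned install script.
curl https://releases.rancher.com/install-docker/17.03.sh | sh

# Let the SSH user run docker without sudo (ubuntu is an assumption;
# use whatever user RKE will connect as).
sudo usermod -aG docker ubuntu
```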
Inbound from the outside world:

Port | Description
---|---
22 | SSH for RKE install
80 | redirect to https or accept with proper proxy headers
443 | https traffic to rancher server
6443 | https to kube-api, used by kubectl and helm
30000-32767 | Kubernetes NodePorts for k8s workloads

Between the nodes:

Port | Description
---|---
2379-2380 | etcd
8472 | canal networking
9099 | canal networking
10250-10256 | kubelet
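How you open these depends on your provider; as a sketch, using ufw on Ubuntu with 10.10.0.0/24 standing in for your internal subnet:

```bash
# Inbound from anywhere (adjust to your provider's security
# groups if you use those instead of a host firewall).
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 6443/tcp
ufw allow 30000:32767/tcp

# Node-to-node traffic, restricted to the internal network
# (10.10.0.0/24 is a placeholder for your subnet).
ufw allow from 10.10.0.0/24 to any port 2379:2380 proto tcp
ufw allow from 10.10.0.0/24 to any port 8472 proto udp
ufw allow from 10.10.0.0/24 to any port 9099 proto tcp
ufw allow from 10.10.0.0/24 to any port 10250:10256 proto tcp

# Note: enabling ufw prompts for confirmation.
ufw enable
```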
Populate the DNS/IP address for each node. Using the sample below, create a `cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the 3 VMs you created.
```yaml
nodes:
- address: 165.227.114.63
  user: ubuntu
  role: [controlplane,worker,etcd]
- address: 165.227.116.167
  user: ubuntu
  role: [controlplane,worker,etcd]
- address: 165.227.127.226
  user: ubuntu
  role: [controlplane,worker,etcd]
# internal_address: 10.10.0.1
# ssh_key_path: /home/user/.ssh/id_rsa
```
Option | Description
---|---
`user` | a user that can run docker commands
`ssh_key_path` | path to your ssh private key
`internal_address` | address for internal cluster traffic
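Before running RKE, it can save a failed run to confirm that SSH access works and the user can run docker; a quick check using the sample values from `cluster.yml` above:

```bash
# RKE connects over SSH and runs docker, so this is exactly the
# access it needs. Addresses, user, and key path are the samples
# from cluster.yml; substitute your own.
for node in 165.227.114.63 165.227.116.167 165.227.127.226; do
  ssh -i /home/user/.ssh/id_rsa ubuntu@"$node" docker version
done
```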
Run RKE to build the cluster:

```bash
rke up --config ./cluster.yml
```
`rke` should have created a `kube_config_cluster.yml` file. This file has the credentials for `kubectl` and `helm`.

You can copy this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_cluster.yml`.
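For example, to make it your default config (assuming you run this from the directory containing the file):

```bash
mkdir -p $HOME/.kube
cp kube_config_cluster.yml $HOME/.kube/config
```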
Or set the environment variable:

```bash
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
Test your connectivity with `kubectl` and see if you can get the list of nodes back.
```bash
kubectl get nodes

NAME              STATUS    ROLES                      AGE       VERSION
165.227.114.63    Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.116.167   Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.127.226   Ready     controlplane,etcd,worker   11m       v1.10.1
```
`helm` is the package management tool of choice for Kubernetes. `helm` charts provide templating syntax for Kubernetes YAML manifest documents. With `helm` we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at https://helm.sh/.
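To give a flavor of that templating syntax, a chart template might parameterize a Deployment like this (a hypothetical snippet, not from any real chart):

```yaml
# templates/deployment.yaml -- replicaCount and image.* come from the
# chart's values.yaml and can be overridden with --set at install time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```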
`helm` installs the `tiller` service on your cluster to manage chart deployments. Since `rke` has RBAC enabled by default, we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` can deploy to our cluster for us.

- Create the `ServiceAccount` in the `kube-system` namespace.
- Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster.
- Use `helm` to initialize the `tiller` service.

```bash
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
```
NOTE: This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the helm docs for restricting `tiller` access to suit your security requirements.
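Before moving on, you can verify that `tiller` is up; `tiller-deploy` is the deployment name `helm init` creates by default:

```bash
# Waits for the tiller deployment to finish rolling out.
kubectl -n kube-system rollout status deploy/tiller-deploy

# Prints both client and server versions; a server version
# confirms helm can reach tiller.
helm version
```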
See https://github.com/rancher/server-chart for details.
Testing the chart. Add the chart repository, install cert-manager, then install Rancher:
```bash
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm install stable/cert-manager --name cert-manager --namespace kube-system
helm install rancher-stable/rancher --name rancher --namespace cattle-system ...
```
Most of the options are around SSL certificates.
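For reference, a minimal install might look like the sketch below; `hostname` is the chart option for the URL Rancher will answer on, and rancher.example.com is a placeholder:

```bash
# Minimal sketch: rancher.example.com stands in for your own DNS name.
# The default certificate setup relies on cert-manager, which is why
# it was installed above.
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```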
One issue with supporting externally terminated SSL deserves a mention: it requires opening port 80/http to the world. Since all the traffic reaches the ingress over http, the ingress will very helpfully attach the proxy headers, so Rancher server will respond on port 80 and everything is now in the clear.
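If you do terminate SSL externally, one mitigation is to accept port 80 only from your load balancer instead of the world; a sketch with ufw, using 203.0.113.10 as a stand-in for the load balancer address:

```bash
# Replace 203.0.113.10 with your load balancer's IP. Drop the
# earlier world-open rule, then allow only the LB to reach port 80.
ufw delete allow 80/tcp
ufw allow from 203.0.113.10 to any port 80 proto tcp
```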