Kubernetes Setup for MacOS

Amazon AWS Kubernetes Setup

Amazon's managed Kubernetes service, EKS, uses kubectl together with the aws-iam-authenticator extension for cluster authentication. This gist assumes you already have the AWS command line tools and SDK installed and configured. The aws-iam-authenticator uses the same AWS credential provider chain shared by aws-cli and aws-sdk. To check the configured credentials, run

aws sts get-caller-identity
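
which should print your identity as a JSON object along these lines (the values here are placeholders):

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/YOUR_USER_NAME"
}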

Installation

Download and install kubectl, following the official Kubernetes download and install instructions

Download aws-iam-authenticator for Kubernetes

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/darwin/amd64/aws-iam-authenticator

Download the SHA-256 checksum file for aws-iam-authenticator

curl -o aws-iam-authenticator.sha256 https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/darwin/amd64/aws-iam-authenticator.sha256
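
It's worth verifying the binary against the checksum before running it; a quick manual check on MacOS (comparing the two hashes by eye) is

shasum -a 256 ./aws-iam-authenticator
cat ./aws-iam-authenticator.sha256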

Modify the execution permissions of the aws-iam-authenticator binary

chmod +x ./aws-iam-authenticator

Create a $HOME/bin directory and copy the binary to $HOME/bin/aws-iam-authenticator

mkdir $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH

Add the directory to your PATH variable permanently

echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile

And finally, test that aws-iam-authenticator works

aws-iam-authenticator help

Starting an EKS Cluster

I had trouble using the EKS management console together with the aws eks update-kubeconfig CLI tool. The management console apparently creates the cluster with the current user as super admin, yet EKS doesn't allow users with full admin access to manage clusters; in my circumstances this approach failed. Instead we will be doing things through the AWS-CLI and manually configuring our ~/.kube/config. To begin, let's create a new AWS EKS cluster using the aws-cli.

aws eks create-cluster --name CLUSTER_NAME --role-arn CLUSTER_ROLE_ARN --resources-vpc-config subnetIds=<comma-separated-subnet-ids>,securityGroupIds=<comma-separated-security-group-ids>
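
If you don't yet have a cluster service role to pass as CLUSTER_ROLE_ARN, a minimal sketch for creating one first looks like this; the role name eksClusterRole is just an example, while AmazonEKSClusterPolicy and AmazonEKSServicePolicy are the managed policies EKS expects:

aws iam create-role --role-name eksClusterRole --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy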

If everything works you should receive a JSON object containing the cluster information; meanwhile you'll have to wait around 10 minutes for the cluster to start. The command aws eks --region REGION describe-cluster --name CLUSTER_NAME --query cluster.status will output the boot status of the cluster, or you can visit the management console and continually hit refresh. The best idea is to just take your lunch break now.
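
Once the status reads ACTIVE, pull down the two values the kubeconfig in the next section needs, the API server endpoint and the certificate authority data:

aws eks describe-cluster --name CLUSTER_NAME --query cluster.endpoint --output text
aws eks describe-cluster --name CLUSTER_NAME --query cluster.certificateAuthority.data --output text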

Configuring Kubeconfig

Now create and edit your Kubeconfig file

nano ~/.kube/config

And insert the following

apiVersion: v1
clusters:
- cluster:
    server: API_SERVER_ENDPOINT
    certificate-authority-data: CERTIFICATE_AUTHORITY
  name: CLUSTER_NAME
contexts:
- context:
    cluster: CLUSTER_NAME
    user: SUPER_ADMIN_USER_NAME
  name: CLUSTER_NAME
current-context: CLUSTER_NAME
kind: Config
preferences: {}
users:
- name: SUPER_ADMIN_USER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"

You can now test your kube configuration

kubectl get service
kubectl get nodes

The second command should return nothing; that's because we don't have any nodes running yet. We will cover that next.
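
For reference, on a fresh cluster kubectl get service typically returns only the default kubernetes service, along these lines:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m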

Creating a worker stack

We need to create a worker node stack for our cluster. Navigate to CloudFormation and start a new stack using the following template.

https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml
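
If you'd rather stay on the command line, the same stack can be created with the aws-cli. This is a sketch only: the stack name and parameter values are placeholders, the parameter names are taken from the 2018-08-30 template above and may differ in other versions, and the template expects a few more parameters (such as the control plane security group) than shown here.

aws cloudformation create-stack \
  --stack-name CLUSTER_NAME-workers \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=ClusterName,ParameterValue=CLUSTER_NAME \
               ParameterKey=NodeGroupName,ParameterValue=CLUSTER_NAME-workers \
               ParameterKey=KeyName,ParameterValue=YOUR_EC2_KEY_PAIR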

Configuration Map

First we will check to see if a configuration map has already been applied, where aws-auth is the name of the config map we define in the next step.

kubectl describe configmap -n kube-system aws-auth

Now we will define a configuration map for our cluster. Create a file named auth-aws.yml for the configuration map and insert the following.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
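
The rolearn value above is the instance role created by the worker stack; one way to find it is to list the stack outputs and look for the NodeInstanceRole key (substitute your own stack name):

aws cloudformation describe-stacks --stack-name CLUSTER_NAME-workers --query "Stacks[0].Outputs" --output table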

Now we can apply our config map

kubectl apply -f auth-aws.yml

Finally, watch the status of the nodes until they are ready

kubectl get nodes --watch

Installing Helm

We will start by creating a service account for Helm called tiller

kubectl create serviceaccount --namespace kube-system tiller

Next we need to grant tiller permission to modify the cluster

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

And then we will initialize Helm

helm init --service-account tiller
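
To confirm everything deployed, check that the tiller pod starts and that helm can talk to it; the label selector below assumes the default tiller deployment labels:

kubectl get pods -n kube-system -l app=helm
helm version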