eksctl create cluster --name fargate --region us-east-1 --version 1.14 --fargate
[ℹ] eksctl version 0.11.1
[ℹ] using region us-east-1
[ℹ] setting availability zones to [us-east-1c us-east-1d]
[ℹ] subnets for us-east-1c - public:192.168.0.0/19 private:192.168.64.0/19
[ℹ] subnets for us-east-1d - public:192.168.32.0/19 private:192.168.96.0/19
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "fargate" in "us-east-1" region with Fargate profile
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=fargate'
[ℹ] CloudWatch logging will not be enabled for cluster "fargate" in "us-east-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --cluster=fargate'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "fargate" in "us-east-1"
[ℹ] 1 task: { create cluster control plane "fargate" }
[ℹ] building cluster stack "eksctl-fargate-cluster"
[ℹ] deploying stack "eksctl-fargate-cluster"
[✔] all EKS cluster resources for "fargate" have been created
[✔] saved kubeconfig as "/Users/argu/.kube/config"
[ℹ] creating Fargate profile "fp-default" on EKS cluster "fargate"
[ℹ] created Fargate profile "fp-default" on EKS cluster "fargate"
[ℹ] "coredns" is now schedulable onto Fargate
[ℹ] "coredns" is now scheduled onto Fargate
[ℹ] "coredns" pods are now scheduled onto Fargate
[ℹ] kubectl command should work with "/Users/argu/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "fargate" in "us-east-1" region is ready
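The same cluster can also be described declaratively in an eksctl config file and created with 'eksctl create cluster -f cluster.yaml'. A minimal sketch, with field values assumed to match the command above (the --fargate flag creates a default profile covering the default and kube-system namespaces):

```yaml
# cluster.yaml -- declarative equivalent of the eksctl command above (sketch)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fargate
  region: us-east-1
  version: "1.14"
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system
```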
-
Create IAM OIDC provider and associate with your cluster:
$ eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster fargate --approve
[ℹ] eksctl version 0.11.1
[ℹ] using region us-east-1
[ℹ] will create IAM Open ID Connect provider for cluster "fargate" in "us-east-1"
[✔] created IAM Open ID Connect provider for cluster "fargate" in "us-east-1"
-
Create a cluster role, cluster role binding, and service account for the ALB Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
-
Create an IAM policy called ALBIngressControllerIAMPolicy that allows the ALB Ingress Controller to make calls to AWS APIs on your behalf. The --policy-document flag takes a JSON string or a local file, not a URL, so download the policy document first:

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json

aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json

Take note of the policy ARN that is returned.
-
Create a service account for the ALB ingress controller and attach the policy to the service account:
eksctl create iamserviceaccount \
  --region us-east-1 \
  --name alb-ingress-controller \
  --namespace kube-system \
  --cluster fargate \
  --override-existing-serviceaccounts \
  --attach-policy-arn arn:aws:iam::<ACCOUNT-ID>:policy/ALBIngressControllerIAMPolicy \
  --approve
-
Deploy the ALB Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/alb-ingress-controller.yaml
-
Get the VPC ID of the cluster:
aws eks describe-cluster --name fargate --query "cluster.resourcesVpcConfig.vpcId" --output text
-
Edit the ALB Ingress Controller deployment:
kubectl edit deployment.apps/alb-ingress-controller -n kube-system
-
Configure ALB ingress controller with the VPC:
spec:
  containers:
  - args:
    - --ingress-class=alb
    - --cluster-name=fargate
    - --aws-vpc-id=vpc-01ee0094fa1aabd1e
    - --aws-region=us-east-1
-
Create deployment, service, and ingress:
$ kubectl create -f deployment.yaml
deployment.apps/web created
$ kubectl create -f service.yaml
service/service created
$ kubectl create -f ingress.yaml
ingress.extensions/nginx-ingress created
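The three manifests are not shown above; a minimal sketch consistent with the names in the output (the image, replica count, and paths are assumptions). Note that on Fargate the ingress must use target-type: ip, since there are no EC2 instance targets:

```yaml
# deployment.yaml -- "web" deployment; image and replica count are assumed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
---
# service.yaml -- exposes the deployment inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
---
# ingress.yaml -- ALB in front of the service; target-type ip is required on Fargate
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service
              servicePort: 80
```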
-
Get the ingress address:
$ kubectl get ingress
NAME            HOSTS   ADDRESS                                                                  PORTS   AGE
nginx-ingress   *       acdf8e79-default-nginxingr-29e9-1879877420.us-east-1.elb.amazonaws.com   80      10s
Access the endpoint.
-
Check pods:
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-5c7865dfb5-47nn4   1/1     Running   0          45m
web-5c7865dfb5-9stsd   1/1     Running   0          45m
web-5c7865dfb5-b885z   1/1     Running   0          45m
web-5c7865dfb5-ftcp8   1/1     Running   0          45m
web-5c7865dfb5-xpptz   1/1     Running   0          45m
-
Check nodes:
$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE    VERSION
fargate-ip-192-168-107-240.ec2.internal   Ready    <none>   44m    v1.14.8-eks
fargate-ip-192-168-118-191.ec2.internal   Ready    <none>   44m    v1.14.8-eks
fargate-ip-192-168-119-140.ec2.internal   Ready    <none>   44m    v1.14.8-eks
fargate-ip-192-168-119-148.ec2.internal   Ready    <none>   44m    v1.14.8-eks
fargate-ip-192-168-124-80.ec2.internal    Ready    <none>   44m    v1.14.8-eks
fargate-ip-192-168-72-33.ec2.internal     Ready    <none>   117m   v1.14.8-eks
fargate-ip-192-168-82-15.ec2.internal     Ready    <none>   15m    v1.14.8-eks
fargate-ip-192-168-86-76.ec2.internal     Ready    <none>   117m   v1.14.8-eks
-
Get logs:
kubectl logs -n kube-system deployment.apps/alb-ingress-controller