
@allenmichael
Last active October 9, 2019 18:36
Gremlin GameDay Starter
AWSTemplateFormatVersion: "2010-09-09"
Description: >-
This template creates a Multi-AZ, multi-subnet VPC infrastructure with managed NAT
gateways in the public subnet for each Availability Zone. You can also create additional
private subnets with dedicated custom network access control lists (ACLs). If you
deploy the Quick Start in a region that doesn't support NAT gateways, NAT instances
are deployed instead. **WARNING** This template creates AWS resources. You will
be billed for the AWS resources used if you create a stack from this template. QS(0027)
Resources:
##### START VPC RESOURCES #####
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
InstanceTenancy: default
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: BelongsTo
Value: !Ref "AWS::StackName"
- Key: Name
Value: GremlinGameDay/Gremlin/DefaultVpc
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Ref "AWS::StackName"
VPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref "VPC"
InternetGatewayId: !Ref "InternetGateway"
PrivateSubnet1A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.0.0/19
AvailabilityZone: us-east-1a
PrivateSubnet2A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.32.0/19
AvailabilityZone: us-east-1b
PrivateSubnet3A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.64.0/19
AvailabilityZone: us-east-1c
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.96.0/19
AvailabilityZone: us-east-1a
MapPublicIpOnLaunch: true
PublicSubnet2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.128.0/19
AvailabilityZone: us-east-1b
MapPublicIpOnLaunch: true
PublicSubnet3:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.160.0/19
AvailabilityZone: us-east-1c
MapPublicIpOnLaunch: true
PrivateSubnet1ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 1A
- Key: Network
Value: Private
PrivateSubnet1ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet1ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway1"
PrivateSubnet1ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet1A"
RouteTableId: !Ref "PrivateSubnet1ARouteTable"
PrivateSubnet2ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 2A
- Key: Network
Value: Private
PrivateSubnet2ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet2ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway2"
PrivateSubnet2ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet2A"
RouteTableId: !Ref "PrivateSubnet2ARouteTable"
PrivateSubnet3ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 3A
- Key: Network
Value: Private
PrivateSubnet3ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet3ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway3"
PrivateSubnet3ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet3A"
RouteTableId: !Ref "PrivateSubnet3ARouteTable"
PublicSubnetRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Public Subnets
- Key: Network
Value: Public
PublicSubnetRoute:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PublicSubnetRouteTable"
DestinationCidrBlock: "0.0.0.0/0"
GatewayId: !Ref "InternetGateway"
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet1"
RouteTableId: !Ref "PublicSubnetRouteTable"
PublicSubnet2RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet2"
RouteTableId: !Ref "PublicSubnetRouteTable"
PublicSubnet3RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet3"
RouteTableId: !Ref "PublicSubnetRouteTable"
NAT1EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NAT2EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NAT3EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NATGateway1:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT1EIP.AllocationId"
SubnetId: !Ref "PublicSubnet1"
NATGateway2:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT2EIP.AllocationId"
SubnetId: !Ref "PublicSubnet2"
NATGateway3:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT3EIP.AllocationId"
SubnetId: !Ref "PublicSubnet3"
##### END VPC RESOURCES #####
##### START SECURITY GROUPS #####
ControlPlaneSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster communication
VpcId: !Ref "VPC"
NodeSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security group for all nodes in the node group
VpcId: !Ref "VPC"
NodeSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
Description: Allow nodes to communicate with each other
GroupId: !Ref NodeSecurityGroup
SourceSecurityGroupId: !Ref NodeSecurityGroup
IpProtocol: '-1'
FromPort: 0
ToPort: 65535
NodeSecurityGroupFromControlPlaneIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
GroupId: !Ref NodeSecurityGroup
SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
IpProtocol: tcp
FromPort: 1025
ToPort: 65535
ControlPlaneEgressToNodeSecurityGroup:
Type: AWS::EC2::SecurityGroupEgress
Properties:
Description: Allow the cluster control plane to communicate with worker Kubelet and pods
GroupId: !Ref ControlPlaneSecurityGroup
DestinationSecurityGroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
FromPort: 1025
ToPort: 65535
NodeSecurityGroupFromControlPlaneOn443Ingress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
GroupId: !Ref NodeSecurityGroup
SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
IpProtocol: tcp
FromPort: 443
ToPort: 443
ControlPlaneEgressToNodeSecurityGroupOn443:
Type: AWS::EC2::SecurityGroupEgress
Properties:
Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
GroupId: !Ref ControlPlaneSecurityGroup
DestinationSecurityGroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
FromPort: 443
ToPort: 443
ClusterControlPlaneSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
Description: Allow pods to communicate with the cluster API Server
GroupId: !Ref ControlPlaneSecurityGroup
SourceSecurityGroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
ToPort: 443
FromPort: 443
##### END SECURITY GROUPS #####
##### START IAM ROLES #####
ControlPlaneRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: eks.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
- arn:aws:iam::aws:policy/AmazonEKSServicePolicy
##### END IAM ROLES #####
##### START EKS RESOURCES #####
EKS:
Type: "AWS::EKS::Cluster"
Properties:
ResourcesVpcConfig:
SecurityGroupIds:
- !Ref ControlPlaneSecurityGroup
SubnetIds:
- !Ref PrivateSubnet1A
- !Ref PrivateSubnet2A
- !Ref PrivateSubnet3A
- !Ref PublicSubnet1
- !Ref PublicSubnet2
- !Ref PublicSubnet3
RoleArn: !GetAtt ControlPlaneRole.Arn
Version: "1.13"
NodeInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Path: "/"
Roles:
- !Ref NodeInstanceRole
NodeInstanceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- sts:AssumeRole
Path: "/"
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
NodeGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
DesiredCapacity: 3
LaunchConfigurationName: !Ref NodeLaunchConfig
MinSize: 3
MaxSize: 3
VPCZoneIdentifier:
- !Ref PrivateSubnet1A
- !Ref PrivateSubnet2A
- !Ref PrivateSubnet3A
CreationPolicy:
ResourceSignal:
Count: 3
Timeout: PT15M
UpdatePolicy:
AutoScalingRollingUpdate:
MinInstancesInService: 1
MaxBatchSize: 1
WaitOnResourceSignals : true
PauseTime: PT15M
NodeLaunchConfig:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
ImageId: ami-08198f90fe8bc57f0
InstanceType: m5.large
IamInstanceProfile:
Ref: NodeInstanceProfile
SecurityGroups:
- !Ref NodeSecurityGroup
UserData:
Fn::Base64:
!Sub |
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${EKS}
/opt/aws/bin/cfn-signal --exit-code $? \
--stack ${AWS::StackName} \
--resource NodeGroup \
--region ${AWS::Region}
Outputs:
NAT1EIP:
Description: NAT 1 IP address
Value: !Ref "NAT1EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT1EIP"
NAT2EIP:
Description: NAT 2 IP address
Value: !Ref "NAT2EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT2EIP"
NAT3EIP:
Description: NAT 3 IP address
Value: !Ref "NAT3EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT3EIP"
PrivateSubnet1AID:
Description: Private subnet 1A ID in Availability Zone 1
Value: !Ref "PrivateSubnet1A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet1AID"
PrivateSubnet2AID:
Description: Private subnet 2A ID in Availability Zone 2
Value: !Ref "PrivateSubnet2A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet2AID"
PrivateSubnet3AID:
Description: Private subnet 3A ID in Availability Zone 3
Value: !Ref "PrivateSubnet3A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet3AID"
PublicSubnet1ID:
Description: Public subnet 1 ID in Availability Zone 1
Value: !Ref "PublicSubnet1"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet1ID"
PublicSubnet2ID:
Description: Public subnet 2 ID in Availability Zone 2
Value: !Ref "PublicSubnet2"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet2ID"
PublicSubnet3ID:
Description: Public subnet 3 ID in Availability Zone 3
Value: !Ref "PublicSubnet3"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet3ID"
PrivateSubnet1ARouteTable:
Value: !Ref "PrivateSubnet1ARouteTable"
Description: Private subnet 1A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet1ARouteTable"
PrivateSubnet2ARouteTable:
Value: !Ref "PrivateSubnet2ARouteTable"
Description: Private subnet 2A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet2ARouteTable"
PrivateSubnet3ARouteTable:
Value: !Ref "PrivateSubnet3ARouteTable"
Description: Private subnet 3A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet3ARouteTable"
PublicSubnetRouteTable:
Value: !Ref "PublicSubnetRouteTable"
Description: Public subnet route table
Export:
Name: !Sub "${AWS::StackName}-PublicSubnetRouteTable"
VPCID:
Value: !Ref "VPC"
Description: VPC ID
Export:
Name: !Sub "${AWS::StackName}-VPCID"
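The template above carves its 10.0.0.0/16 VPC into six /19 subnets, three private and three public, one per Availability Zone. As a sanity check on the addressing scheme, a minimal sketch using Python's `ipaddress` module reproduces the CIDR blocks declared in the subnet resources:

```python
import ipaddress

# The VPC CIDR from the template, split into /19 blocks; the template
# assigns them in order: three private subnets, then three public ones.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=19))[:6]
names = ["PrivateSubnet1A", "PrivateSubnet2A", "PrivateSubnet3A",
         "PublicSubnet1", "PublicSubnet2", "PublicSubnet3"]
layout = {name: str(net) for name, net in zip(names, subnets)}
print(layout)
```

Each /19 holds 8,192 addresses, and the six blocks match the `CidrBlock` values in the subnet resources exactly (10.0.0.0/19 through 10.0.160.0/19), leaving two /19 blocks of the VPC unused.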
import json
import logging
import os
import subprocess

from crhelper import CfnResource

logger = logging.getLogger(__name__)
helper = CfnResource(json_logging=True, log_level='DEBUG')

outdir = '/tmp'
manifest_path = '/tmp'

# kubectl and the AWS CLI are provided by Lambda layers
os.environ['PATH'] = '/opt/kubectl:/opt/awscli:' + os.environ['PATH']

cluster_name = os.environ.get('CLUSTER_NAME', None)
role_arn = os.environ.get('ROLE_ARN', None)
kubeconfig = os.path.join(outdir, 'kubeconfig')

def create_kubeconfig():
    subprocess.check_call(['aws', 'eks', 'update-kubeconfig',
                           '--name', cluster_name,
                           '--kubeconfig', kubeconfig])

def get_config_details(event):
    return event['ResourceProperties']['Urls']

@helper.create
@helper.update
def create_handler(event, _):
    print('Received event: %s' % json.dumps(event))
    urls = get_config_details(event)
    create_kubeconfig()
    try:
        cmnd = ['kubectl', 'get', 'no', '--kubeconfig', kubeconfig]
        output = subprocess.check_output(cmnd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as exc:
        raise Exception(exc.output)
    else:
        logger.info(output)
    for u in urls:
        kubectl('apply', u)
    return event['RequestId']

def kubectl(verb, file):
    cmnd = ['kubectl', verb, '--kubeconfig', kubeconfig, '-f', file]
    try:
        output = subprocess.check_output(cmnd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as exc:
        raise Exception(exc.output)
    logger.info(output)
    return output

def lambda_handler(event, context):
    helper(event, context)
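The handler above pulls its manifest URLs out of the custom-resource event via `get_config_details`. A hypothetical event sketch, showing only the fields the handler reads (the `Urls` property mirrors what the `KubeApply` resource in the template passes; the URL here is a placeholder):

```python
# Hypothetical CloudFormation custom-resource event, trimmed to the
# fields create_handler() actually consumes.
event = {
    "RequestType": "Create",
    "RequestId": "example-request-id",
    "ResourceProperties": {
        "Urls": ["https://example.com/manifests/nginx.yaml"],
    },
}

def get_config_details(event):
    # Same accessor as in the handler above.
    return event['ResourceProperties']['Urls']

urls = get_config_details(event)
print(urls)  # → ['https://example.com/manifests/nginx.yaml']
```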
from botocore.vendored import requests
import botocore.session
import json
import logging
import os
import subprocess
import sys
from crhelper import CfnResource

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
helper = CfnResource(json_logging=True, log_level='DEBUG')

CFN_SUCCESS = "SUCCESS"
CFN_FAILED = "FAILED"

# these are coming from the kubectl layer
os.environ['PATH'] = '/opt/kubectl:/opt/awscli:' + os.environ['PATH']
outdir = os.environ.get('TEST_OUTDIR', '/tmp')
kubeconfig = os.path.join(outdir, 'kubeconfig')

def lambda_handler(event, context):
    helper(event, context)
@helper.poll_delete
def delete(event, context):
    # delete is a special case: the physical resource ID is the cluster name
    session = botocore.session.get_session()
    eks = session.create_client('eks')
    props = event['ResourceProperties']
    physical_id = event.get('PhysicalResourceId', None)
    config = props['Config']
    if physical_id:
        cluster_name = physical_id
    else:
        raise Exception("unexpected error. cannot determine cluster name")
    config['name'] = cluster_name
    logger.info("request: %s" % config)
    logger.info('deleting cluster')
    eks.delete_cluster(name=cluster_name)
    logger.info('waiting for cluster to be deleted...')
    waiter = eks.get_waiter('cluster_deleted')
    waiter.wait(name=cluster_name)
    cfn_send(event, context, CFN_SUCCESS,
             physicalResourceId=cluster_name)
    return
@helper.poll_create
@helper.poll_update
def poll_create_update(event, context):
    def cfn_error(message=None):
        logger.error("| cfn_error: %s" % message)
        cfn_send(event, context, CFN_FAILED, reason=message)
    try:
        logger.info(json.dumps(event))
        request_id = event['RequestId']  # used to generate cluster name
        request_type = event['RequestType']
        props = event['ResourceProperties']
        config = props['Config']
        logger.info(json.dumps(config))
        session = botocore.session.get_session()
        eks = session.create_client('eks')
        cluster_name = f"{config.get('name', 'EKS')}{request_id}"
        config['name'] = cluster_name
        logger.info("request: %s" % config)
        if request_type == 'Create':
            logger.info("creating cluster %s" % cluster_name)
            try:
                resp = eks.create_cluster(**config)
                logger.info("create response: %s" % resp)
            except Exception as e:
                logger.error('Failed at creating cluster, moving on...')
                logger.error(e)
        elif request_type == 'Update':
            logger.info("updating cluster %s" % cluster_name)
            resp = eks.update_cluster_config(**config)
            logger.info("update response: %s" % resp)
        else:
            raise Exception("Invalid request type %s" % request_type)
        # wait for the cluster to become active (13min timeout)
        logger.info('waiting for cluster to become active...')
        waiter = eks.get_waiter('cluster_active')
        waiter.wait(name=cluster_name, WaiterConfig={
            'Delay': 30,
            'MaxAttempts': 26
        })
        resp = eks.describe_cluster(name=cluster_name)
        logger.info("describe response: %s" % resp)
        attrs = {
            'Name': cluster_name,
            'Endpoint': resp['cluster']['endpoint'],
            'Arn': resp['cluster']['arn'],
            'CertificateAuthorityData': resp['cluster']['certificateAuthority']['data']
        }
        logger.info("attributes: %s" % attrs)
        cfn_send(event, context, CFN_SUCCESS, responseData=attrs,
                 physicalResourceId=cluster_name)
    except botocore.exceptions.WaiterError as e:
        logger.exception(e)
        return None
    except KeyError as e:
        cfn_error("invalid request. Missing '%s'" % str(e))
    except Exception as e:
        logger.exception(e)
        cfn_error(str(e))
def cfn_send(event, context, responseStatus, responseData={}, physicalResourceId=None, noEcho=False, reason=None):
    responseUrl = event['ResponseURL']
    logger.info(responseUrl)
    responseBody = {
        'Status': responseStatus,
        'Reason': reason or (
            'See the details in CloudWatch Log Stream: ' + context.log_stream_name),
        'PhysicalResourceId': physicalResourceId or context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'NoEcho': noEcho,
        'Data': responseData
    }
    body = json.dumps(responseBody)
    logger.info("| response body:\n" + body)
    headers = {
        'content-type': '',
        'content-length': str(len(body))
    }
    try:
        response = requests.put(responseUrl, data=body, headers=headers)
        logger.info("| status: " + response.reason)
    except Exception as e:
        logger.error("| unable to send response to CloudFormation")
        logger.exception(e)
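`cfn_send` above reports back to CloudFormation by PUTting a JSON body to the pre-signed `ResponseURL`. A standalone sketch of that body, using hypothetical event and log-stream values, shows the keys the custom-resource response contract expects:

```python
import json

# Hypothetical values; in the Lambda these come from the event and context.
event = {
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/demo/abc",
    "RequestId": "example-request-id",
    "LogicalResourceId": "KubeCreate",
}
log_stream_name = "2019/10/09/[$LATEST]example"

# Mirrors the body cfn_send() assembles for a successful create.
response_body = json.dumps({
    "Status": "SUCCESS",
    "Reason": "See the details in CloudWatch Log Stream: " + log_stream_name,
    "PhysicalResourceId": "GremlinGameDay" + event["RequestId"],
    "StackId": event["StackId"],
    "RequestId": event["RequestId"],
    "LogicalResourceId": event["LogicalResourceId"],
    "NoEcho": False,
    "Data": {"Name": "GremlinGameDay" + event["RequestId"]},
})
parsed = json.loads(response_body)
print(parsed["Status"])  # → SUCCESS
```

CloudFormation matches the `StackId`, `RequestId`, and `LogicalResourceId` in this body against the original request, so those three must be echoed back unchanged.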
AWSTemplateFormatVersion: "2010-09-09"
Description: >-
EKS for us-east-1 with Kubernetes Object deployment support.
Resources:
##### START VPC RESOURCES #####
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
InstanceTenancy: default
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: BelongsTo
Value: !Ref "AWS::StackName"
- Key: Name
Value: GremlinGameDay/Gremlin/DefaultVpc
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Ref "AWS::StackName"
VPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref "VPC"
InternetGatewayId: !Ref "InternetGateway"
PrivateSubnet1A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.0.0/19
AvailabilityZone: us-east-1a
Tags:
- Key: kubernetes.io/role/internal-elb
Value: 1
PrivateSubnet2A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.32.0/19
AvailabilityZone: us-east-1b
Tags:
- Key: kubernetes.io/role/internal-elb
Value: 1
PrivateSubnet3A:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.64.0/19
AvailabilityZone: us-east-1c
Tags:
- Key: kubernetes.io/role/internal-elb
Value: 1
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.96.0/19
AvailabilityZone: us-east-1a
MapPublicIpOnLaunch: true
Tags:
- Key: kubernetes.io/role/elb
Value: 1
PublicSubnet2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.128.0/19
AvailabilityZone: us-east-1b
MapPublicIpOnLaunch: true
Tags:
- Key: kubernetes.io/role/elb
Value: 1
PublicSubnet3:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref "VPC"
CidrBlock: 10.0.160.0/19
AvailabilityZone: us-east-1c
MapPublicIpOnLaunch: true
Tags:
- Key: kubernetes.io/role/elb
Value: 1
PrivateSubnet1ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 1A
- Key: Network
Value: Private
PrivateSubnet1ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet1ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway1"
PrivateSubnet1ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet1A"
RouteTableId: !Ref "PrivateSubnet1ARouteTable"
PrivateSubnet2ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 2A
- Key: Network
Value: Private
PrivateSubnet2ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet2ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway2"
PrivateSubnet2ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet2A"
RouteTableId: !Ref "PrivateSubnet2ARouteTable"
PrivateSubnet3ARouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Private subnet 3A
- Key: Network
Value: Private
PrivateSubnet3ARoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PrivateSubnet3ARouteTable"
DestinationCidrBlock: "0.0.0.0/0"
NatGatewayId: !Ref "NATGateway3"
PrivateSubnet3ARouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PrivateSubnet3A"
RouteTableId: !Ref "PrivateSubnet3ARouteTable"
PublicSubnetRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref "VPC"
Tags:
- Key: Name
Value: Public Subnets
- Key: Network
Value: Public
PublicSubnetRoute:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref "PublicSubnetRouteTable"
DestinationCidrBlock: "0.0.0.0/0"
GatewayId: !Ref "InternetGateway"
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet1"
RouteTableId: !Ref "PublicSubnetRouteTable"
PublicSubnet2RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet2"
RouteTableId: !Ref "PublicSubnetRouteTable"
PublicSubnet3RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref "PublicSubnet3"
RouteTableId: !Ref "PublicSubnetRouteTable"
NAT1EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NAT2EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NAT3EIP:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::EIP
Properties:
Domain: vpc
NATGateway1:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT1EIP.AllocationId"
SubnetId: !Ref "PublicSubnet1"
NATGateway2:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT2EIP.AllocationId"
SubnetId: !Ref "PublicSubnet2"
NATGateway3:
DependsOn: VPCGatewayAttachment
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt "NAT3EIP.AllocationId"
SubnetId: !Ref "PublicSubnet3"
##### END VPC RESOURCES #####
##### START SECURITY GROUPS #####
ClusterControlPlaneSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster communication
VpcId: !Ref "VPC"
NodeSecurityGroup:
Type: "AWS::EC2::SecurityGroup"
Properties:
GroupDescription: Security group for all nodes in the cluster
Tags:
- Key:
Fn::Sub:
- kubernetes.io/cluster/${KubeName}
- KubeName: !GetAtt KubeCreate.Name
Value: owned
VpcId: !Ref "VPC"
NodeSecurityGroupIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow node to communicate with each other
FromPort: 0
GroupId: !Ref NodeSecurityGroup
IpProtocol: "-1"
SourceSecurityGroupId: !Ref NodeSecurityGroup
ToPort: 65535
ClusterControlPlaneSecurityGroupIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow pods to communicate with the cluster API Server
FromPort: 443
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref NodeSecurityGroup
ToPort: 443
NodeSecurityGroupFromControlPlaneIngress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
FromPort: 1025
GroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
ToPort: 65535
NodeSecurityGroupFromControlPlaneOn443Ingress:
Type: "AWS::EC2::SecurityGroupIngress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
FromPort: 443
GroupId: !Ref NodeSecurityGroup
IpProtocol: tcp
SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
ToPort: 443
ControlPlaneEgressToNodeSecurityGroup:
Type: "AWS::EC2::SecurityGroupEgress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow the cluster control plane to communicate with worker Kubelet and pods
DestinationSecurityGroupId: !Ref NodeSecurityGroup
FromPort: 1025
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
ToPort: 65535
ControlPlaneEgressToNodeSecurityGroupOn443:
Type: "AWS::EC2::SecurityGroupEgress"
DependsOn: NodeSecurityGroup
Properties:
Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
DestinationSecurityGroupId: !Ref NodeSecurityGroup
FromPort: 443
GroupId: !Ref ClusterControlPlaneSecurityGroup
IpProtocol: tcp
ToPort: 443
##### END SECURITY GROUPS #####
##### START IAM ROLES #####
ControlPlaneProvisionRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: eksStackPolicy
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- cloudformation:*
- eks:*
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- lambda:InvokeFunction
Resource: "*"
- Effect: Allow
Action:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
- ec2:CreateNetworkInterface
- ec2:DescribeNetworkInterfaces
- ec2:DeleteNetworkInterface
Resource:
- "*"
- Action: "kms:decrypt"
Effect: Allow
Resource: "*"
- Effect: Allow
Action:
- lambda:AddPermission
- lambda:RemovePermission
Resource: "*"
- Effect: Allow
Action:
- events:PutRule
- events:DeleteRule
- events:PutTargets
- events:RemoveTargets
Resource: "*"
ControlPlaneRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: eks.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
- arn:aws:iam::aws:policy/AmazonEKSServicePolicy
ControlPlanePassRole:
Type: "AWS::IAM::Policy"
Properties:
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action: iam:PassRole
Resource: !GetAtt ControlPlaneRole.Arn
PolicyName: !Sub "${AWS::StackName}-ControlPlanePassRole"
Roles: [!Ref ControlPlaneProvisionRole]
# KubeApplyRole:
# Type: AWS::IAM::Role
# Properties:
# AssumeRolePolicyDocument:
# Statement:
# - Action: ["sts:AssumeRole"]
# Effect: Allow
# Principal:
# Service: [lambda.amazonaws.com]
# Version: "2012-10-17"
# Path: /
# ManagedPolicyArns:
# - Fn::Join:
# - ""
# - - "arn:"
# - Ref: AWS::Partition
# - :iam::aws:policy/AmazonEKSClusterPolicy
# - Fn::Join:
# - ""
# - - "arn:"
# - Ref: AWS::Partition
# - :iam::aws:policy/AmazonEKSServicePolicy
# Policies:
# - PolicyName: LambdaRole
# PolicyDocument:
# Version: "2012-10-17"
# Statement:
# - Action:
# - eks:CreateCluster
# - eks:DescribeCluster
# - eks:DeleteCluster
# Effect: Allow
# Resource: "*"
# - Action:
# - "logs:CreateLogGroup"
# - "logs:CreateLogStream"
# - "logs:PutLogEvents"
# Effect: Allow
# Resource: "arn:aws:logs:*:*:*"
##### END IAM ROLES #####
##### START EKS RESOURCES #####
# EKS:
# Type: "AWS::EKS::Cluster"
# Properties:
# ResourcesVpcConfig:
# SecurityGroupIds:
# - !Ref ControlPlaneSecurityGroup
# SubnetIds:
# - !Ref PrivateSubnet1A
# - !Ref PrivateSubnet2A
# - !Ref PrivateSubnet3A
# - !Ref PublicSubnet1
# - !Ref PublicSubnet2
# - !Ref PublicSubnet3
# RoleArn: !GetAtt ControlPlaneRole.Arn
# Version: "1.13"
NodeInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Path: "/"
Roles:
- !Ref NodeInstanceRole
NodeInstanceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- sts:AssumeRole
Path: "/"
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
NodeGroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
DesiredCapacity: "3"
LaunchTemplate:
LaunchTemplateName: !Sub "${AWS::StackName}"
Version: !GetAtt "NodeGroupLaunchTemplate.LatestVersionNumber"
MaxSize: "3"
MinSize: "3"
Tags:
- Key: Name
PropagateAtLaunch: "true"
Value: !Sub "${AWS::StackName}-ng"
- Key:
Fn::Sub:
- kubernetes.io/cluster/${KubeName}
- KubeName: !GetAtt KubeCreate.Name
PropagateAtLaunch: "true"
Value: owned
VPCZoneIdentifier:
- !Ref PublicSubnet1
- !Ref PublicSubnet2
- !Ref PublicSubnet3
UpdatePolicy:
AutoScalingRollingUpdate:
MaxBatchSize: "1"
MinInstancesInService: "0"
NodeGroupLaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateData:
IamInstanceProfile:
Arn: !GetAtt "NodeInstanceProfile.Arn"
ImageId: ami-0990970b056c619eb
InstanceType: m5.large
NetworkInterfaces:
- AssociatePublicIpAddress: true
DeviceIndex: 0
Groups:
- !Ref NodeSecurityGroup
- !Ref ClusterControlPlaneSecurityGroup
UserData:
Fn::Base64:
Fn::Sub:
- |
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${KubeName}
/opt/aws/bin/cfn-signal --exit-code $? \
--stack ${AWS::StackName} \
--resource NodeGroup \
--region ${AWS::Region}
- KubeName: !GetAtt KubeCreate.Name
LaunchTemplateName: !Sub "${AWS::StackName}"
# NodeGroup:
# Type: AWS::AutoScaling::AutoScalingGroup
# Properties:
# DesiredCapacity: 3
# LaunchConfigurationName: !Ref NodeLaunchConfig
# MinSize: 3
# MaxSize: 3
# VPCZoneIdentifier:
# - !Ref PrivateSubnet1A
# - !Ref PrivateSubnet2A
# - !Ref PrivateSubnet3A
# CreationPolicy:
# ResourceSignal:
# Count: 3
# Timeout: PT15M
# UpdatePolicy:
# AutoScalingRollingUpdate:
# WaitOnResourceSignals: false
# PauseTime: PT0S
# SuspendProcesses:
# - HealthCheck
# - ReplaceUnhealthy
# - AZRebalance
# - AlarmNotification
# - ScheduledActions
# AutoScalingScheduledAction:
# IgnoreUnmodifiedGroupSizeProperties: true
# NodeLaunchConfig:
# Type: AWS::AutoScaling::LaunchConfiguration
# Properties:
# BlockDeviceMappings:
# - DeviceName: /dev/xvda
# Ebs:
# DeleteOnTermination: true
# VolumeSize: 20
# VolumeType: gp2
# ImageId: ami-0990970b056c619eb
# InstanceType: m5.large
# IamInstanceProfile:
# Ref: NodeInstanceProfile
# SecurityGroups:
# - !Ref NodeSecurityGroup
# UserData:
# Fn::Base64:
# Fn::Sub:
# - |
# #!/bin/bash
# set -o xtrace
# /etc/eks/bootstrap.sh ${KubeName}
# /opt/aws/bin/cfn-signal --exit-code $? \
# --stack ${AWS::StackName} \
# --resource NodeGroup \
# --region ${AWS::Region}
# - KubeName: !GetAtt KubeCreate.Name
##### START CUSTOM RESOURCES #####
CrhelperLayer:
Type: AWS::Lambda::LayerVersion
Properties:
CompatibleRuntimes:
- python3.6
- python3.7
Content:
S3Bucket: amsxbg-us-east
S3Key: layers/crhelper/lambda.zip
KubectlLayer:
Type: AWS::Lambda::LayerVersion
Properties:
Content:
S3Bucket: amsxbg-us-east
S3Key: layers/kubecomplete/lambda.zip
KubeCreateLambda:
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 1024
Role: !GetAtt ControlPlaneProvisionRole.Arn
Runtime: python3.7
Timeout: 900
Layers: [!Ref KubectlLayer, !Ref CrhelperLayer]
Code:
S3Bucket: amsxbg-us-east
S3Key: functions/kubecreate/lambda.zip
KubeCreate:
Type: "Custom::KubeCreate"
Version: "1.0"
Properties:
ServiceToken: !GetAtt KubeCreateLambda.Arn
Config:
roleArn: !GetAtt ControlPlaneRole.Arn
name: GremlinGameDay
resourcesVpcConfig:
securityGroupIds:
- !GetAtt ClusterControlPlaneSecurityGroup.GroupId
subnetIds:
- !Ref PrivateSubnet1A
- !Ref PrivateSubnet2A
- !Ref PrivateSubnet3A
- !Ref PublicSubnet1
- !Ref PublicSubnet2
- !Ref PublicSubnet3
KubeNodeJoinLambda:
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 1024
Role: !GetAtt ControlPlaneProvisionRole.Arn
Runtime: python3.7
Timeout: 900
Layers: [!Ref KubectlLayer, !Ref CrhelperLayer]
Code:
S3Bucket: amsxbg-us-east
S3Key: functions/kubenodejoin/lambda.zip
Environment:
Variables:
CLUSTER_NAME: !GetAtt KubeCreate.Name
KubeNodeJoin:
DependsOn: [NodeInstanceRole, KubeCreate, NodeGroup, KubeApplyLambda]
Type: Custom::KubeNodeJoin
Properties:
ServiceToken: !GetAtt KubeNodeJoinLambda.Arn
Manifest:
Fn::Join:
- ""
- - '[{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"aws-auth","namespace":"kube-system"},"data":{"mapRoles":"[{\"rolearn\":\"'
- Fn::GetAtt:
- NodeInstanceRole
- Arn
- \",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]}]","mapUsers":"[]","mapAccounts":"[]"}}]
KubeApplyLambda:
DependsOn: [NodeGroup, KubeCreate]
Type: AWS::Lambda::Function
Properties:
Handler: lambda_function.lambda_handler
MemorySize: 1024
Role: !GetAtt ControlPlaneProvisionRole.Arn
Runtime: python3.7
Timeout: 900
Layers: [!Ref KubectlLayer, !Ref CrhelperLayer]
Code:
S3Bucket: amsxbg-us-east
S3Key: functions/kubeapply/lambda.zip
Environment:
Variables:
CLUSTER_NAME: !GetAtt KubeCreate.Name
KubeApply:
DependsOn: KubeCreate
Type: "Custom::KubeApply"
Version: "1.0"
Properties:
ServiceToken: !GetAtt KubeApplyLambda.Arn
Urls:
- "https://gist.githubusercontent.com/pahud/54906d24e7889a0adaed72ce4d4baefe/raw/680659932542f5b155fa0f4d2590896729784045/nginx.yaml"
##### END CUSTOM RESOURCES #####
Outputs:
EKSClusterName:
Description: EKS Cluster Name
Value: !GetAtt KubeCreate.Name
NAT1EIP:
Description: NAT 1 IP address
Value: !Ref "NAT1EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT1EIP"
NAT2EIP:
Description: NAT 2 IP address
Value: !Ref "NAT2EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT2EIP"
NAT3EIP:
Description: NAT 3 IP address
Value: !Ref "NAT3EIP"
Export:
Name: !Sub "${AWS::StackName}-NAT3EIP"
PrivateSubnet1AID:
Description: Private subnet 1A ID in Availability Zone 1
Value: !Ref "PrivateSubnet1A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet1AID"
PrivateSubnet2AID:
Description: Private subnet 2A ID in Availability Zone 2
Value: !Ref "PrivateSubnet2A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet2AID"
PrivateSubnet3AID:
Description: Private subnet 3A ID in Availability Zone 3
Value: !Ref "PrivateSubnet3A"
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet3AID"
PublicSubnet1ID:
Description: Public subnet 1 ID in Availability Zone 1
Value: !Ref "PublicSubnet1"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet1ID"
PublicSubnet2ID:
Description: Public subnet 2 ID in Availability Zone 2
Value: !Ref "PublicSubnet2"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet2ID"
PublicSubnet3ID:
Description: Public subnet 3 ID in Availability Zone 3
Value: !Ref "PublicSubnet3"
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet3ID"
PrivateSubnet1ARouteTable:
Value: !Ref "PrivateSubnet1ARouteTable"
Description: Private subnet 1A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet1ARouteTable"
PrivateSubnet2ARouteTable:
Value: !Ref "PrivateSubnet2ARouteTable"
Description: Private subnet 2A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet2ARouteTable"
PrivateSubnet3ARouteTable:
Value: !Ref "PrivateSubnet3ARouteTable"
Description: Private subnet 3A route table
Export:
Name: !Sub "${AWS::StackName}-PrivateSubnet3ARouteTable"
PublicSubnetRouteTable:
Value: !Ref "PublicSubnetRouteTable"
Description: Public subnet route table
Export:
Name: !Sub "${AWS::StackName}-PublicSubnetRouteTable"
VPCID:
Value: !Ref "VPC"
Description: VPC ID
Export:
Name: !Sub "${AWS::StackName}-VPCID"
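The `KubeNodeJoin` resource in the template splices the node role ARN into an `aws-auth` ConfigMap manifest with `Fn::Join`. A sketch that rebuilds the same string in Python, with a hypothetical ARN standing in for `!GetAtt NodeInstanceRole.Arn`, and checks it parses as the JSON the join is meant to produce:

```python
import json

# Hypothetical node instance role ARN; in the stack this comes from
# !GetAtt NodeInstanceRole.Arn.
role_arn = "arn:aws:iam::123456789012:role/NodeInstanceRole"

# The same three pieces the Fn::Join in KubeNodeJoin concatenates:
# manifest prefix, the role ARN, and the remainder of the ConfigMap.
manifest = (
    '[{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"aws-auth",'
    '"namespace":"kube-system"},"data":{"mapRoles":"[{\\"rolearn\\":\\"'
    + role_arn
    + '\\",\\"username\\":\\"system:node:{{EC2PrivateDNSName}}\\",'
    '\\"groups\\":[\\"system:bootstrappers\\",\\"system:nodes\\"]}]",'
    '"mapUsers":"[]","mapAccounts":"[]"}}]'
)

docs = json.loads(manifest)
# mapRoles is itself a JSON string nested inside the ConfigMap data.
map_roles = json.loads(docs[0]["data"]["mapRoles"])
print(map_roles[0]["rolearn"])  # → arn:aws:iam::123456789012:role/NodeInstanceRole
```

The double layer of escaping in the template exists because `mapRoles` is a string field of the ConfigMap whose value is itself JSON; applying this manifest is what lets instances assuming the node role join the cluster as `system:nodes`.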