Solution tested with a macOS client.
Note: The EC2 instance created by this CloudFormation template is pre-configured to provide the following:
- 64-bit (x86_64) Ubuntu 22.04 in the us-west-2 region
- Docker Engine
- EC2 Instance Connect support
- AWS Systems Manager (SSM) support
- A Security Group with port 6443 open to 0.0.0.0/0
From your local machine, run the following.
stack_name=kind-ec2-$(date +"%y%m%d%H%M")
aws cloudformation create-stack \
--stack-name ${stack_name} \
--template-url https://ven-eco.s3.amazonaws.com/cfn/utils/cfn-jumpbox-ubuntu.yaml \
--parameters ParameterKey=InstanceType,ParameterValue=t3.medium \
--capabilities CAPABILITY_IAM
aws cloudformation wait stack-create-complete --stack-name ${stack_name}
instance_id=$(
aws cloudformation describe-stacks \
--stack-name ${stack_name} \
--query "Stacks[0].Outputs[?OutputKey=='InstanceId'].OutputValue" \
--output text \
)
Stack creation will take about five minutes to return a prompt.
Note: In the next section, uncomment the networking: and disableDefaultCNI: true lines in the Cluster manifest if you intend to replace the default CNI (e.g. with Cilium).
Also, be aware that exposing port 6443 makes your cluster externally accessible at a known endpoint that matches the EC2 Security Group configuration; however, it also restricts you to one cluster per VM at this time.
From your local machine, establish a remote connection to your Ubuntu VM.
aws ec2-instance-connect ssh --os-user ubuntu --instance-id ${instance_id}
From within your remote connection, run the following.
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
k8s_name=k8s-$(date +"%y%m%d%H%M")
cat <<EOF | kind create cluster --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ${k8s_name}
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 6443
    hostPort: 6443
- role: worker
- role: worker
# networking:
#   disableDefaultCNI: true
EOF
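If you prefer to keep the manifest under version control rather than piping it inline, the same heredoc can be written to a file first. A minimal sketch; the cluster name k8s-demo and the filename kind-config.yaml are example values, not from the original:

```shell
# Write the same Cluster manifest to a file instead of piping it to kind.
# k8s-demo and kind-config.yaml are illustrative names.
k8s_name=k8s-demo
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ${k8s_name}
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 6443
    hostPort: 6443
- role: worker
- role: worker
EOF
# Quick structural check: one control-plane plus two workers.
grep -c 'role:' kind-config.yaml   # → 3
# Then create the cluster from the file:
# kind create cluster --config kind-config.yaml
```

Keeping the file around makes it easy to recreate the same cluster layout later.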
Close the remote connection.
exit
The following steps assume ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub are present on your local machine. If this is not the case, run the following command, accepting all defaults.
ssh-keygen
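A non-interactive equivalent is sketched below; it only generates a keypair when one is absent. The flags are standard OpenSSH options, and the empty passphrase mirrors "accepting all defaults":

```shell
# Generate an RSA keypair non-interactively, only if none exists yet.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
```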
From your local machine, download the aws-ssm-ec2-proxy-command helper script (see the qoomon/aws-ssm-ec2-proxy-command repository for details).
wget -O ~/.ssh/aws-ssm-ec2-proxy-command.sh https://raw.githubusercontent.com/qoomon/aws-ssm-ec2-proxy-command/master/ec2-instance-connect/aws-ssm-ec2-proxy-command.sh
chmod +x ~/.ssh/aws-ssm-ec2-proxy-command.sh
From your local machine, copy the kubeconfig
mkdir -p ~/.kube/
scp -i ~/.ssh/id_rsa \
-o ProxyCommand="~/.ssh/aws-ssm-ec2-proxy-command.sh %h %r %p ~/.ssh/id_rsa.pub" \
ubuntu@${instance_id}:~/.kube/config ~/.kube/config-${instance_id}
Note: In the next section, you will observe that we re-use locally the port number KinD assigned (the port in the kubeconfig's server: field), so the kubeconfig file requires no alteration. Also, the aws ssm start-session command is sent to the background so the prompt is not tied up and our variables can be reused.
From your local machine, set up SSM port forwarding (see the AWS Systems Manager Session Manager documentation for details) from your cluster back to your local machine.
local_port=$(yq '.clusters[0].cluster.server' ~/.kube/config-${instance_id} | cut -d':' -f3)
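If yq is not installed locally, the same port can be pulled out of the server: URL with sed instead. A sketch using a throwaway sample kubeconfig; the file contents and port 40123 are illustrative values only:

```shell
# Extract the API server port from a kubeconfig's server: URL without yq.
# The sample file and its port are for illustration, not from a real cluster.
sample=$(mktemp)
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:40123\n' > "${sample}"
local_port=$(sed -n 's/.*server: .*:\([0-9][0-9]*\)$/\1/p' "${sample}")
echo "${local_port}"   # → 40123
```

Point the same sed expression at ~/.kube/config-${instance_id} to get the real port.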
aws ssm start-session \
--target ${instance_id} \
--document-name AWS-StartPortForwardingSession \
--parameters "{\"portNumber\":[\"6443\"],\"localPortNumber\":[\"${local_port}\"]}" &
Once you see Waiting for connections..., hit RETURN to get your prompt back.
From your local machine, use kubectl as you normally would.
export KUBECONFIG=~/.kube/config-${instance_id}
kubectl cluster-info
You should ensure that any background tasks, such as the port-forwarding job, are killed.
jobs -l # list background tasks with their PIDs
kill <PID> # provide the PID to kill
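When you are done with the environment entirely, tear it down as well. A minimal cleanup sketch, assuming ${stack_name} and ${instance_id} are still set from earlier; the delete commands are shown commented out so you can run them deliberately:

```shell
# Stop the backgrounded SSM port-forwarding job, if one is running.
if kill %1 2>/dev/null; then
  status="port-forward stopped"
else
  status="no port-forward running"
fi
echo "${status}"
# Then delete the CloudFormation stack and the copied kubeconfig:
# aws cloudformation delete-stack --stack-name ${stack_name}
# aws cloudformation wait stack-delete-complete --stack-name ${stack_name}
# rm -f ~/.kube/config-${instance_id}
```

Deleting the stack terminates the EC2 instance, so the KinD cluster and anything on the VM is lost with it.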