I run several K8S clusters on EKS and by default do not set up inbound SSH to the nodes. Sometimes I need to get into a node to check things or run a one-off tool.
Rather than update my Terraform, rebuild the launch templates, and redeploy brand-new nodes, I decided to use Kubernetes to access each node directly.
https://github.com/alexei-led/nsenter
Attached is a DaemonSet manifest that mounts /home/ec2-user/.ssh/authorized_keys into a pod on each node. The pod then generates a new SSH keypair for its node, removes any old entries, and installs the new public key.
Update the manifest to reflect the proper user for your nodes. I use the Amazon Linux AMI, so the user is ec2-user; yours may be something else depending on your AMI.
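For reference, here is a rough sketch of what such a manifest can look like. Treat it as illustrative rather than the exact manifest: the node-connect name, the alpine image, and the inline key-management script are assumptions on my part, and the pod runs with hostNetwork so that an ssh to localhost lands on the node's own sshd.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-connect
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-connect
  template:
    metadata:
      labels:
        app: node-connect
    spec:
      hostNetwork: true                # "localhost" inside the pod is the node itself
      containers:
      - name: node-connect
        image: alpine:3.18
        command: ["/bin/sh", "-c"]
        args:
          - |
            apk add --no-cache openssh-client
            # generate a per-node keypair, tagged so old entries are easy to find
            mkdir -p /root/.ssh
            ssh-keygen -t ed25519 -N "" -C node-connect -f /root/.ssh/id_ed25519
            # drop any key this DaemonSet installed before, then append the new one
            grep -v node-connect /host-ssh/authorized_keys > /tmp/authorized_keys || true
            cat /root/.ssh/id_ed25519.pub >> /tmp/authorized_keys
            cat /tmp/authorized_keys > /host-ssh/authorized_keys
            # small helper so "kubectl exec ... connect" drops you onto the node
            printf '#!/bin/sh\nexec ssh -i /root/.ssh/id_ed25519 -o StrictHostKeyChecking=no ec2-user@localhost\n' > /usr/local/bin/connect
            chmod +x /usr/local/bin/connect
            exec tail -f /dev/null
        volumeMounts:
        - name: host-ssh
          mountPath: /host-ssh
      volumes:
      - name: host-ssh
        hostPath:
          path: /home/ec2-user/.ssh    # change for your AMI's user
          type: Directory

Writing authorized_keys back with "cat >" instead of replacing the file keeps the original owner and permissions intact, which sshd is picky about.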
kubectl apply -f daemonset.yml
# kubectl exec -it -n kube-system node-connect-q529c connect
Last login: Mon Aug 24 16:32:25 2020 from localhost
       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-2/
14 package(s) needed for security, out of 40 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-16-38-42 ~]$ whoami
ec2-user
[ec2-user@ip-172-16-38-42 ~]$ w
 16:40:20 up 39 days,  9:43,  1 user,  load average: 0.43, 0.58, 0.49
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
ec2-user pts/0    localhost        16:40    4.00s  0.02s  0.00s w
[ec2-user@ip-172-16-38-42 ~]$
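The "from localhost" in the login banner gives away how this works: with the pod on the host network, connect is presumably little more than an ssh to the loopback address as ec2-user using the key the pod generated. And because it is a DaemonSet, any node that joins the cluster later gets its own keypair and authorized_keys entry automatically, with no changes to the launch template.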