How to set up kubectl on a laptop for a private-endpoint-ONLY k8s cluster (AWS/GCP/on-prem)

HTTP tunnel

On-prem k8s cluster set up with a bastion VM

  1. Create a bastion VM in your data center, or in the cloud with connectivity (usually a VPN) to the on-prem data center.
  2. Install tinyproxy on the bastion VM and pick a random port, since the default 8888 is too easy a target for spam bots; set it up as a systemd service according to https://nxnjz.net/2019/10/how-to-setup-a-simple-proxy-server-with-tinyproxy-debian-10-buster/ (a minimal setup sketch follows the example output below). Make sure it works by validating with curl --proxy http://127.0.0.1:<tinyproxy-port> https://httpbin.org/ip. I don't use any user authentication for the proxy, so I locked down the firewall rules to my laptop IP/32.
  3. Download the kubeconfig file for the k8s cluster to your laptop.
  4. From your laptop, run:
HTTPS_PROXY=<bastion-external-ip>:<tinyproxy-port> KUBECONFIG=my-kubeconfig kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-node-0   Ready    control-plane,master   32h   v1.20.4
k8s-node-1   Ready    <none>                 32h   v1.20.4
k8s-node-2   Ready    <none>                 32h   v1.20.4
k8s-node-3   Ready    <none>                 32h   v1.20.4
k8s-node-4   Ready    <none>                 32h   v1.20.4
k8s-node-5   Ready    <none>                 32h   v1.20.4
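For step 2, here is a minimal sketch of the tinyproxy setup, assuming Debian/Ubuntu; the port 18888 and the Allow address are illustrative placeholders:

# install tinyproxy (Debian/Ubuntu)
sudo apt-get install -y tinyproxy

# minimal /etc/tinyproxy/tinyproxy.conf: a non-default port, a source-IP
# allow-list (there is no proxy auth), and CONNECT restricted to 443
sudo tee /etc/tinyproxy/tinyproxy.conf <<'EOF'
User tinyproxy
Group tinyproxy
Port 18888
Timeout 600
Allow <your-laptop-ip>
ConnectPort 443
EOF

# restart so the new config takes effect, and start on boot
sudo systemctl enable tinyproxy
sudo systemctl restart tinyproxy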

Private GKE cluster with HTTP proxy solutions

According to the private GKE cluster docs, at this point these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.

Hence, we can use a bastion VM in the primary range or a pod in the secondary range.

My own hackish way

Given a private GKE cluster with public endpoint access disabled, here is one hack I did with Cloud IAP SSH forwarding via an internal bastion VM. This workaround uses no HTTP proxy and no external IP address in the user VPC. It works well for one cluster, but for more than one cluster I would aim for deploying tinyproxy instead, as it is a cleaner solution that avoids dealing with the TLS SANs.

create a private GKE cluster

Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp for the latest info, e.g.:

gcloud container clusters create "$CLUSTER_NAME" \
  --region ${REGION} \
  --network ${NETWORK} \
  --subnetwork ${SUBNET} \
  --machine-type "${GKE_NODE_TYPE}" \
  --num-nodes=1 \
  --enable-autoupgrade \
  --enable-autorepair \
  --preemptible \
  --enable-ip-alias \
  --cluster-secondary-range-name=pod-range \
  --services-secondary-range-name=service-range \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks  \
  --master-ipv4-cidr=172.16.0.32/28

# Get the kubectl credentials for the GKE cluster.
KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"
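To look up the control plane's private endpoint IP (172.16.0.66 later in this write-up), something like this should work:

gcloud container clusters describe "$CLUSTER_NAME" --region "$REGION" \
  --format='value(privateClusterConfig.privateEndpoint)'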

create a private compute instance "bastion"

with only an internal IP (no external address), e.g. the sketch below
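A minimal sketch; the zone and machine type are illustrative:

gcloud compute instances create bastion \
  --project "${PROJECT}" \
  --zone "${ZONE}" \
  --machine-type e2-small \
  --network "${NETWORK}" \
  --subnet "${SUBNET}" \
  --no-address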

enable and set up Cloud IAP

in the GCP console, grant the users/groups that can access the private instance from the last step; roughly this amounts to the firewall rule and IAM binding sketched below
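A sketch of the equivalent gcloud commands; the firewall rule name and user are made up. 35.235.240.0/20 is the documented source range for IAP TCP forwarding:

# allow SSH from Cloud IAP's TCP forwarding range into the VPC
gcloud compute firewall-rules create allow-iap-ssh \
  --network "${NETWORK}" \
  --direction INGRESS \
  --source-ranges 35.235.240.0/20 \
  --allow tcp:22

# let a user create IAP tunnels
gcloud projects add-iam-policy-binding "${PROJECT}" \
  --member "user:alice@example.com" \
  --role roles/iap.tunnelResourceAccessor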

on the laptop, start the SSH forwarding proxy at local port 8443 via the Cloud IAP tunnel

e.g. 172.16.0.66 is the private master endpoint. The SSH traffic is tunnelled via Cloud IAP in TLS, then port-forwarded to the k8s master API endpoint:

gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" --tunnel-through-iap --ssh-flag="-L 8443:172.16.0.66:443"

on the laptop, modify the .kube/dev

kubernetes and kubernetes.default are both allowed names here (they are among the API server certificate's SANs), so point the server at the local forwarded port:

server: https://kubernetes.default:8443
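Instead of editing the file by hand, something like this should also work (the cluster entry name depends on your kubeconfig):

KUBECONFIG=~/.kube/dev kubectl config set-cluster <cluster-entry-name> --server=https://kubernetes.default:8443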

on the laptop, modify the /etc/hosts

Please append the following line:

127.0.0.1 kubernetes kubernetes.default
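e.g.:

echo '127.0.0.1 kubernetes kubernetes.default' | sudo tee -a /etc/hosts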

on the laptop, happy kubectl from here.

KUBECONFIG=~/.kube/dev kubectl get po --all-namespaces

Private-only EKS cluster

Very much like GCP Cloud IAP, except it uses AWS SSM and a bastion to create the tunnel. This assumes the bastion's subnet is allowed by an inbound rule on TCP port 443 in the EKS control plane's cluster security group (a sketch of that rule follows the tunnel command below).

# start a tunnel, local port 4443, traffic will be forwarded to the private EKS endpoint 
aws ssm start-session --target i-bastion --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host=<eks-controlplane-endpoint>,portNumber=443,localPortNumber=4443
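A sketch of the assumed security-group rule; the group ID and CIDR are illustrative:

# allow the bastion's subnet to reach the EKS control plane on 443
aws ec2 authorize-security-group-ingress \
  --group-id <cluster-security-group-id> \
  --protocol tcp \
  --port 443 \
  --cidr 10.0.1.0/24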

on the laptop, modify the /etc/hosts

127.0.0.1 localhost kubernetes kubernetes.default.svc kubernetes.default.svc.cluster.local

modify kubeconfig with the server pointing to the local port

server: https://kubernetes.default.svc.cluster.local:4443
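Then kubectl works from the laptop as usual (the kubeconfig path is illustrative):

KUBECONFIG=~/.kube/eks kubectl get nodes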
pydevops commented Jul 13, 2021

Interesting, I have found this GCP recommended practice as well:
https://cloud.google.com/architecture/creating-kubernetes-engine-private-clusters-with-net-proxies

Essentially it deploys a pod that provides an HTTP proxy (Privoxy, on its default port 8118) to the API server residing in the Google-managed control plane. An example of how the proxy is used in this particular case:

$  https_proxy=10.244.128.9:8118 kubectl -n qbert-dev get secrets

Good point, thanks. I had it linked in the gist as well. The way it works is by exposing a k8s LoadBalancer Service implemented as an ILB (internal load balancer), with the Privoxy deployment as its backend. The https_proxy address needs to be an RFC 1918 internal address.
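For reference, a minimal sketch of that ILB Service, assuming a proxy Deployment labeled app: k8s-api-proxy listening on 8118 (all names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: k8s-api-proxy
  annotations:
    # asks GKE for an internal (RFC 1918) load balancer
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: k8s-api-proxy
  ports:
  - port: 8118
    targetPort: 8118
EOF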
