- Create a bastion VM in your data center or in the cloud, with connectivity (usually a VPN) set up to the on-prem data center.
- Install tinyproxy on the bastion VM and pick a non-default port, since the default 8888 is too easy a target for spam bots. Set it up as a systemd service according to https://nxnjz.net/2019/10/how-to-setup-a-simple-proxy-server-with-tinyproxy-debian-10-buster/ (see the configuration and firewall sketch after this list). Make sure it works by validating with
curl --proxy http://127.0.0.1:<tinyproxy-port> https://httpbin.org/ip
I don't use any user authentication for the proxy, so I locked the firewall rules down to my laptop's IP/32 instead.
- Download the kubeconfig file for the k8s cluster to your laptop
- From your laptop, run
HTTPS_PROXY=<bastion-external-ip>:<tinyproxy-port> KUBECONFIG=my-kubeconfig kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-node-0   Ready    control-plane,master   32h   v1.20.4
k8s-node-1   Ready    <none>                 32h   v1.20.4
k8s-node-2   Ready    <none>                 32h   v1.20.4
k8s-node-3   Ready    <none>                 32h   v1.20.4
k8s-node-4   Ready    <none>                 32h   v1.20.4
k8s-node-5   Ready    <none>                 32h   v1.20.4
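A minimal sketch of the tinyproxy and firewall lock-down mentioned above; the port number, network name, and laptop IP are placeholders I made up, and the firewall command assumes a GCE bastion carrying a bastion network tag:
# /etc/tinyproxy/tinyproxy.conf (relevant lines only)
Port 18888                 # placeholder; pick your own non-default port
Allow <laptop-ip>          # or comment out every Allow line and rely on the firewall alone
# apply the config and enable the systemd service
sudo systemctl enable --now tinyproxy
sudo systemctl restart tinyproxy
# GCE example: only my laptop can reach the proxy port
gcloud compute firewall-rules create allow-tinyproxy-from-laptop \
  --network <network> \
  --allow tcp:18888 \
  --source-ranges <laptop-ip>/32 \
  --target-tags bastion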
According to the private GKE cluster documentation, at this point these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-0.
- The secondary range used for Pods.
Hence, we can use either a bastion VM in the primary range or a pod in the secondary range.
- tinyproxy with a bastion VM
- https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/safer_cluster_iap_bastion
- https://medium.com/google-cloud/accessing-gke-private-clusters-through-iap-14fedad694f8
- https://medium.com/google-cloud/gke-private-cluster-with-a-bastion-host-5480b44793a7
- privoxy in cluster
Given a private GKE cluster with public endpoint access disabled, here is one hack I did with Cloud IAP SSH forwarding via an internal bastion VM. This workaround uses no HTTP proxy and no external IP address in the user's VPC. It works well for one cluster, but for more than one cluster I would aim for deploying tinyproxy instead, as it is a cleaner solution that avoids dealing with the TLS SANs.
Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp for the latest info, e.g.
gcloud container clusters create "$CLUSTER_NAME" \
  --region ${REGION} \
  --network ${NETWORK} \
  --subnetwork ${SUBNET} \
  --machine-type "${GKE_NODE_TYPE}" \
  --num-nodes=1 \
  --enable-autoupgrade \
  --enable-autorepair \
  --preemptible \
  --enable-ip-alias \
  --cluster-secondary-range-name=pod-range \
  --services-secondary-range-name=service-range \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-ipv4-cidr=172.16.0.32/28
# Get the kubectl credentials for the GKE cluster.
KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"
Create a bastion VM in the same VPC with only an internal IP (no external IP address).
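A minimal sketch of creating such a bastion; the machine type, image, and names below are my assumptions, not necessarily the author's exact command:
# bastion with no external IP; it is reached only through IAP TCP forwarding
gcloud compute instances create bastion \
  --zone ${ZONE} \
  --subnet ${SUBNET} \
  --machine-type e2-small \
  --no-address \
  --image-family debian-12 \
  --image-project debian-cloud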
In the GCP console, grant the users/groups that should be able to reach the private instance from the last step (e.g. the IAP-Secured Tunnel User role).
e.g. 172.16.0.66 is the private master endpoint. The SSH traffic is tunnelled through Cloud IAP over TLS, then port-forwarded to the k8s master API endpoint:
gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" --tunnel-through-iap --ssh-flag="-L 8443:172.16.0.66:443"
kubernetes and kubernetes.default are among the SANs of the API server certificate, so either name works for the forwarded port. In the kubeconfig, set the server to:
server: https://kubernetes.default:8443
Then append the following line to /etc/hosts on your laptop:
127.0.0.1 kubernetes kubernetes.default
KUBECONFIG=~/.kube/dev kubectl get po --all-namespaces
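If you don't want to keep a terminal tied up by the tunnel, one option (a sketch, not necessarily the author's setup) is to pass -N so ssh only forwards the port, and background the whole command:
gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" \
  --tunnel-through-iap --ssh-flag="-N -L 8443:172.16.0.66:443" &
# bring it back with fg, or stop it with kill %1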
Very much like the GCP Cloud IAP approach, except it uses AWS SSM and a bastion to create the tunnel. This assumes the bastion's subnet is added as an inbound rule on TCP port 443 in the EKS control plane's cluster security group.
# start a tunnel, local port 4443, traffic will be forwarded to the private EKS endpoint
aws ssm start-session --target i-bastion --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host=<eks-controlplane-endpoint>,portNumber=443,localPortNumber=4443
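The <eks-controlplane-endpoint> is the cluster's private API endpoint hostname; one way to look it up (assuming the AWS CLI is configured for the right account and region) is:
# prints e.g. https://ABCDEF0123456789.gr7.us-east-1.eks.amazonaws.com; drop the https:// prefix for the host= parameter
aws eks describe-cluster --name <cluster-name> --query "cluster.endpoint" --output text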
Update the 127.0.0.1 line in /etc/hosts on your laptop so it reads:
127.0.0.1 localhost kubernetes kubernetes.default.svc kubernetes.default.svc.cluster.local
and point the kubeconfig server at the local forwarded port:
server: https://kubernetes.default.svc.cluster.local:4443
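Then verify through the tunnel; ~/.kube/eks-dev below is a hypothetical path to the modified kubeconfig:
KUBECONFIG=~/.kube/eks-dev kubectl get nodes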
@pydevops
Below are the steps I have followed and it worked. Previously I had exited the private VM, sorry for that. One thing we can improve: can we run the port-forward tunnel below in the background?
On the other terminal window:
2. Run get-credentials like below